Mirror of https://github.com/saltstack/salt.git

Merge branch 'develop' into release-note-formatting
Commit 61280cc0c3: 43 changed files with 6267 additions and 131 deletions
@ -46,6 +46,21 @@ noon PST so the Stormpath external authentication module has been removed.

https://stormpath.com/oktaplusstormpath

New Grains
----------

New core grains have been added to expose any storage initiator setting.

The new grains added are:

* ``fc_wwn``: Show all fibre channel world wide port names for a host
* ``iscsi_iqn``: Show the iSCSI IQN name for a host
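A quick way to inspect the new grains from the master is Salt's Python client; the sketch below is purely illustrative (the target and the returned values are made up):

.. code-block:: python

    # Illustrative only: query the new storage initiator grains on all minions.
    import salt.client

    local = salt.client.LocalClient()
    result = local.cmd('*', 'grains.item', ['iscsi_iqn', 'fc_wwn'])
    # e.g. {'minion1': {'iscsi_iqn': ['iqn.1993-08.org.debian:01:abcdef'],
    #                   'fc_wwn': ['10:00:00:90:fa:00:00:01']}}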
New Modules
-----------

- :mod:`salt.modules.purefa <salt.modules.purefa>`

New NaCl Renderer
-----------------

@ -110,6 +125,194 @@ file. For example:

These commands will run in sequence **before** the bootstrap script is executed.

New pillar/master_tops module called saltclass
----------------------------------------------

This module clones the behaviour of reclass (http://reclass.pantsfullofunix.net/) without the need for an external app, and adds several features to improve flexibility.
Saltclass lets you define your nodes from simple ``yaml`` files (``.yml``) through hierarchical class inheritance, with the possibility to override pillars down the tree.

**Features**

- Define your nodes through hierarchical class inheritance
- Reuse your reclass data with minimal modifications

  - applications => states
  - parameters => pillars

- Use Jinja templating in your yaml definitions
- Access to the following Salt objects in Jinja

  - ``__opts__``
  - ``__salt__``
  - ``__grains__``
  - ``__pillars__``
  - ``minion_id``

- Choose how to merge or override your lists using the ``^`` character (see examples)
- Expand variables with ``${}``, with the possibility to escape them as ``\${}`` if needed (see examples)
- Ignores a missing node/class and simply returns an empty result without breaking the pillar module completely - the failure is logged

An example subset of data is available here: http://git.mauras.ch/salt/saltclass/src/master/examples

========================== ===========
Terms usable in yaml files Description
========================== ===========
classes                    A list of classes that will be processed in order
states                     A list of states that will be returned by the master_tops function
pillars                    A yaml dictionary that will be returned by the ext_pillar function
environment                Node saltenv that will be used by master_tops
========================== ===========

A class consists of:

- zero or more parent classes
- zero or more states
- any number of pillars

A child class can override pillars from a parent class.
A node definition is a class in itself, with an added ``environment`` parameter for the ``saltenv`` definition.

**Class names**

Class names mimic the Salt way of defining states and pillar files.
This means that the class name ``default.users`` will correspond to one of these files:

- ``<saltclass_path>/classes/default/users.yml``
- ``<saltclass_path>/classes/default/users/init.yml``

**Saltclass tree**

A saltclass tree would look like this:

.. code-block:: text

    <saltclass_path>
    ├── classes
    │   ├── app
    │   │   ├── borgbackup.yml
    │   │   └── ssh
    │   │       └── server.yml
    │   ├── default
    │   │   ├── init.yml
    │   │   ├── motd.yml
    │   │   └── users.yml
    │   ├── roles
    │   │   ├── app.yml
    │   │   └── nginx
    │   │       ├── init.yml
    │   │       └── server.yml
    │   └── subsidiaries
    │       ├── gnv.yml
    │       ├── qls.yml
    │       └── zrh.yml
    └── nodes
        ├── geneva
        │   └── gnv.node1.yml
        ├── lausanne
        │   ├── qls.node1.yml
        │   └── qls.node2.yml
        ├── node127.yml
        └── zurich
            ├── zrh.node1.yml
            ├── zrh.node2.yml
            └── zrh.node3.yml

**Examples**

``<saltclass_path>/nodes/lausanne/qls.node1.yml``

.. code-block:: yaml

    environment: base

    classes:
    {% for class in ['default'] %}
      - {{ class }}
    {% endfor %}
      - subsidiaries.{{ __grains__['id'].split('.')[0] }}

``<saltclass_path>/classes/default/init.yml``

.. code-block:: yaml

    classes:
      - default.users
      - default.motd

    states:
      - openssh

    pillars:
      default:
        network:
          dns:
            srv1: 192.168.0.1
            srv2: 192.168.0.2
          domain: example.com
        ntp:
          srv1: 192.168.10.10
          srv2: 192.168.10.20

``<saltclass_path>/classes/subsidiaries/gnv.yml``

.. code-block:: yaml

    pillars:
      default:
        network:
          sub: Geneva
          dns:
            srv1: 10.20.0.1
            srv2: 10.20.0.2
            srv3: 192.168.1.1
          domain: gnv.example.com
        users:
          adm1:
            uid: 1210
            gid: 1210
            gecos: 'Super user admin1'
            homedir: /srv/app/adm1
          adm3:
            uid: 1203
            gid: 1203
            gecos: 'Super user admin3'

Variable expansions:

Escaped variables are rendered as is - ``${test}``

Missing variables are rendered as is - ``${net:dns:srv2}``

.. code-block:: yaml

    pillars:
      app:
        config:
          dns:
            srv1: ${default:network:dns:srv1}
            srv2: ${net:dns:srv2}
          uri: https://application.domain/call?\${test}
        prod_parameters:
          - p1
          - p2
          - p3
      pkg:
        - app-core
        - app-backend

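The expansion rules above can be pictured with a small, purely illustrative Python sketch. This is not the saltclass implementation; the helper below only demonstrates the lookup-or-leave-as-is behaviour for ``${a:b:c}`` references and escaped ``\${}`` forms:

.. code-block:: python

    import re

    def expand(value, pillars):
        '''Replace ${a:b:c} with the pillar value at that path, leaving
        escaped (\\${...}) and unresolvable references untouched.'''
        def lookup(match):
            node = pillars
            for part in match.group(1).split(':'):
                if not isinstance(node, dict) or part not in node:
                    return match.group(0)      # missing -> rendered as is
                node = node[part]
            return str(node)
        # references preceded by a backslash are considered escaped
        return re.sub(r'(?<!\\)\$\{([^}]+)\}', lookup, value)

    pillars = {'default': {'network': {'dns': {'srv1': '192.168.0.1'}}}}
    print(expand('${default:network:dns:srv1}', pillars))  # 192.168.0.1
    print(expand('${net:dns:srv2}', pillars))              # ${net:dns:srv2}
    print(expand(r'\${test}', pillars))                     # left untouched (escaped)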
List override:

Using ``^`` as the first entry replaces the inherited list; not using ``^`` as the first entry will simply merge the lists.

.. code-block:: yaml

    pillars:
      app:
        pkg:
          - ^
          - app-frontend

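For illustration only (this is not the saltclass code), the merge/override rule for lists can be sketched like this:

.. code-block:: python

    def merge_list(parent, child):
        '''Illustrative only: '^' as the first child entry overrides the
        parent list, otherwise parent and child entries are merged.'''
        if child and child[0] == '^':
            return child[1:]
        return parent + child

    print(merge_list(['app-core', 'app-backend'], ['^', 'app-frontend']))  # ['app-frontend']
    print(merge_list(['app-core', 'app-backend'], ['app-frontend']))       # merged list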
**Known limitation**

Currently you can't have both a variable and an escaped variable in the same string, as the escaped one will not be rendered correctly: ``\${xx}`` will stay as is instead of being rendered as ``${xx}``.

Newer PyWinRM Versions
----------------------

219 salt/config/schemas/esxi.py Normal file
@ -0,0 +1,219 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
:codeauthor: :email:`Alexandru Bleotu (alexandru.bleotu@morganstanley.com)`
|
||||
|
||||
|
||||
salt.config.schemas.esxi
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
ESXi host configuration schemas
|
||||
'''
|
||||
|
||||
# Import Python libs
|
||||
from __future__ import absolute_import
|
||||
|
||||
# Import Salt libs
|
||||
from salt.utils.schema import (DefinitionsSchema,
|
||||
Schema,
|
||||
ComplexSchemaItem,
|
||||
ArrayItem,
|
||||
IntegerItem,
|
||||
BooleanItem,
|
||||
StringItem,
|
||||
OneOfItem)
|
||||
|
||||
|
||||
class VMwareScsiAddressItem(StringItem):
|
||||
pattern = r'vmhba\d+:C\d+:T\d+:L\d+'
|
||||
|
||||
|
||||
class DiskGroupDiskScsiAddressItem(ComplexSchemaItem):
|
||||
'''
|
||||
Schema item of an ESXi host disk group containing disk SCSI addresses
|
||||
'''
|
||||
|
||||
title = 'Diskgroup Disk Scsi Address Item'
|
||||
description = 'ESXi host diskgroup item containing disk SCSI addresses'
|
||||
|
||||
cache_scsi_addr = VMwareScsiAddressItem(
|
||||
title='Cache Disk Scsi Address',
|
||||
description='Specifies the SCSI address of the cache disk',
|
||||
required=True)
|
||||
|
||||
capacity_scsi_addrs = ArrayItem(
|
||||
title='Capacity Scsi Addresses',
|
||||
description='Array with the SCSI addresses of the capacity disks',
|
||||
items=VMwareScsiAddressItem(),
|
||||
min_items=1)
|
||||
|
||||
|
||||
class DiskGroupDiskIdItem(ComplexSchemaItem):
|
||||
'''
|
||||
Schema item of an ESXi host disk group containing disk ids
|
||||
'''
|
||||
|
||||
title = 'Diskgroup Disk Id Item'
|
||||
description = 'ESXi host diskgroup item containing disk ids'
|
||||
|
||||
cache_id = StringItem(
|
||||
title='Cache Disk Id',
|
||||
description='Specifies the id of the cache disk',
|
||||
pattern=r'[^\s]+')
|
||||
|
||||
capacity_ids = ArrayItem(
|
||||
title='Capacity Disk Ids',
|
||||
description='Array with the ids of the capacity disks',
|
||||
items=StringItem(pattern=r'[^\s]+'),
|
||||
min_items=1)
|
||||
|
||||
|
||||
class DiskGroupsDiskScsiAddressSchema(DefinitionsSchema):
|
||||
'''
|
||||
Schema of ESXi host diskgroups containing disk SCSI addresses
|
||||
'''
|
||||
|
||||
title = 'Diskgroups Disk Scsi Address Schema'
|
||||
description = 'ESXi host diskgroup schema containing disk SCSI addresses'
|
||||
diskgroups = ArrayItem(
|
||||
title='Diskgroups',
|
||||
description='List of diskgroups in an ESXi host',
|
||||
min_items=1,
|
||||
items=DiskGroupDiskScsiAddressItem(),
|
||||
required=True)
|
||||
erase_disks = BooleanItem(
|
||||
title='Erase Diskgroup Disks',
|
||||
required=True)
|
||||
|
||||
|
||||
class DiskGroupsDiskIdSchema(DefinitionsSchema):
|
||||
'''
|
||||
Schema of ESXi host diskgroups containing disk ids
|
||||
'''
|
||||
|
||||
title = 'Diskgroups Disk Id Schema'
|
||||
description = 'ESXi host diskgroup schema containing disk ids'
|
||||
diskgroups = ArrayItem(
|
||||
title='DiskGroups',
|
||||
description='List of disk groups in an ESXi host',
|
||||
min_items=1,
|
||||
items=DiskGroupDiskIdItem(),
|
||||
required=True)
|
||||
|
||||
|
||||
class VmfsDatastoreDiskIdItem(ComplexSchemaItem):
|
||||
'''
|
||||
Schema item of a VMFS datastore referencing a backing disk id
|
||||
'''
|
||||
|
||||
title = 'VMFS Datastore Disk Id Item'
|
||||
description = 'VMFS datastore item referencing a backing disk id'
|
||||
name = StringItem(
|
||||
title='Name',
|
||||
description='Specifies the name of the VMFS datastore',
|
||||
required=True)
|
||||
backing_disk_id = StringItem(
|
||||
title='Backing Disk Id',
|
||||
description=('Specifies the id of the disk backing the VMFS '
|
||||
'datastore'),
|
||||
pattern=r'[^\s]+',
|
||||
required=True)
|
||||
vmfs_version = IntegerItem(
|
||||
title='VMFS Version',
|
||||
description='VMFS version',
|
||||
enum=[1, 2, 3, 5])
|
||||
|
||||
|
||||
class VmfsDatastoreDiskScsiAddressItem(ComplexSchemaItem):
|
||||
'''
|
||||
Schema item of a VMFS datastore referencing a backing disk SCSI address
|
||||
'''
|
||||
|
||||
title = 'VMFS Datastore Disk Scsi Address Item'
|
||||
description = 'VMFS datastore item referencing a backing disk SCSI address'
|
||||
name = StringItem(
|
||||
title='Name',
|
||||
description='Specifies the name of the VMFS datastore',
|
||||
required=True)
|
||||
backing_disk_scsi_addr = VMwareScsiAddressItem(
|
||||
title='Backing Disk Scsi Address',
|
||||
description=('Specifies the SCSI address of the disk backing the VMFS '
|
||||
'datastore'),
|
||||
required=True)
|
||||
vmfs_version = IntegerItem(
|
||||
title='VMFS Version',
|
||||
description='VMFS version',
|
||||
enum=[1, 2, 3, 5])
|
||||
|
||||
|
||||
class VmfsDatastoreSchema(DefinitionsSchema):
|
||||
'''
|
||||
Schema of a VMFS datastore
|
||||
'''
|
||||
|
||||
title = 'VMFS Datastore Schema'
|
||||
description = 'Schema of a VMFS datastore'
|
||||
datastore = OneOfItem(
|
||||
items=[VmfsDatastoreDiskScsiAddressItem(),
|
||||
VmfsDatastoreDiskIdItem()],
|
||||
required=True)
|
||||
|
||||
|
||||
class HostCacheSchema(DefinitionsSchema):
|
||||
'''
|
||||
Schema of ESXi host cache
|
||||
'''
|
||||
|
||||
title = 'Host Cache Schema'
|
||||
description = 'Schema of the ESXi host cache'
|
||||
enabled = BooleanItem(
|
||||
title='Enabled',
|
||||
required=True)
|
||||
datastore = VmfsDatastoreDiskScsiAddressItem(required=True)
|
||||
swap_size = StringItem(
|
||||
title='Host cache swap size (in GB or %)',
|
||||
pattern=r'(\d+GiB)|(([0-9]|([1-9][0-9])|100)%)',
|
||||
required=True)
|
||||
erase_backing_disk = BooleanItem(
|
||||
title='Erase Backup Disk',
|
||||
required=True)
|
||||
|
||||
|
||||
class SimpleHostCacheSchema(Schema):
|
||||
'''
|
||||
Simplified Schema of ESXi host cache
|
||||
'''
|
||||
|
||||
title = 'Simple Host Cache Schema'
|
||||
description = 'Simplified schema of the ESXi host cache'
|
||||
enabled = BooleanItem(
|
||||
title='Enabled',
|
||||
required=True)
|
||||
datastore_name = StringItem(title='Datastore Name',
|
||||
required=True)
|
||||
swap_size_MiB = IntegerItem(title='Host cache swap size in MiB',
|
||||
minimum=1)
|
||||
|
||||
|
||||
class EsxiProxySchema(Schema):
|
||||
'''
|
||||
Schema of the esxi proxy input
|
||||
'''
|
||||
|
||||
title = 'Esxi Proxy Schema'
|
||||
description = 'Esxi proxy schema'
|
||||
additional_properties = False
|
||||
proxytype = StringItem(required=True,
|
||||
enum=['esxi'])
|
||||
host = StringItem(pattern=r'[^\s]+') # Used when connecting directly
|
||||
vcenter = StringItem(pattern=r'[^\s]+') # Used when connecting via a vCenter
|
||||
esxi_host = StringItem()
|
||||
username = StringItem()
|
||||
passwords = ArrayItem(min_items=1,
|
||||
items=StringItem(),
|
||||
unique_items=True)
|
||||
mechanism = StringItem(enum=['userpass', 'sspi'])
|
||||
# TODO Should be changed when anyOf is supported for schemas
|
||||
domain = StringItem()
|
||||
principal = StringItem()
|
||||
protocol = StringItem()
|
||||
port = IntegerItem(minimum=1)
|
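The schema classes above are consumed by serializing them to a JSON schema and validating input with ``jsonschema``, which is exactly how the ESXi state functions later in this changeset use them. A minimal sketch (the disk ids in the payload are placeholders):

.. code-block:: python

    import jsonschema
    from salt.config.schemas.esxi import DiskGroupsDiskIdSchema

    # Hypothetical diskgroup spec; the ids are placeholders.
    payload = {'diskgroups': [{'cache_id': 'naa.0000000000000001',
                               'capacity_ids': ['naa.0000000000000002']}]}

    try:
        jsonschema.validate(payload, DiskGroupsDiskIdSchema.serialize())
    except jsonschema.exceptions.ValidationError as exc:
        print('invalid diskgroup spec: {0}'.format(exc))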
|
@ -14,6 +14,8 @@ from __future__ import absolute_import
|
|||
|
||||
# Import Salt libs
|
||||
from salt.utils.schema import (Schema,
|
||||
ArrayItem,
|
||||
IntegerItem,
|
||||
StringItem)
|
||||
|
||||
|
||||
|
@ -31,3 +33,25 @@ class VCenterEntitySchema(Schema):
|
|||
vcenter = StringItem(title='vCenter',
|
||||
description='Specifies the vcenter hostname',
|
||||
required=True)
|
||||
|
||||
|
||||
class VCenterProxySchema(Schema):
|
||||
'''
|
||||
Schema for the configuration for the proxy to connect to a VCenter.
|
||||
'''
|
||||
title = 'VCenter Proxy Connection Schema'
|
||||
description = 'Schema that describes the connection to a VCenter'
|
||||
additional_properties = False
|
||||
proxytype = StringItem(required=True,
|
||||
enum=['vcenter'])
|
||||
vcenter = StringItem(required=True, pattern=r'[^\s]+')
|
||||
mechanism = StringItem(required=True, enum=['userpass', 'sspi'])
|
||||
username = StringItem()
|
||||
passwords = ArrayItem(min_items=1,
|
||||
items=StringItem(),
|
||||
unique_items=True)
|
||||
|
||||
domain = StringItem()
|
||||
principal = StringItem(default='host')
|
||||
protocol = StringItem(default='https')
|
||||
port = IntegerItem(minimum=1)
|
||||
|
|
|
@ -442,6 +442,18 @@ class VMwareObjectRetrievalError(VMwareSaltError):
|
|||
'''
|
||||
|
||||
|
||||
class VMwareObjectExistsError(VMwareSaltError):
|
||||
'''
|
||||
Used when a VMware object already exists
|
||||
'''
|
||||
|
||||
|
||||
class VMwareObjectNotFoundError(VMwareSaltError):
|
||||
'''
|
||||
Used when a VMware object was not found
|
||||
'''
|
||||
|
||||
|
||||
class VMwareApiError(VMwareSaltError):
|
||||
'''
|
||||
Used when representing a generic VMware API error
|
||||
|
|
|
@ -56,3 +56,7 @@ def cmd(command, *args, **kwargs):
|
|||
proxy_cmd = proxy_prefix + '.ch_config'
|
||||
|
||||
return __proxy__[proxy_cmd](command, *args, **kwargs)
|
||||
|
||||
|
||||
def get_details():
|
||||
return __proxy__['esxi.get_details']()
|
||||
|
|
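The new ``esxi.get_details`` wrapper simply returns the proxy's cached connection details; the ESXi state functions later in this changeset use it to decide which host they are managing. A self-contained, illustrative sketch of that selection logic (hostnames are made up):

.. code-block:: python

    def _target_host(proxy_details):
        # When the proxy is connected through a vCenter, the managed host is
        # the configured esxi_host; otherwise it is the directly-connected host.
        if proxy_details.get('vcenter'):
            return proxy_details['esxi_host']
        return proxy_details['host']

    print(_target_host({'host': 'esxi01.example.com'}))
    print(_target_host({'vcenter': 'vc01.example.com', 'esxi_host': 'esxi01.example.com'}))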
|
@ -68,9 +68,7 @@ class _Puppet(object):
|
|||
self.vardir = 'C:\\ProgramData\\PuppetLabs\\puppet\\var'
|
||||
self.rundir = 'C:\\ProgramData\\PuppetLabs\\puppet\\run'
|
||||
self.confdir = 'C:\\ProgramData\\PuppetLabs\\puppet\\etc'
|
||||
self.useshell = True
|
||||
else:
|
||||
self.useshell = False
|
||||
self.puppet_version = __salt__['cmd.run']('puppet --version')
|
||||
if 'Enterprise' in self.puppet_version:
|
||||
self.vardir = '/var/opt/lib/pe-puppet'
|
||||
|
@ -106,7 +104,10 @@ class _Puppet(object):
|
|||
' --{0} {1}'.format(k, v) for k, v in six.iteritems(self.kwargs)]
|
||||
)
|
||||
|
||||
return '{0} {1}'.format(cmd, args)
|
||||
# Ensure that the puppet call will return 0 in case of exit code 2
|
||||
if salt.utils.platform.is_windows():
|
||||
return 'cmd /V:ON /c {0} {1} ^& if !ERRORLEVEL! EQU 2 (EXIT 0) ELSE (EXIT /B)'.format(cmd, args)
|
||||
return '({0} {1}) || test $? -eq 2'.format(cmd, args)
|
||||
|
||||
def arguments(self, args=None):
|
||||
'''
|
||||
|
@ -169,12 +170,7 @@ def run(*args, **kwargs):
|
|||
|
||||
puppet.kwargs.update(salt.utils.args.clean_kwargs(**kwargs))
|
||||
|
||||
ret = __salt__['cmd.run_all'](repr(puppet), python_shell=puppet.useshell)
|
||||
if ret['retcode'] in [0, 2]:
|
||||
ret['retcode'] = 0
|
||||
else:
|
||||
ret['retcode'] = 1
|
||||
|
||||
ret = __salt__['cmd.run_all'](repr(puppet), python_shell=True)
|
||||
return ret
|
||||
|
||||
|
||||
|
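The change above wraps the puppet call so that puppet's detailed exit code 2 ("succeeded, changes applied") is treated as success on both Windows and POSIX shells. A plain Python 3 sketch of the same mapping, for illustration only:

.. code-block:: python

    import subprocess

    def run_puppet(cmd):
        # Puppet's detailed exit codes use 2 for "succeeded with changes";
        # map it to 0 just like the generated shell wrappers do.
        proc = subprocess.run(cmd, shell=True)
        return 0 if proc.returncode in (0, 2) else proc.returncode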
|
|
@ -31,6 +31,7 @@ Installation Prerequisites
|
|||
three methods.
|
||||
|
||||
1) From the minion config
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
pure_tags:
|
||||
|
|
|
@ -851,7 +851,7 @@ def list_policies(vhost="/", runas=None):
|
|||
return ret
|
||||
|
||||
|
||||
def set_policy(vhost, name, pattern, definition, priority=None, runas=None):
|
||||
def set_policy(vhost, name, pattern, definition, priority=None, apply_to=None, runas=None):
|
||||
'''
|
||||
Set a policy based on rabbitmqctl set_policy.
|
||||
|
||||
|
@ -874,6 +874,8 @@ def set_policy(vhost, name, pattern, definition, priority=None, runas=None):
|
|||
cmd = [RABBITMQCTL, 'set_policy', '-p', vhost]
|
||||
if priority:
|
||||
cmd.extend(['--priority', priority])
|
||||
if apply_to:
|
||||
cmd.extend(['--apply-to', apply_to])
|
||||
cmd.extend([name, pattern, definition])
|
||||
res = __salt__['cmd.run_all'](cmd, runas=runas, python_shell=False)
|
||||
log.debug('Set policy: {0}'.format(res['stdout']))
|
||||
|
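The new ``apply_to`` argument mirrors rabbitmqctl's ``--apply-to`` flag. A hedged usage sketch from the master (the minion id and policy values are illustrative):

.. code-block:: python

    import salt.client

    local = salt.client.LocalClient()
    # Apply a mirroring policy to queues only (illustrative values).
    local.cmd('rabbit-minion', 'rabbitmq.set_policy',
              ['/', 'HA', '.*', '{"ha-mode": "all"}'],
              kwarg={'apply_to': 'queues', 'runas': 'rabbitmq'})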
29 salt/modules/vcenter.py Normal file
@ -0,0 +1,29 @@
# -*- coding: utf-8 -*-
'''
Module used to access the vcenter proxy connection methods
'''
from __future__ import absolute_import

# Import python libs
import logging
import salt.utils


log = logging.getLogger(__name__)

__proxyenabled__ = ['vcenter']
# Define the module's virtual name
__virtualname__ = 'vcenter'


def __virtual__():
    '''
    Only work on proxy
    '''
    if salt.utils.is_proxy():
        return __virtualname__
    return False


def get_details():
    return __proxy__['vcenter.get_details']()
(File diff suppressed because it is too large.)
@ -343,14 +343,15 @@ def ext_pillar(minion_id,
|
|||
if minion_id in match:
|
||||
ngroup_dir = os.path.join(
|
||||
nodegroups_dir, str(nodegroup))
|
||||
ngroup_pillar.update(
|
||||
ngroup_pillar = salt.utils.dictupdate.merge(ngroup_pillar,
|
||||
_construct_pillar(ngroup_dir,
|
||||
follow_dir_links,
|
||||
keep_newline,
|
||||
render_default,
|
||||
renderer_blacklist,
|
||||
renderer_whitelist,
|
||||
template)
|
||||
template),
|
||||
strategy='recurse'
|
||||
)
|
||||
else:
|
||||
if debug is True:
|
||||
|
|
|
@ -398,6 +398,13 @@ def ext_pillar(minion_id, pillar, *repos): # pylint: disable=unused-argument
|
|||
False
|
||||
)
|
||||
for pillar_dir, env in six.iteritems(git_pillar.pillar_dirs):
|
||||
# Map env if env == '__env__' before checking the env value
|
||||
if env == '__env__':
|
||||
env = opts.get('pillarenv') \
|
||||
or opts.get('environment') \
|
||||
or opts.get('git_pillar_base')
|
||||
log.debug('__env__ maps to %s', env)
|
||||
|
||||
# If pillarenv is set, only grab pillars with that match pillarenv
|
||||
if opts['pillarenv'] and env != opts['pillarenv']:
|
||||
log.debug(
|
||||
|
@ -418,12 +425,6 @@ def ext_pillar(minion_id, pillar, *repos): # pylint: disable=unused-argument
|
|||
'env \'%s\'', pillar_dir, env
|
||||
)
|
||||
|
||||
if env == '__env__':
|
||||
env = opts.get('pillarenv') \
|
||||
or opts.get('environment') \
|
||||
or opts.get('git_pillar_base')
|
||||
log.debug('__env__ maps to %s', env)
|
||||
|
||||
pillar_roots = [pillar_dir]
|
||||
|
||||
if __opts__['git_pillar_includes']:
|
||||
|
|
62 salt/pillar/saltclass.py Normal file
@ -0,0 +1,62 @@
# -*- coding: utf-8 -*-
'''
SaltClass Pillar Module

.. code-block:: yaml

    ext_pillar:
      - saltclass:
        - path: /srv/saltclass

'''

# import python libs
from __future__ import absolute_import
import salt.utils.saltclass as sc
import logging

log = logging.getLogger(__name__)


def __virtual__():
    '''
    This module has no external dependencies
    '''
    return True


def ext_pillar(minion_id, pillar, *args, **kwargs):
    '''
    Node definitions path will be retrieved from args - or set to default -
    then added to the 'salt_data' dict that is passed to the 'get_pillars'
    function. The 'salt_data' dict is a convenient way to pass all the
    required data to the function.
    It contains:
        - __opts__
        - __salt__
        - __grains__
        - __pillar__
        - minion_id
        - path

    If successful, the function will return a pillar dict for minion_id.
    '''
    # If path has not been set, make a default
    for i in args:
        if 'path' not in i:
            path = '/srv/saltclass'
            i['path'] = path
            log.warning('path variable unset, using default: {0}'.format(path))
        else:
            path = i['path']

    # Create a dict that will contain our salt dicts to pass to saltclass
    salt_data = {
        '__opts__': __opts__,
        '__salt__': __salt__,
        '__grains__': __grains__,
        '__pillar__': pillar,
        'minion_id': minion_id,
        'path': path
    }

    return sc.get_pillars(minion_id, salt_data)
|
|
@ -273,13 +273,22 @@ for standing up an ESXi host from scratch.
|
|||
# Import Python Libs
|
||||
from __future__ import absolute_import
|
||||
import logging
|
||||
import os
|
||||
|
||||
# Import Salt Libs
|
||||
from salt.exceptions import SaltSystemExit
|
||||
from salt.exceptions import SaltSystemExit, InvalidConfigError
|
||||
from salt.config.schemas.esxi import EsxiProxySchema
|
||||
from salt.utils.dictupdate import merge
|
||||
|
||||
# This must be present or the Salt loader won't load this module.
|
||||
__proxyenabled__ = ['esxi']
|
||||
|
||||
# External libraries
|
||||
try:
|
||||
import jsonschema
|
||||
HAS_JSONSCHEMA = True
|
||||
except ImportError:
|
||||
HAS_JSONSCHEMA = False
|
||||
|
||||
# Variables are scoped to this module so we can have persistent data
|
||||
# across calls to fns in here.
|
||||
|
@ -288,7 +297,6 @@ DETAILS = {}
|
|||
|
||||
# Set up logging
|
||||
log = logging.getLogger(__file__)
|
||||
|
||||
# Define the module's virtual name
|
||||
__virtualname__ = 'esxi'
|
||||
|
||||
|
@ -297,7 +305,7 @@ def __virtual__():
|
|||
'''
|
||||
Only load if the ESXi execution module is available.
|
||||
'''
|
||||
if 'vsphere.system_info' in __salt__:
|
||||
if HAS_JSONSCHEMA:
|
||||
return __virtualname__
|
||||
|
||||
return False, 'The ESXi Proxy Minion module did not load.'
|
||||
|
@ -309,32 +317,104 @@ def init(opts):
|
|||
ESXi devices, the host, login credentials, and, if configured,
|
||||
the protocol and port are cached.
|
||||
'''
|
||||
if 'host' not in opts['proxy']:
|
||||
log.critical('No \'host\' key found in pillar for this proxy.')
|
||||
return False
|
||||
if 'username' not in opts['proxy']:
|
||||
log.critical('No \'username\' key found in pillar for this proxy.')
|
||||
return False
|
||||
if 'passwords' not in opts['proxy']:
|
||||
log.critical('No \'passwords\' key found in pillar for this proxy.')
|
||||
return False
|
||||
|
||||
host = opts['proxy']['host']
|
||||
|
||||
# Get the correct login details
|
||||
log.debug('Initting esxi proxy module in process \'{}\''
|
||||
''.format(os.getpid()))
|
||||
log.debug('Validating esxi proxy input')
|
||||
schema = EsxiProxySchema.serialize()
|
||||
log.trace('esxi_proxy_schema = {}'.format(schema))
|
||||
proxy_conf = merge(opts.get('proxy', {}), __pillar__.get('proxy', {}))
|
||||
log.trace('proxy_conf = {0}'.format(proxy_conf))
|
||||
try:
|
||||
username, password = find_credentials(host)
|
||||
except SaltSystemExit as err:
|
||||
log.critical('Error: {0}'.format(err))
|
||||
return False
|
||||
jsonschema.validate(proxy_conf, schema)
|
||||
except jsonschema.exceptions.ValidationError as exc:
|
||||
raise InvalidConfigError(exc)
|
||||
|
||||
# Set configuration details
|
||||
DETAILS['host'] = host
|
||||
DETAILS['username'] = username
|
||||
DETAILS['password'] = password
|
||||
DETAILS['protocol'] = opts['proxy'].get('protocol', 'https')
|
||||
DETAILS['port'] = opts['proxy'].get('port', '443')
|
||||
DETAILS['credstore'] = opts['proxy'].get('credstore')
|
||||
DETAILS['proxytype'] = proxy_conf['proxytype']
|
||||
if ('host' not in proxy_conf) and ('vcenter' not in proxy_conf):
|
||||
log.critical('Neither \'host\' nor \'vcenter\' keys found in pillar '
|
||||
'for this proxy.')
|
||||
return False
|
||||
if 'host' in proxy_conf:
|
||||
# We have started the proxy by connecting directly to the host
|
||||
if 'username' not in proxy_conf:
|
||||
log.critical('No \'username\' key found in pillar for this proxy.')
|
||||
return False
|
||||
if 'passwords' not in proxy_conf:
|
||||
log.critical('No \'passwords\' key found in pillar for this proxy.')
|
||||
return False
|
||||
host = proxy_conf['host']
|
||||
|
||||
# Get the correct login details
|
||||
try:
|
||||
username, password = find_credentials(host)
|
||||
except SaltSystemExit as err:
|
||||
log.critical('Error: {0}'.format(err))
|
||||
return False
|
||||
|
||||
# Set configuration details
|
||||
DETAILS['host'] = host
|
||||
DETAILS['username'] = username
|
||||
DETAILS['password'] = password
|
||||
DETAILS['protocol'] = proxy_conf.get('protocol')
|
||||
DETAILS['port'] = proxy_conf.get('port')
|
||||
return True
|
||||
|
||||
if 'vcenter' in proxy_conf:
|
||||
vcenter = proxy_conf['vcenter']
|
||||
if not proxy_conf.get('esxi_host'):
|
||||
log.critical('No \'esxi_host\' key found in pillar for this proxy.')
|
||||
DETAILS['esxi_host'] = proxy_conf['esxi_host']
|
||||
# We have started the proxy by connecting via the vCenter
|
||||
if 'mechanism' not in proxy_conf:
|
||||
log.critical('No \'mechanism\' key found in pillar for this proxy.')
|
||||
return False
|
||||
mechanism = proxy_conf['mechanism']
|
||||
# Save mandatory fields in cache
|
||||
for key in ('vcenter', 'mechanism'):
|
||||
DETAILS[key] = proxy_conf[key]
|
||||
|
||||
if mechanism == 'userpass':
|
||||
if 'username' not in proxy_conf:
|
||||
log.critical('No \'username\' key found in pillar for this '
|
||||
'proxy.')
|
||||
return False
|
||||
if not proxy_conf.get('passwords'):
|
||||
|
||||
log.critical('Mechanism is set to \'userpass\' , but no '
|
||||
'\'passwords\' key found in pillar for this '
|
||||
'proxy.')
|
||||
return False
|
||||
for key in ('username', 'passwords'):
|
||||
DETAILS[key] = proxy_conf[key]
|
||||
elif mechanism == 'sspi':
|
||||
if 'domain' not in proxy_conf:
|
||||
log.critical('Mechanism is set to \'sspi\' , but no '
|
||||
'\'domain\' key found in pillar for this proxy.')
|
||||
return False
|
||||
if 'principal' not in proxy_conf:
|
||||
log.critical('Mechanism is set to \'sspi\' , but no '
|
||||
'\'principal\' key found in pillar for this '
|
||||
'proxy.')
|
||||
return False
|
||||
for key in ('domain', 'principal'):
|
||||
DETAILS[key] = proxy_conf[key]
|
||||
|
||||
if mechanism == 'userpass':
|
||||
# Get the correct login details
|
||||
log.debug('Retrieving credentials and testing vCenter connection'
|
||||
' for mechanism \'userpass\'')
|
||||
try:
|
||||
username, password = find_credentials(DETAILS['vcenter'])
|
||||
DETAILS['password'] = password
|
||||
except SaltSystemExit as err:
|
||||
log.critical('Error: {0}'.format(err))
|
||||
return False
|
||||
|
||||
# Save optional
|
||||
DETAILS['protocol'] = proxy_conf.get('protocol', 'https')
|
||||
DETAILS['port'] = proxy_conf.get('port', '443')
|
||||
DETAILS['credstore'] = proxy_conf.get('credstore')
|
||||
|
||||
|
||||
def grains():
|
||||
|
@ -358,8 +438,9 @@ def grains_refresh():
|
|||
|
||||
def ping():
|
||||
'''
|
||||
Check to see if the host is responding. Returns False if the host didn't
|
||||
respond, True otherwise.
|
||||
Returns True if connection is to be done via a vCenter (no connection is attempted).
|
||||
Check to see if the host is responding when connecting directly via an ESXi
|
||||
host.
|
||||
|
||||
CLI Example:
|
||||
|
||||
|
@ -367,15 +448,19 @@ def ping():
|
|||
|
||||
salt esxi-host test.ping
|
||||
'''
|
||||
# find_credentials(DETAILS['host'])
|
||||
try:
|
||||
__salt__['vsphere.system_info'](host=DETAILS['host'],
|
||||
username=DETAILS['username'],
|
||||
password=DETAILS['password'])
|
||||
except SaltSystemExit as err:
|
||||
log.warning(err)
|
||||
return False
|
||||
|
||||
if DETAILS.get('esxi_host'):
|
||||
return True
|
||||
else:
|
||||
# TODO Check connection if mechanism is SSPI
|
||||
if DETAILS['mechanism'] == 'userpass':
|
||||
find_credentials(DETAILS['host'])
|
||||
try:
|
||||
__salt__['vsphere.system_info'](host=DETAILS['host'],
|
||||
username=DETAILS['username'],
|
||||
password=DETAILS['password'])
|
||||
except SaltSystemExit as err:
|
||||
log.warning(err)
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
|
@ -461,3 +546,14 @@ def _grains(host, protocol=None, port=None):
|
|||
port=port)
|
||||
GRAINS_CACHE.update(ret)
|
||||
return GRAINS_CACHE
|
||||
|
||||
|
||||
def is_connected_via_vcenter():
|
||||
return True if 'vcenter' in DETAILS else False
|
||||
|
||||
|
||||
def get_details():
|
||||
'''
|
||||
Return the proxy details
|
||||
'''
|
||||
return DETAILS
|
||||
|
|
338 salt/proxy/vcenter.py Normal file
@ -0,0 +1,338 @@
# -*- coding: utf-8 -*-
'''
Proxy Minion interface module for managing VMWare vCenters.

:codeauthor: :email:`Rod McKenzie (roderick.mckenzie@morganstanley.com)`
:codeauthor: :email:`Alexandru Bleotu (alexandru.bleotu@morganstanley.com)`

Dependencies
============

- pyVmomi Python Module

pyVmomi
-------

PyVmomi can be installed via pip:

.. code-block:: bash

    pip install pyVmomi

.. note::

    Version 6.0 of pyVmomi has some problems with SSL error handling on certain
    versions of Python. If using version 6.0 of pyVmomi, Python 2.6,
    Python 2.7.9, or newer must be present. This is due to an upstream dependency
    in pyVmomi 6.0 that is not supported in Python versions 2.7 to 2.7.8. If the
    version of Python is not in the supported range, you will need to install an
    earlier version of pyVmomi. See `Issue #29537`_ for more information.

.. _Issue #29537: https://github.com/saltstack/salt/issues/29537

Based on the note above, to install an earlier version of pyVmomi than the
version currently listed in PyPi, run the following:

.. code-block:: bash

    pip install pyVmomi==5.5.0.2014.1.1

The 5.5.0.2014.1.1 is a known stable version that this original ESXi State
Module was developed against.


Configuration
=============
To use this proxy module, please use one of the following configurations:


.. code-block:: yaml

    proxy:
      proxytype: vcenter
      vcenter: <ip or dns name of parent vcenter>
      username: <vCenter username>
      mechanism: userpass
      passwords:
        - first_password
        - second_password
        - third_password

    proxy:
      proxytype: vcenter
      vcenter: <ip or dns name of parent vcenter>
      username: <vCenter username>
      domain: <user domain>
      mechanism: sspi
      principal: <host kerberos principal>

proxytype
^^^^^^^^^
The ``proxytype`` key and value pair is critical, as it tells Salt which
interface to load from the ``proxy`` directory in Salt's install hierarchy,
or from ``/srv/salt/_proxy`` on the Salt Master (if you have created your
own proxy module, for example). To use this Proxy Module, set this to
``vcenter``.

vcenter
^^^^^^^
The location of the VMware vCenter server (host or IP). Required.

username
^^^^^^^^
The username used to login to the vcenter, such as ``root``.
Required only for userpass.

mechanism
^^^^^^^^^
The mechanism used to connect to the vCenter server. Supported values are
``userpass`` and ``sspi``. Required.

passwords
^^^^^^^^^
A list of passwords to be used to try and login to the vCenter server. At least
one password in this list is required if mechanism is ``userpass``.

The proxy integration will try the passwords listed in order.

domain
^^^^^^
User domain. Required if mechanism is ``sspi``.

principal
^^^^^^^^^
Kerberos principal. Required if mechanism is ``sspi``.

protocol
^^^^^^^^
If the vCenter is not using the default protocol, set this value to an
alternate protocol. Default is ``https``.

port
^^^^
If the ESXi host is not using the default port, set this value to an
alternate port. Default is ``443``.


Salt Proxy
----------

After your pillar is in place, you can test the proxy. The proxy can run on
any machine that has network connectivity to your Salt Master and to the
vCenter server in the pillar. SaltStack recommends that the machine running the
salt-proxy process also run a regular minion, though it is not strictly
necessary.

On the machine that will run the proxy, make sure there is an ``/etc/salt/proxy``
file with at least the following in it:

.. code-block:: yaml

    master: <ip or hostname of salt-master>

You can then start the salt-proxy process with:

.. code-block:: bash

    salt-proxy --proxyid <id of the cluster>

You may want to add ``-l debug`` to run the above in the foreground in
debug mode just to make sure everything is OK.

Next, accept the key for the proxy on your salt-master, just like you
would for a regular minion:

.. code-block:: bash

    salt-key -a <id you gave the vcenter host>

You can confirm that the pillar data is in place for the proxy:

.. code-block:: bash

    salt <id> pillar.items

And now you should be able to ping the ESXi host to make sure it is
responding:

.. code-block:: bash

    salt <id> test.ping

At this point you can execute one-off commands against the vcenter. For
example, you can check whether the proxy can actually connect to the vCenter:

.. code-block:: bash

    salt <id> vsphere.test_vcenter_connection

Note that you don't need to provide credentials or an ip/hostname. Salt
knows to use the credentials you stored in Pillar.

It's important to understand how this particular proxy works.
:mod:`Salt.modules.vsphere </ref/modules/all/salt.modules.vsphere>` is a
standard Salt execution module.

If you pull up the docs for it you'll see
that almost every function in the module takes credentials and targets either
a vCenter or a host. When credentials and a host aren't passed, Salt runs commands
through ``pyVmomi`` against the local machine. If you wanted, you could run
functions from this module on any host where an appropriate version of
``pyVmomi`` is installed, and that host would reach out over the network
and communicate with the ESXi host.
'''

|
||||
# Import Python Libs
|
||||
from __future__ import absolute_import
|
||||
import logging
|
||||
import os
|
||||
|
||||
# Import Salt Libs
|
||||
import salt.exceptions
|
||||
from salt.config.schemas.vcenter import VCenterProxySchema
|
||||
from salt.utils.dictupdate import merge
|
||||
|
||||
# This must be present or the Salt loader won't load this module.
|
||||
__proxyenabled__ = ['vcenter']
|
||||
|
||||
# External libraries
|
||||
try:
|
||||
import jsonschema
|
||||
HAS_JSONSCHEMA = True
|
||||
except ImportError:
|
||||
HAS_JSONSCHEMA = False
|
||||
|
||||
# Variables are scoped to this module so we can have persistent data
|
||||
# across calls to fns in here.
|
||||
DETAILS = {}
|
||||
|
||||
|
||||
# Set up logging
|
||||
log = logging.getLogger(__name__)
|
||||
# Define the module's virtual name
|
||||
__virtualname__ = 'vcenter'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
'''
|
||||
Only load if the vsphere execution module is available.
|
||||
'''
|
||||
if HAS_JSONSCHEMA:
|
||||
return __virtualname__
|
||||
|
||||
return False, 'The vcenter proxy module did not load.'
|
||||
|
||||
|
||||
def init(opts):
|
||||
'''
|
||||
This function gets called when the proxy starts up.
|
||||
For login the protocol and port are cached.
|
||||
'''
|
||||
log.info('Initting vcenter proxy module in process {0}'
|
||||
''.format(os.getpid()))
|
||||
log.trace('VCenter Proxy Validating vcenter proxy input')
|
||||
schema = VCenterProxySchema.serialize()
|
||||
log.trace('schema = {}'.format(schema))
|
||||
proxy_conf = merge(opts.get('proxy', {}), __pillar__.get('proxy', {}))
|
||||
log.trace('proxy_conf = {0}'.format(proxy_conf))
|
||||
try:
|
||||
jsonschema.validate(proxy_conf, schema)
|
||||
except jsonschema.exceptions.ValidationError as exc:
|
||||
raise salt.exceptions.InvalidConfigError(exc)
|
||||
|
||||
# Save mandatory fields in cache
|
||||
for key in ('vcenter', 'mechanism'):
|
||||
DETAILS[key] = proxy_conf[key]
|
||||
|
||||
# Additional validation
|
||||
if DETAILS['mechanism'] == 'userpass':
|
||||
if 'username' not in proxy_conf:
|
||||
raise salt.exceptions.InvalidConfigError(
|
||||
'Mechanism is set to \'userpass\' , but no '
|
||||
'\'username\' key found in proxy config')
|
||||
if 'passwords' not in proxy_conf:
|
||||
raise salt.exceptions.InvalidConfigError(
|
||||
'Mechanism is set to \'userpass\' , but no '
|
||||
'\'passwords\' key found in proxy config')
|
||||
for key in ('username', 'passwords'):
|
||||
DETAILS[key] = proxy_conf[key]
|
||||
else:
|
||||
if 'domain' not in proxy_conf:
|
||||
raise salt.exceptions.InvalidConfigError(
|
||||
'Mechanism is set to \'sspi\' , but no '
|
||||
'\'domain\' key found in proxy config')
|
||||
if 'principal' not in proxy_conf:
|
||||
raise salt.exceptions.InvalidConfigError(
|
||||
'Mechanism is set to \'sspi\' , but no '
|
||||
'\'principal\' key found in proxy config')
|
||||
for key in ('domain', 'principal'):
|
||||
DETAILS[key] = proxy_conf[key]
|
||||
|
||||
# Save optional
|
||||
DETAILS['protocol'] = proxy_conf.get('protocol')
|
||||
DETAILS['port'] = proxy_conf.get('port')
|
||||
|
||||
# Test connection
|
||||
if DETAILS['mechanism'] == 'userpass':
|
||||
# Get the correct login details
|
||||
log.info('Retrieving credentials and testing vCenter connection for '
|
||||
'mechanism \'userpass\'')
|
||||
try:
|
||||
username, password = find_credentials()
|
||||
DETAILS['password'] = password
|
||||
except salt.exceptions.SaltSystemExit as err:
|
||||
log.critical('Error: {0}'.format(err))
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
def ping():
|
||||
'''
|
||||
Returns True.
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt vcenter test.ping
|
||||
'''
|
||||
return True
|
||||
|
||||
|
||||
def shutdown():
|
||||
'''
|
||||
Shutdown the connection to the proxy device. For this proxy,
|
||||
shutdown is a no-op.
|
||||
'''
|
||||
log.debug('VCenter proxy shutdown() called...')
|
||||
|
||||
|
||||
def find_credentials():
|
||||
'''
|
||||
Cycle through all the possible credentials and return the first one that
|
||||
works.
|
||||
'''
|
||||
|
||||
# if the username and password were already found, don't go through the
|
||||
# connection process again
|
||||
if 'username' in DETAILS and 'password' in DETAILS:
|
||||
return DETAILS['username'], DETAILS['password']
|
||||
|
||||
passwords = __pillar__['proxy']['passwords']
|
||||
for password in passwords:
|
||||
DETAILS['password'] = password
|
||||
if not __salt__['vsphere.test_vcenter_connection']():
|
||||
# We are unable to authenticate
|
||||
continue
|
||||
# If we have data returned from above, we've successfully authenticated.
|
||||
return DETAILS['username'], password
|
||||
# We've reached the end of the list without successfully authenticating.
|
||||
raise salt.exceptions.VMwareConnectionError('Cannot complete login due to '
|
||||
'incorrect credentials.')
|
||||
|
||||
|
||||
def get_details():
|
||||
'''
|
||||
Function that returns the cached details
|
||||
'''
|
||||
return DETAILS
|
|
@ -77,10 +77,25 @@ def serialize(obj, **options):
|
|||
raise SerializationError(error)
|
||||
|
||||
|
||||
class EncryptedString(str):
|
||||
|
||||
yaml_tag = u'!encrypted'
|
||||
|
||||
@staticmethod
|
||||
def yaml_constructor(loader, tag, node):
|
||||
return EncryptedString(loader.construct_scalar(node))
|
||||
|
||||
@staticmethod
|
||||
def yaml_dumper(dumper, data):
|
||||
return dumper.represent_scalar(EncryptedString.yaml_tag, data.__str__())
|
||||
|
||||
|
||||
class Loader(BaseLoader): # pylint: disable=W0232
|
||||
'''Overrides Loader so as not to pollute the legacy Loader'''
|
||||
pass
|
||||
|
||||
|
||||
Loader.add_multi_constructor(EncryptedString.yaml_tag, EncryptedString.yaml_constructor)
|
||||
Loader.add_multi_constructor('tag:yaml.org,2002:null', Loader.construct_yaml_null)
|
||||
Loader.add_multi_constructor('tag:yaml.org,2002:bool', Loader.construct_yaml_bool)
|
||||
Loader.add_multi_constructor('tag:yaml.org,2002:int', Loader.construct_yaml_int)
|
||||
|
@ -100,6 +115,7 @@ class Dumper(BaseDumper): # pylint: disable=W0232
|
|||
'''Overrides Dumper so as not to pollute the legacy Dumper'''
|
||||
pass
|
||||
|
||||
Dumper.add_multi_representer(EncryptedString, EncryptedString.yaml_dumper)
|
||||
Dumper.add_multi_representer(type(None), Dumper.represent_none)
|
||||
Dumper.add_multi_representer(str, Dumper.represent_str)
|
||||
if six.PY2:
|
||||
|
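# The EncryptedString class above round-trips through the custom Loader and
# Dumper. Hedged sketch, assuming this module's public deserialize()/serialize()
# helpers wire these classes in (the value 'bar' is illustrative):
#
#     from salt.serializers.yaml import deserialize, serialize, EncryptedString
#     data = deserialize("secret: !encrypted bar")   # data['secret'] is an EncryptedString
#     serialize({'secret': EncryptedString('bar')})  # emits roughly "secret: !encrypted 'bar'"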
|
|
@ -90,20 +90,47 @@ ESXi Proxy Minion, please refer to the
|
|||
configuration examples, dependency installation instructions, how to run remote
|
||||
execution functions against ESXi hosts via a Salt Proxy Minion, and a larger state
|
||||
example.
|
||||
|
||||
'''
|
||||
# Import Python Libs
|
||||
from __future__ import absolute_import
|
||||
import logging
|
||||
import sys
|
||||
import re
|
||||
|
||||
# Import Salt Libs
|
||||
from salt.ext import six
|
||||
import salt.utils.files
|
||||
from salt.exceptions import CommandExecutionError
|
||||
from salt.exceptions import CommandExecutionError, InvalidConfigError, \
|
||||
VMwareObjectRetrievalError, VMwareSaltError, VMwareApiError, \
|
||||
ArgumentValueError
|
||||
from salt.utils.decorators import depends
|
||||
from salt.config.schemas.esxi import DiskGroupsDiskScsiAddressSchema, \
|
||||
HostCacheSchema
|
||||
|
||||
# External libraries
|
||||
try:
|
||||
import jsonschema
|
||||
HAS_JSONSCHEMA = True
|
||||
except ImportError:
|
||||
HAS_JSONSCHEMA = False
|
||||
|
||||
# Get Logging Started
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
try:
|
||||
from pyVmomi import VmomiSupport
|
||||
|
||||
# We check the supported vim versions to infer the pyVmomi version
|
||||
if 'vim25/6.0' in VmomiSupport.versionMap and \
|
||||
sys.version_info > (2, 7) and sys.version_info < (2, 7, 9):
|
||||
|
||||
log.error('pyVmomi not loaded: Incompatible versions '
|
||||
'of Python. See Issue #29537.')
|
||||
raise ImportError()
|
||||
HAS_PYVMOMI = True
|
||||
except ImportError:
|
||||
HAS_PYVMOMI = False
|
||||
|
||||
|
||||
def __virtual__():
|
||||
return 'esxi.cmd' in __salt__
|
||||
|
@ -998,6 +1025,577 @@ def syslog_configured(name,
|
|||
return ret
|
||||
|
||||
|
||||
@depends(HAS_PYVMOMI)
|
||||
@depends(HAS_JSONSCHEMA)
|
||||
def diskgroups_configured(name, diskgroups, erase_disks=False):
|
||||
'''
|
||||
Configures the disk groups to use for vsan.
|
||||
|
||||
It will do the following:
|
||||
(1) checks if all disks in the diskgroup spec exist and errors if they
don't
|
||||
(2) creates diskgroups with the correct disk configurations if diskgroup
|
||||
(identified by the cache disk canonical name) doesn't exist
|
||||
(3) adds extra capacity disks to the existing diskgroup
|
||||
|
||||
State input example
|
||||
-------------------
|
||||
|
||||
.. code:: python
|
||||
|
||||
{
|
||||
'cache_scsi_addr': 'vmhba1:C0:T0:L0',
|
||||
'capacity_scsi_addrs': [
|
||||
'vmhba2:C0:T0:L0',
|
||||
'vmhba3:C0:T0:L0',
|
||||
'vmhba4:C0:T0:L0',
|
||||
]
|
||||
}
|
||||
|
||||
name
|
||||
Mandatory state name.
|
||||
|
||||
diskgroups
|
||||
Disk group representation containing scsi disk addresses.
|
||||
Scsi addresses are expected for disks in the diskgroup:
|
||||
|
||||
erase_disks
|
||||
Specifies whether to erase all partitions on all disks member of the
|
||||
disk group before the disk group is created. Default value is False.
|
||||
'''
|
||||
proxy_details = __salt__['esxi.get_details']()
|
||||
hostname = proxy_details['host'] if not proxy_details.get('vcenter') \
|
||||
else proxy_details['esxi_host']
|
||||
log.info('Running state {0} for host \'{1}\''.format(name, hostname))
|
||||
# Variable used to return the result of the invocation
|
||||
ret = {'name': name, 'result': None, 'changes': {},
|
||||
'pchanges': {}, 'comments': None}
|
||||
# Signals if errors have been encountered
|
||||
errors = False
|
||||
# Signals if changes are required
|
||||
changes = False
|
||||
comments = []
|
||||
diskgroup_changes = {}
|
||||
si = None
|
||||
try:
|
||||
log.trace('Validating diskgroups_configured input')
|
||||
schema = DiskGroupsDiskScsiAddressSchema.serialize()
|
||||
try:
|
||||
jsonschema.validate({'diskgroups': diskgroups,
|
||||
'erase_disks': erase_disks}, schema)
|
||||
except jsonschema.exceptions.ValidationError as exc:
|
||||
raise InvalidConfigError(exc)
|
||||
si = __salt__['vsphere.get_service_instance_via_proxy']()
|
||||
host_disks = __salt__['vsphere.list_disks'](service_instance=si)
|
||||
if not host_disks:
|
||||
raise VMwareObjectRetrievalError(
|
||||
'No disks retrieved from host \'{0}\''.format(hostname))
|
||||
scsi_addr_to_disk_map = {d['scsi_address']: d for d in host_disks}
|
||||
log.trace('scsi_addr_to_disk_map = {0}'.format(scsi_addr_to_disk_map))
|
||||
existing_diskgroups = \
|
||||
__salt__['vsphere.list_diskgroups'](service_instance=si)
|
||||
cache_disk_to_existing_diskgroup_map = \
|
||||
{dg['cache_disk']: dg for dg in existing_diskgroups}
|
||||
except CommandExecutionError as err:
|
||||
log.error('Error: {0}'.format(err))
|
||||
if si:
|
||||
__salt__['vsphere.disconnect'](si)
|
||||
ret.update({
|
||||
'result': False if not __opts__['test'] else None,
|
||||
'comment': str(err)})
|
||||
return ret
|
||||
|
||||
# Iterate through all of the disk groups
|
||||
for idx, dg in enumerate(diskgroups):
|
||||
# Check for cache disk
|
||||
if not dg['cache_scsi_addr'] in scsi_addr_to_disk_map:
|
||||
comments.append('No cache disk with scsi address \'{0}\' was '
|
||||
'found.'.format(dg['cache_scsi_addr']))
|
||||
log.error(comments[-1])
|
||||
errors = True
|
||||
continue
|
||||
|
||||
# Check for capacity disks
|
||||
cache_disk_id = scsi_addr_to_disk_map[dg['cache_scsi_addr']]['id']
|
||||
cache_disk_display = '{0} (id:{1})'.format(dg['cache_scsi_addr'],
|
||||
cache_disk_id)
|
||||
bad_scsi_addrs = []
|
||||
capacity_disk_ids = []
|
||||
capacity_disk_displays = []
|
||||
for scsi_addr in dg['capacity_scsi_addrs']:
|
||||
if scsi_addr not in scsi_addr_to_disk_map:
|
||||
bad_scsi_addrs.append(scsi_addr)
|
||||
continue
|
||||
capacity_disk_ids.append(scsi_addr_to_disk_map[scsi_addr]['id'])
|
||||
capacity_disk_displays.append(
|
||||
'{0} (id:{1})'.format(scsi_addr, capacity_disk_ids[-1]))
|
||||
if bad_scsi_addrs:
|
||||
comments.append('Error in diskgroup #{0}: capacity disks with '
|
||||
'scsi addresses {1} were not found.'
|
||||
''.format(idx,
|
||||
', '.join(['\'{0}\''.format(a)
|
||||
for a in bad_scsi_addrs])))
|
||||
log.error(comments[-1])
|
||||
errors = True
|
||||
continue
|
||||
|
||||
if not cache_disk_to_existing_diskgroup_map.get(cache_disk_id):
|
||||
# A new diskgroup needs to be created
|
||||
log.trace('erase_disks = {0}'.format(erase_disks))
|
||||
if erase_disks:
|
||||
if __opts__['test']:
|
||||
comments.append('State {0} will '
|
||||
'erase all disks of disk group #{1}; '
|
||||
'cache disk: \'{2}\', '
|
||||
'capacity disk(s): {3}.'
|
||||
''.format(name, idx, cache_disk_display,
|
||||
', '.join(
|
||||
['\'{}\''.format(a) for a in
|
||||
capacity_disk_displays])))
|
||||
else:
|
||||
# Erase disk group disks
|
||||
for disk_id in [cache_disk_id] + capacity_disk_ids:
|
||||
__salt__['vsphere.erase_disk_partitions'](
|
||||
disk_id=disk_id, service_instance=si)
|
||||
comments.append('Erased disks of diskgroup #{0}; '
|
||||
'cache disk: \'{1}\', capacity disk(s): '
|
||||
'{2}'.format(
|
||||
idx, cache_disk_display,
|
||||
', '.join(['\'{0}\''.format(a) for a in
|
||||
capacity_disk_displays])))
|
||||
log.info(comments[-1])
|
||||
|
||||
if __opts__['test']:
|
||||
comments.append('State {0} will create '
|
||||
'the disk group #{1}; cache disk: \'{2}\', '
|
||||
'capacity disk(s): {3}.'
|
||||
.format(name, idx, cache_disk_display,
|
||||
', '.join(['\'{0}\''.format(a) for a in
|
||||
capacity_disk_displays])))
|
||||
log.info(comments[-1])
|
||||
changes = True
|
||||
continue
|
||||
try:
|
||||
__salt__['vsphere.create_diskgroup'](cache_disk_id,
|
||||
capacity_disk_ids,
|
||||
safety_checks=False,
|
||||
service_instance=si)
|
||||
except VMwareSaltError as err:
|
||||
comments.append('Error creating disk group #{0}: '
|
||||
'{1}.'.format(idx, err))
|
||||
log.error(comments[-1])
|
||||
errors = True
|
||||
continue
|
||||
|
||||
comments.append('Created disk group #\'{0}\'.'.format(idx))
|
||||
log.info(comments[-1])
|
||||
diskgroup_changes[str(idx)] = \
|
||||
{'new': {'cache': cache_disk_display,
|
||||
'capacity': capacity_disk_displays}}
|
||||
changes = True
|
||||
continue
|
||||
|
||||
# The diskgroup exists; checking the capacity disks
|
||||
log.debug('Disk group #{0} exists. Checking capacity disks: '
|
||||
'{1}.'.format(idx, capacity_disk_displays))
|
||||
existing_diskgroup = \
|
||||
cache_disk_to_existing_diskgroup_map.get(cache_disk_id)
|
||||
existing_capacity_disk_displays = \
|
||||
['{0} (id:{1})'.format([d['scsi_address'] for d in host_disks
|
||||
if d['id'] == disk_id][0], disk_id)
|
||||
for disk_id in existing_diskgroup['capacity_disks']]
|
||||
# Populate added disks and removed disks and their displays
|
||||
added_capacity_disk_ids = []
|
||||
added_capacity_disk_displays = []
|
||||
removed_capacity_disk_ids = []
|
||||
removed_capacity_disk_displays = []
|
||||
for disk_id in capacity_disk_ids:
|
||||
if disk_id not in existing_diskgroup['capacity_disks']:
|
||||
disk_scsi_addr = [d['scsi_address'] for d in host_disks
|
||||
if d['id'] == disk_id][0]
|
||||
added_capacity_disk_ids.append(disk_id)
|
||||
added_capacity_disk_displays.append(
|
||||
'{0} (id:{1})'.format(disk_scsi_addr, disk_id))
|
||||
for disk_id in existing_diskgroup['capacity_disks']:
|
||||
if disk_id not in capacity_disk_ids:
|
||||
disk_scsi_addr = [d['scsi_address'] for d in host_disks
|
||||
if d['id'] == disk_id][0]
|
||||
removed_capacity_disk_ids.append(disk_id)
|
||||
removed_capacity_disk_displays.append(
|
||||
'{0} (id:{1})'.format(disk_scsi_addr, disk_id))
|
||||
|
||||
log.debug('Disk group #{0}: existing capacity disk ids: {1}; added '
|
||||
'capacity disk ids: {2}; removed capacity disk ids: {3}'
|
||||
''.format(idx, existing_capacity_disk_displays,
|
||||
added_capacity_disk_displays,
|
||||
removed_capacity_disk_displays))
|
||||
|
||||
#TODO revisit this when removing capacity disks is supported
|
||||
if removed_capacity_disk_ids:
|
||||
comments.append(
|
||||
'Error removing capacity disk(s) {0} from disk group #{1}; '
|
||||
'operation is not supported.'
|
||||
''.format(', '.join(['\'{0}\''.format(id) for id in
|
||||
removed_capacity_disk_displays]), idx))
|
||||
log.error(comments[-1])
|
||||
errors = True
|
||||
continue
|
||||
|
||||
if added_capacity_disk_ids:
|
||||
# Capacity disks need to be added to disk group
|
||||
|
||||
# Building a string representation of the capacity disks
|
||||
# that need to be added
|
||||
s = ', '.join(['\'{0}\''.format(id) for id in
|
||||
added_capacity_disk_displays])
|
||||
if __opts__['test']:
|
||||
comments.append('State {0} will add '
|
||||
'capacity disk(s) {1} to disk group #{2}.'
|
||||
''.format(name, s, idx))
|
||||
log.info(comments[-1])
|
||||
changes = True
|
||||
continue
|
||||
try:
|
||||
__salt__['vsphere.add_capacity_to_diskgroup'](
|
||||
cache_disk_id,
|
||||
added_capacity_disk_ids,
|
||||
safety_checks=False,
|
||||
service_instance=si)
|
||||
except VMwareSaltError as err:
|
||||
comments.append('Error adding capacity disk(s) {0} to '
|
||||
'disk group #{1}: {2}.'.format(s, idx, err))
|
||||
log.error(comments[-1])
|
||||
errors = True
|
||||
continue
|
||||
|
||||
com = ('Added capacity disk(s) {0} to disk group #{1}'
|
||||
''.format(s, idx))
|
||||
log.info(com)
|
||||
comments.append(com)
|
||||
diskgroup_changes[str(idx)] = \
|
||||
{'new': {'cache': cache_disk_display,
|
||||
'capacity': capacity_disk_displays},
|
||||
'old': {'cache': cache_disk_display,
|
||||
'capacity': existing_capacity_disk_displays}}
|
||||
changes = True
|
||||
continue
|
||||
|
||||
# No capacity needs to be added
|
||||
s = ('Disk group #{0} is correctly configured. Nothing to be done.'
|
||||
''.format(idx))
|
||||
log.info(s)
|
||||
comments.append(s)
|
||||
__salt__['vsphere.disconnect'](si)
|
||||
|
||||
#Build the final return message
|
||||
result = (True if not (changes or errors) else # no changes/errors
|
||||
None if __opts__['test'] else # running in test mode
|
||||
False if errors else True) # found errors; defaults to True
|
||||
ret.update({'result': result,
|
||||
'comment': '\n'.join(comments)})
|
||||
if changes:
|
||||
if __opts__['test']:
|
||||
ret['pchanges'] = diskgroup_changes
|
||||
elif changes:
|
||||
ret['changes'] = diskgroup_changes
|
||||
return ret
|
||||
|
||||
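# Illustrative input only (not part of this changeset): diskgroups_configured
# expects a list of diskgroup specs keyed by SCSI addresses, for example
#
#     example_diskgroups = [
#         {'cache_scsi_addr': 'vmhba1:C0:T0:L0',
#          'capacity_scsi_addrs': ['vmhba2:C0:T0:L0', 'vmhba3:C0:T0:L0']},
#     ]
#
# which would be passed to the state as
# diskgroups_configured('setup-vsan', example_diskgroups, erase_disks=False).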
|
||||
@depends(HAS_PYVMOMI)
|
||||
@depends(HAS_JSONSCHEMA)
|
||||
def host_cache_configured(name, enabled, datastore, swap_size='100%',
|
||||
dedicated_backing_disk=False,
|
||||
erase_backing_disk=False):
|
||||
'''
|
||||
Configures the host cache used for swapping.
|
||||
|
||||
It will do the following:
|
||||
(1) checks if backing disk exists
|
||||
(2) creates the VMFS datastore if it doesn't exist (the datastore
partition will be created and will use the entire disk)
|
||||
(3) raises an error if dedicated_backing_disk is True and partitions
|
||||
already exist on the backing disk
|
||||
(4) configures host_cache to use a portion of the datastore for caching
|
||||
(either a specific size or a percentage of the datastore)
|
||||
|
||||
State input examples
|
||||
--------------------
|
||||
|
||||
Percentage swap size (can't be 100%)
|
||||
|
||||
.. code:: python
|
||||
|
||||
{
|
||||
'enabled': true,
|
||||
'datastore': {
|
||||
'backing_disk_scsi_addr': 'vmhba0:C0:T0:L0',
|
||||
'vmfs_version': 5,
|
||||
'name': 'hostcache'
|
||||
}
|
||||
'dedicated_backing_disk': false
|
||||
'swap_size': '98%',
|
||||
}
|
||||
|
||||
|
||||
.. code:: python
|
||||
|
||||
Fixed sized swap size
|
||||
|
||||
{
|
||||
'enabled': true,
|
||||
'datastore': {
|
||||
'backing_disk_scsi_addr': 'vmhba0:C0:T0:L0',
|
||||
'vmfs_version': 5,
|
||||
'name': 'hostcache'
|
||||
}
|
||||
'dedicated_backing_disk': true
|
||||
'swap_size': '10GiB',
|
||||
}
|
||||
|
||||
name
|
||||
Mandatory state name.
|
||||
|
||||
enabled
|
||||
Specifies whether the host cache is enabled.
|
||||
|
||||
datastore
|
||||
Specifies the host cache datastore.
|
||||
|
||||
swap_size
|
||||
Specifies the size of the host cache swap. Can be a percentage or a
|
||||
value in GiB. Default value is ``100%``.
|
||||
|
||||
dedicated_backing_disk
|
||||
Specifies whether the backing disk is dedicated to the host cache which
|
||||
means it must have no other partitions. Default is False.
|
||||
|
||||
erase_backing_disk
|
||||
Specifies whether to erase all partitions on the backing disk before
|
||||
the datastore is created. Default value is False.
|
||||
'''
|
||||
log.trace('enabled = {0}'.format(enabled))
|
||||
log.trace('datastore = {0}'.format(datastore))
|
||||
log.trace('swap_size = {0}'.format(swap_size))
|
||||
log.trace('erase_backing_disk = {0}'.format(erase_backing_disk))
|
||||
# Variable used to return the result of the invocation
|
||||
proxy_details = __salt__['esxi.get_details']()
|
||||
hostname = proxy_details['host'] if not proxy_details.get('vcenter') \
|
||||
else proxy_details['esxi_host']
|
||||
log.trace('hostname = {0}'.format(hostname))
|
||||
log.info('Running host_cache_configured for host '
|
||||
'\'{0}\''.format(hostname))
|
||||
ret = {'name': hostname, 'comment': 'Default comments',
|
||||
'result': None, 'changes': {}, 'pchanges': {}}
|
||||
result = None if __opts__['test'] else True # We assume success
|
||||
needs_setting = False
|
||||
comments = []
|
||||
changes = {}
|
||||
si = None
|
||||
try:
|
||||
log.debug('Validating host_cache_configured input')
|
||||
schema = HostCacheSchema.serialize()
|
||||
try:
|
||||
jsonschema.validate({'enabled': enabled,
|
||||
'datastore': datastore,
|
||||
'swap_size': swap_size,
|
||||
'erase_backing_disk': erase_backing_disk},
|
||||
schema)
|
||||
except jsonschema.exceptions.ValidationError as exc:
|
||||
raise InvalidConfigError(exc)
|
||||
m = re.match(r'(\d+)(%|GiB)', swap_size)
|
||||
swap_size_value = int(m.group(1))
|
||||
swap_type = m.group(2)
|
||||
log.trace('swap_size_value = {0}; swap_type = {1}'.format(
|
||||
swap_size_value, swap_type))
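# For example, an input of '90%' yields swap_size_value=90 and
# swap_type='%', while '10GiB' yields swap_size_value=10 and
# swap_type='GiB'.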
|
||||
si = __salt__['vsphere.get_service_instance_via_proxy']()
|
||||
host_cache = __salt__['vsphere.get_host_cache'](service_instance=si)
|
||||
|
||||
# Check enabled
|
||||
if host_cache['enabled'] != enabled:
|
||||
changes.update({'enabled': {'old': host_cache['enabled'],
|
||||
'new': enabled}})
|
||||
needs_setting = True
|
||||
|
||||
# Check datastores
|
||||
existing_datastores = None
|
||||
if host_cache.get('datastore'):
|
||||
existing_datastores = \
|
||||
__salt__['vsphere.list_datastores_via_proxy'](
|
||||
datastore_names=[datastore['name']],
|
||||
service_instance=si)
|
||||
# Retrieve backing disks
|
||||
existing_disks = __salt__['vsphere.list_disks'](
|
||||
scsi_addresses=[datastore['backing_disk_scsi_addr']],
|
||||
service_instance=si)
|
||||
if not existing_disks:
|
||||
raise VMwareObjectRetrievalError(
|
||||
'Disk with scsi address \'{0}\' was not found in host \'{1}\''
|
||||
''.format(datastore['backing_disk_scsi_addr'], hostname))
|
||||
backing_disk = existing_disks[0]
|
||||
backing_disk_display = '{0} (id:{1})'.format(
|
||||
backing_disk['scsi_address'], backing_disk['id'])
|
||||
log.trace('backing_disk = {0}'.format(backing_disk_display))
|
||||
|
||||
existing_datastore = None
|
||||
if not existing_datastores:
|
||||
# Check if disk needs to be erased
|
||||
if erase_backing_disk:
|
||||
if __opts__['test']:
|
||||
comments.append('State {0} will erase '
|
||||
'the backing disk \'{1}\' on host \'{2}\'.'
|
||||
''.format(name, backing_disk_display,
|
||||
hostname))
|
||||
log.info(comments[-1])
|
||||
else:
|
||||
# Erase disk
|
||||
__salt__['vsphere.erase_disk_partitions'](
|
||||
disk_id=backing_disk['id'], service_instance=si)
|
||||
comments.append('Erased backing disk \'{0}\' on host '
|
||||
'\'{1}\'.'.format(backing_disk_display,
|
||||
hostname))
|
||||
log.info(comments[-1])
|
||||
# Create the datastore
|
||||
if __opts__['test']:
|
||||
comments.append('State {0} will create '
|
||||
'the datastore \'{1}\', with backing disk '
|
||||
'\'{2}\', on host \'{3}\'.'
|
||||
''.format(name, datastore['name'],
|
||||
backing_disk_display, hostname))
|
||||
log.info(comments[-1])
|
||||
else:
|
||||
if dedicated_backing_disk:
|
||||
# Check backing disk doesn't already have partitions
|
||||
partitions = __salt__['vsphere.list_disk_partitions'](
|
||||
disk_id=backing_disk['id'], service_instance=si)
|
||||
log.trace('partitions = {0}'.format(partitions))
|
||||
# We will ignore the mbr partitions
|
||||
non_mbr_partitions = [p for p in partitions
|
||||
if p['format'] != 'mbr']
|
||||
if len(non_mbr_partitions) > 0:
|
||||
raise VMwareApiError(
|
||||
'Backing disk \'{0}\' has unexpected partitions'
|
||||
''.format(backing_disk_display))
|
||||
__salt__['vsphere.create_vmfs_datastore'](
|
||||
datastore['name'], existing_disks[0]['id'],
|
||||
datastore['vmfs_version'], service_instance=si)
|
||||
comments.append('Created vmfs datastore \'{0}\', backed by '
|
||||
'disk \'{1}\', on host \'{2}\'.'
|
||||
''.format(datastore['name'],
|
||||
backing_disk_display, hostname))
|
||||
log.info(comments[-1])
|
||||
changes.update(
|
||||
{'datastore':
|
||||
{'new': {'name': datastore['name'],
|
||||
'backing_disk': backing_disk_display}}})
|
||||
existing_datastore = \
|
||||
__salt__['vsphere.list_datastores_via_proxy'](
|
||||
datastore_names=[datastore['name']],
|
||||
service_instance=si)[0]
|
||||
needs_setting = True
|
||||
else:
|
||||
# Check datastore is backed by the correct disk
|
||||
if not existing_datastores[0].get('backing_disk_ids'):
|
||||
raise VMwareSaltError('Datastore \'{0}\' doesn\'t have a '
|
||||
'backing disk'
|
||||
''.format(datastore['name']))
|
||||
if backing_disk['id'] not in \
|
||||
existing_datastores[0]['backing_disk_ids']:
|
||||
|
||||
raise VMwareSaltError(
|
||||
'Datastore \'{0}\' is not backed by the correct disk: '
|
||||
'expected \'{1}\'; got {2}'
|
||||
''.format(
|
||||
datastore['name'], backing_disk['id'],
|
||||
', '.join(
|
||||
['\'{0}\''.format(disk) for disk in
|
||||
existing_datastores[0]['backing_disk_ids']])))
|
||||
|
||||
comments.append('Datastore \'{0}\' already exists on host \'{1}\' '
|
||||
'and is backed by disk \'{2}\'. Nothing to be '
|
||||
'done.'.format(datastore['name'], hostname,
|
||||
backing_disk_display))
|
||||
existing_datastore = existing_datastores[0]
|
||||
log.trace('existing_datastore = {0}'.format(existing_datastore))
|
||||
log.info(comments[-1])
|
||||
|
||||
if existing_datastore:
|
||||
# The following comparisons can be done if the existing_datastore
|
||||
# is set; it may not be set if running in test mode
|
||||
#
|
||||
# We support percent as well as GiB; we convert the size
# to MiB, in multiples of 1024 (VMware SDK limitation)
|
||||
if swap_type == '%':
|
||||
# Percentage swap size
|
||||
# Convert from bytes to MiB
|
||||
raw_size_MiB = (swap_size_value/100.0) * \
|
||||
(existing_datastore['capacity']/1024/1024)
|
||||
else:
|
||||
raw_size_MiB = swap_size_value * 1024
|
||||
log.trace('raw_size = {0}MiB'.format(raw_size_MiB))
|
||||
swap_size_MiB = int(raw_size_MiB/1024)*1024
|
||||
log.trace('adjusted swap_size = {0}MiB'.format(swap_size_MiB))
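# Illustrative example (assumed figures, not from the module docs): a
# 20 GiB datastore (20480 MiB capacity) with swap_size '50%' gives
# raw_size_MiB = 10240, already a multiple of 1024, so swap_size_MiB
# stays 10240; a '33%' request (6758.4 MiB) is rounded down to 6144.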
|
||||
existing_swap_size_MiB = 0
|
||||
m = re.match(r'(\d+)MiB', host_cache.get('swap_size')) if \
|
||||
host_cache.get('swap_size') else None
|
||||
if m:
|
||||
# if swap_size from the host is set and has an expected value
|
||||
# we are going to parse it to get the number of MiBs
|
||||
existing_swap_size_MiB = int(m.group(1))
|
||||
if not existing_swap_size_MiB == swap_size_MiB:
|
||||
needs_setting = True
|
||||
changes.update(
|
||||
{'swap_size':
|
||||
{'old': '{}GiB'.format(existing_swap_size_MiB/1024),
|
||||
'new': '{}GiB'.format(swap_size_MiB/1024)}})
|
||||
|
||||
if needs_setting:
|
||||
if __opts__['test']:
|
||||
comments.append('State {0} will configure '
|
||||
'the host cache on host \'{1}\' to: {2}.'
|
||||
''.format(name, hostname,
|
||||
{'enabled': enabled,
|
||||
'datastore_name': datastore['name'],
|
||||
'swap_size': swap_size}))
|
||||
else:
|
||||
if (existing_datastore['capacity'] / 1024.0**2) < \
|
||||
swap_size_MiB:
|
||||
|
||||
raise ArgumentValueError(
|
||||
'Capacity of host cache datastore \'{0}\' ({1} MiB) is '
|
||||
'smaller than the required swap size ({2} MiB)'
|
||||
''.format(existing_datastore['name'],
|
||||
existing_datastore['capacity'] / 1024.0**2,
|
||||
swap_size_MiB))
|
||||
__salt__['vsphere.configure_host_cache'](
|
||||
enabled,
|
||||
datastore['name'],
|
||||
swap_size_MiB=swap_size_MiB,
|
||||
service_instance=si)
|
||||
comments.append('Host cache configured on host '
|
||||
'\'{0}\'.'.format(hostname))
|
||||
else:
|
||||
comments.append('Host cache on host \'{0}\' is already correctly '
|
||||
'configured. Nothing to be done.'.format(hostname))
|
||||
result = True
|
||||
__salt__['vsphere.disconnect'](si)
|
||||
log.info(comments[-1])
|
||||
ret.update({'comment': '\n'.join(comments),
|
||||
'result': result})
|
||||
if __opts__['test']:
|
||||
ret['pchanges'] = changes
|
||||
else:
|
||||
ret['changes'] = changes
|
||||
return ret
|
||||
except CommandExecutionError as err:
|
||||
log.error('Error: {0}.'.format(err))
|
||||
if si:
|
||||
__salt__['vsphere.disconnect'](si)
|
||||
ret.update({
|
||||
'result': False if not __opts__['test'] else None,
|
||||
'comment': '{}.'.format(err)})
|
||||
return ret
|
||||
|
||||
|
||||
def _lookup_syslog_config(config):
|
||||
'''
|
||||
Helper function that looks up syslog_config keys available from
|
||||
|
|
501
salt/states/pbm.py
Normal file
|
@ -0,0 +1,501 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Manages VMware storage policies
|
||||
(called pbm because the vCenter endpoint is /pbm)
|
||||
|
||||
Examples
|
||||
========
|
||||
|
||||
Storage policy
|
||||
--------------
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
{
|
||||
"name": "salt_storage_policy"
|
||||
"description": "Managed by Salt. Random capability values.",
|
||||
"resource_type": "STORAGE",
|
||||
"subprofiles": [
|
||||
{
|
||||
"capabilities": [
|
||||
{
|
||||
"setting": {
|
||||
"type": "scalar",
|
||||
"value": 2
|
||||
},
|
||||
"namespace": "VSAN",
|
||||
"id": "hostFailuresToTolerate"
|
||||
},
|
||||
{
|
||||
"setting": {
|
||||
"type": "scalar",
|
||||
"value": 2
|
||||
},
|
||||
"namespace": "VSAN",
|
||||
"id": "stripeWidth"
|
||||
},
|
||||
{
|
||||
"setting": {
|
||||
"type": "scalar",
|
||||
"value": true
|
||||
},
|
||||
"namespace": "VSAN",
|
||||
"id": "forceProvisioning"
|
||||
},
|
||||
{
|
||||
"setting": {
|
||||
"type": "scalar",
|
||||
"value": 50
|
||||
},
|
||||
"namespace": "VSAN",
|
||||
"id": "proportionalCapacity"
|
||||
},
|
||||
{
|
||||
"setting": {
|
||||
"type": "scalar",
|
||||
"value": 0
|
||||
},
|
||||
"namespace": "VSAN",
|
||||
"id": "cacheReservation"
|
||||
}
|
||||
],
|
||||
"name": "Rule-Set 1: VSAN",
|
||||
"force_provision": null
|
||||
}
|
||||
],
|
||||
}
|
||||
|
||||
Dependencies
|
||||
============
|
||||
|
||||
|
||||
- pyVmomi Python Module
|
||||
|
||||
|
||||
pyVmomi
|
||||
-------
|
||||
|
||||
PyVmomi can be installed via pip:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pip install pyVmomi
|
||||
|
||||
.. note::
|
||||
|
||||
Version 6.0 of pyVmomi has some problems with SSL error handling on certain
|
||||
versions of Python. If using version 6.0 of pyVmomi, Python 2.6,
|
||||
Python 2.7.9, or newer must be present. This is due to an upstream dependency
|
||||
in pyVmomi 6.0 that is not supported in Python versions 2.7 to 2.7.8. If the
|
||||
version of Python is not in the supported range, you will need to install an
|
||||
earlier version of pyVmomi. See `Issue #29537`_ for more information.
|
||||
|
||||
.. _Issue #29537: https://github.com/saltstack/salt/issues/29537
|
||||
'''
|
||||
|
||||
# Import Python Libs
|
||||
from __future__ import absolute_import
|
||||
import logging
|
||||
import copy
|
||||
import sys
|
||||
|
||||
# Import Salt Libs
|
||||
from salt.exceptions import CommandExecutionError, ArgumentValueError
|
||||
from salt.utils.dictdiffer import recursive_diff
|
||||
from salt.utils.listdiffer import list_diff
|
||||
|
||||
# External libraries
|
||||
try:
|
||||
from pyVmomi import VmomiSupport
|
||||
HAS_PYVMOMI = True
|
||||
except ImportError:
|
||||
HAS_PYVMOMI = False
|
||||
|
||||
# Get Logging Started
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if not HAS_PYVMOMI:
|
||||
return False, 'State module did not load: pyVmomi not found'
|
||||
|
||||
# We check the supported vim versions to infer the pyVmomi version
|
||||
if 'vim25/6.0' in VmomiSupport.versionMap and \
|
||||
sys.version_info > (2, 7) and sys.version_info < (2, 7, 9):
|
||||
|
||||
return False, ('State module did not load: Incompatible versions '
|
||||
'of Python and pyVmomi present. See Issue #29537.')
|
||||
return True
|
||||
|
||||
|
||||
def mod_init(low):
|
||||
'''
|
||||
Init function
|
||||
'''
|
||||
return True
|
||||
|
||||
|
||||
def default_vsan_policy_configured(name, policy):
|
||||
'''
|
||||
Configures the default VSAN policy on a vCenter.
|
||||
The state assumes there is only one default VSAN policy on a vCenter.
|
||||
|
||||
policy
|
||||
Dict representation of a policy
|
||||
'''
|
||||
# TODO Refactor when recurse_differ supports list_differ
|
||||
# It's going to make the whole thing much easier
|
||||
policy_copy = copy.deepcopy(policy)
|
||||
proxy_type = __salt__['vsphere.get_proxy_type']()
|
||||
log.trace('proxy_type = {0}'.format(proxy_type))
|
||||
# All allowed proxies have a shim execution module with the same
|
||||
# name which implements a get_details function
|
||||
# All allowed proxies have a vcenter detail
|
||||
vcenter = __salt__['{0}.get_details'.format(proxy_type)]()['vcenter']
|
||||
log.info('Running {0} on vCenter '
|
||||
'\'{1}\''.format(name, vcenter))
|
||||
log.trace('policy = {0}'.format(policy))
|
||||
changes_required = False
|
||||
ret = {'name': name, 'changes': {}, 'result': None, 'comment': None,
|
||||
'pchanges': {}}
|
||||
comments = []
|
||||
changes = {}
|
||||
changes_required = False
|
||||
si = None
|
||||
|
||||
try:
|
||||
#TODO policy schema validation
|
||||
si = __salt__['vsphere.get_service_instance_via_proxy']()
|
||||
current_policy = __salt__['vsphere.list_default_vsan_policy'](si)
|
||||
log.trace('current_policy = {0}'.format(current_policy))
|
||||
# Building all diffs between the current and expected policy
|
||||
# XXX We simplify the comparison by assuming we have at most 1
|
||||
# sub_profile
|
||||
if policy.get('subprofiles'):
|
||||
if len(policy['subprofiles']) > 1:
|
||||
raise ArgumentValueError('Multiple sub_profiles ({0}) are not '
'supported in the input policy'
''.format(len(policy['subprofiles'])))
|
||||
subprofile = policy['subprofiles'][0]
|
||||
current_subprofile = current_policy['subprofiles'][0]
|
||||
capabilities_differ = list_diff(current_subprofile['capabilities'],
|
||||
subprofile.get('capabilities', []),
|
||||
key='id')
|
||||
del policy['subprofiles']
|
||||
if subprofile.get('capabilities'):
|
||||
del subprofile['capabilities']
|
||||
del current_subprofile['capabilities']
|
||||
# Get the subprofile diffs without the capability keys
|
||||
subprofile_differ = recursive_diff(current_subprofile,
|
||||
dict(subprofile))
|
||||
|
||||
del current_policy['subprofiles']
|
||||
policy_differ = recursive_diff(current_policy, policy)
|
||||
if policy_differ.diffs or capabilities_differ.diffs or \
|
||||
subprofile_differ.diffs:
|
||||
|
||||
if 'name' in policy_differ.new_values or \
|
||||
'description' in policy_differ.new_values:
|
||||
|
||||
raise ArgumentValueError(
|
||||
'\'name\' and \'description\' of the default VSAN policy '
|
||||
'cannot be updated')
|
||||
changes_required = True
|
||||
if __opts__['test']:
|
||||
str_changes = []
|
||||
if policy_differ.diffs:
|
||||
str_changes.extend([change for change in
|
||||
policy_differ.changes_str.split('\n')])
|
||||
if subprofile_differ.diffs or capabilities_differ.diffs:
|
||||
str_changes.append('subprofiles:')
|
||||
if subprofile_differ.diffs:
|
||||
str_changes.extend(
|
||||
[' {0}'.format(change) for change in
|
||||
subprofile_differ.changes_str.split('\n')])
|
||||
if capabilities_differ.diffs:
|
||||
str_changes.append(' capabilities:')
|
||||
str_changes.extend(
|
||||
[' {0}'.format(change) for change in
|
||||
capabilities_differ.changes_str2.split('\n')])
|
||||
comments.append(
|
||||
'State {0} will update the default VSAN policy on '
|
||||
'vCenter \'{1}\':\n{2}'
|
||||
''.format(name, vcenter, '\n'.join(str_changes)))
|
||||
else:
|
||||
__salt__['vsphere.update_storage_policy'](
|
||||
policy=current_policy['name'],
|
||||
policy_dict=policy_copy,
|
||||
service_instance=si)
|
||||
comments.append('Updated the default VSAN policy in vCenter '
|
||||
'\'{0}\''.format(vcenter))
|
||||
log.info(comments[-1])
|
||||
|
||||
new_values = policy_differ.new_values
|
||||
new_values['subprofiles'] = [subprofile_differ.new_values]
|
||||
new_values['subprofiles'][0]['capabilities'] = \
|
||||
capabilities_differ.new_values
|
||||
if not new_values['subprofiles'][0]['capabilities']:
|
||||
del new_values['subprofiles'][0]['capabilities']
|
||||
if not new_values['subprofiles'][0]:
|
||||
del new_values['subprofiles']
|
||||
old_values = policy_differ.old_values
|
||||
old_values['subprofiles'] = [subprofile_differ.old_values]
|
||||
old_values['subprofiles'][0]['capabilities'] = \
|
||||
capabilities_differ.old_values
|
||||
if not old_values['subprofiles'][0]['capabilities']:
|
||||
del old_values['subprofiles'][0]['capabilities']
|
||||
if not old_values['subprofiles'][0]:
|
||||
del old_values['subprofiles']
|
||||
changes.update({'default_vsan_policy':
|
||||
{'new': new_values,
|
||||
'old': old_values}})
|
||||
log.trace(changes)
|
||||
__salt__['vsphere.disconnect'](si)
|
||||
except CommandExecutionError as exc:
|
||||
log.error('Error: {}'.format(exc))
|
||||
if si:
|
||||
__salt__['vsphere.disconnect'](si)
|
||||
if not __opts__['test']:
|
||||
ret['result'] = False
|
||||
ret.update({'comment': exc.strerror,
|
||||
'result': False if not __opts__['test'] else None})
|
||||
return ret
|
||||
if not changes_required:
|
||||
# We have no changes
|
||||
ret.update({'comment': ('Default VSAN policy in vCenter '
|
||||
'\'{0}\' is correctly configured. '
|
||||
'Nothing to be done.'.format(vcenter)),
|
||||
'result': True})
|
||||
else:
|
||||
ret.update({'comment': '\n'.join(comments)})
|
||||
if __opts__['test']:
|
||||
ret.update({'pchanges': changes,
|
||||
'result': None})
|
||||
else:
|
||||
ret.update({'changes': changes,
|
||||
'result': True})
|
||||
return ret
|
||||
|
||||
|
||||
def storage_policies_configured(name, policies):
|
||||
'''
|
||||
Configures storage policies on a vCenter.
|
||||
|
||||
policies
|
||||
List of dict representation of the required storage policies
|
||||
'''
|
||||
comments = []
|
||||
changes = []
|
||||
changes_required = False
|
||||
ret = {'name': name, 'changes': {}, 'result': None, 'comment': None,
|
||||
'pchanges': {}}
|
||||
log.trace('policies = {0}'.format(policies))
|
||||
si = None
|
||||
try:
|
||||
proxy_type = __salt__['vsphere.get_proxy_type']()
|
||||
log.trace('proxy_type = {0}'.format(proxy_type))
|
||||
# All allowed proxies have a shim execution module with the same
|
||||
# name which implements a get_details function
|
||||
# All allowed proxies have a vcenter detail
|
||||
vcenter = __salt__['{0}.get_details'.format(proxy_type)]()['vcenter']
|
||||
log.info('Running state \'{0}\' on vCenter '
|
||||
'\'{1}\''.format(name, vcenter))
|
||||
si = __salt__['vsphere.get_service_instance_via_proxy']()
|
||||
current_policies = __salt__['vsphere.list_storage_policies'](
|
||||
policy_names=[policy['name'] for policy in policies],
|
||||
service_instance=si)
|
||||
log.trace('current_policies = {0}'.format(current_policies))
|
||||
# TODO Refactor when recurse_differ supports list_differ
|
||||
# It's going to make the whole thing much easier
|
||||
for policy in policies:
|
||||
policy_copy = copy.deepcopy(policy)
|
||||
filtered_policies = [p for p in current_policies
|
||||
if p['name'] == policy['name']]
|
||||
current_policy = filtered_policies[0] \
|
||||
if filtered_policies else None
|
||||
|
||||
if not current_policy:
|
||||
changes_required = True
|
||||
if __opts__['test']:
|
||||
comments.append('State {0} will create the storage policy '
|
||||
'\'{1}\' on vCenter \'{2}\''
|
||||
''.format(name, policy['name'], vcenter))
|
||||
else:
|
||||
__salt__['vsphere.create_storage_policy'](
|
||||
policy['name'], policy, service_instance=si)
|
||||
comments.append('Created storage policy \'{0}\' on '
|
||||
'vCenter \'{1}\''.format(policy['name'],
|
||||
vcenter))
|
||||
changes.append({'new': policy, 'old': None})
|
||||
log.trace(comments[-1])
|
||||
# Continue with next
|
||||
continue
|
||||
|
||||
# Building all diffs between the current and expected policy
|
||||
# XXX We simplify the comparison by assuming we have at most 1
|
||||
# sub_profile
|
||||
if policy.get('subprofiles'):
|
||||
if len(policy['subprofiles']) > 1:
|
||||
raise ArgumentValueError('Multiple sub_profiles ({0}) are not '
'supported in the input policy'
''.format(len(policy['subprofiles'])))
|
||||
subprofile = policy['subprofiles'][0]
|
||||
current_subprofile = current_policy['subprofiles'][0]
|
||||
capabilities_differ = list_diff(current_subprofile['capabilities'],
|
||||
subprofile.get('capabilities', []),
|
||||
key='id')
|
||||
del policy['subprofiles']
|
||||
if subprofile.get('capabilities'):
|
||||
del subprofile['capabilities']
|
||||
del current_subprofile['capabilities']
|
||||
# Get the subprofile diffs without the capability keys
|
||||
subprofile_differ = recursive_diff(current_subprofile,
|
||||
dict(subprofile))
|
||||
|
||||
del current_policy['subprofiles']
|
||||
policy_differ = recursive_diff(current_policy, policy)
|
||||
if policy_differ.diffs or capabilities_differ.diffs or \
|
||||
subprofile_differ.diffs:
|
||||
|
||||
changes_required = True
|
||||
if __opts__['test']:
|
||||
str_changes = []
|
||||
if policy_differ.diffs:
|
||||
str_changes.extend(
|
||||
[change for change in
|
||||
policy_differ.changes_str.split('\n')])
|
||||
if subprofile_differ.diffs or \
|
||||
capabilities_differ.diffs:
|
||||
|
||||
str_changes.append('subprofiles:')
|
||||
if subprofile_differ.diffs:
|
||||
str_changes.extend(
|
||||
[' {0}'.format(change) for change in
|
||||
subprofile_differ.changes_str.split('\n')])
|
||||
if capabilities_differ.diffs:
|
||||
str_changes.append(' capabilities:')
|
||||
str_changes.extend(
|
||||
[' {0}'.format(change) for change in
|
||||
capabilities_differ.changes_str2.split('\n')])
|
||||
comments.append(
|
||||
'State {0} will update the storage policy \'{1}\''
|
||||
' on vCenter \'{2}\':\n{3}'
|
||||
''.format(name, policy['name'], vcenter,
|
||||
'\n'.join(str_changes)))
|
||||
else:
|
||||
__salt__['vsphere.update_storage_policy'](
|
||||
policy=current_policy['name'],
|
||||
policy_dict=policy_copy,
|
||||
service_instance=si)
|
||||
comments.append('Updated the storage policy \'{0}\''
|
||||
' in vCenter \'{1}\''
|
||||
''.format(policy['name'], vcenter))
|
||||
log.info(comments[-1])
|
||||
|
||||
# Build new/old values to report what was changed
|
||||
new_values = policy_differ.new_values
|
||||
new_values['subprofiles'] = [subprofile_differ.new_values]
|
||||
new_values['subprofiles'][0]['capabilities'] = \
|
||||
capabilities_differ.new_values
|
||||
if not new_values['subprofiles'][0]['capabilities']:
|
||||
del new_values['subprofiles'][0]['capabilities']
|
||||
if not new_values['subprofiles'][0]:
|
||||
del new_values['subprofiles']
|
||||
old_values = policy_differ.old_values
|
||||
old_values['subprofiles'] = [subprofile_differ.old_values]
|
||||
old_values['subprofiles'][0]['capabilities'] = \
|
||||
capabilities_differ.old_values
|
||||
if not old_values['subprofiles'][0]['capabilities']:
|
||||
del old_values['subprofiles'][0]['capabilities']
|
||||
if not old_values['subprofiles'][0]:
|
||||
del old_values['subprofiles']
|
||||
changes.append({'new': new_values,
|
||||
'old': old_values})
|
||||
else:
|
||||
# No diffs found - no updates required
|
||||
comments.append('Storage policy \'{0}\' is up to date. '
|
||||
'Nothing to be done.'.format(policy['name']))
|
||||
__salt__['vsphere.disconnect'](si)
|
||||
except CommandExecutionError as exc:
|
||||
log.error('Error: {0}'.format(exc))
|
||||
if si:
|
||||
__salt__['vsphere.disconnect'](si)
|
||||
if not __opts__['test']:
|
||||
ret['result'] = False
|
||||
ret.update({'comment': exc.strerror,
|
||||
'result': False if not __opts__['test'] else None})
|
||||
return ret
|
||||
if not changes_required:
|
||||
# We have no changes
|
||||
ret.update({'comment': ('All storage policies in vCenter '
'\'{0}\' are correctly configured. '
|
||||
'Nothing to be done.'.format(vcenter)),
|
||||
'result': True})
|
||||
else:
|
||||
ret.update({'comment': '\n'.join(comments)})
|
||||
if __opts__['test']:
|
||||
ret.update({'pchanges': {'storage_policies': changes},
|
||||
'result': None})
|
||||
else:
|
||||
ret.update({'changes': {'storage_policies': changes},
|
||||
'result': True})
|
||||
return ret
|
||||
|
||||
|
||||
def default_storage_policy_assigned(name, policy, datastore):
|
||||
'''
|
||||
Assigns a default storage policy to a datastore
|
||||
|
||||
policy
|
||||
Name of storage policy
|
||||
|
||||
datastore
|
||||
Name of datastore
|
||||
'''
|
||||
log.info('Running state {0} for policy \'{1}\', datastore \'{2}\'.'
|
||||
''.format(name, policy, datastore))
|
||||
changes = {}
|
||||
changes_required = False
|
||||
ret = {'name': name, 'changes': {}, 'result': None, 'comment': None,
|
||||
'pchanges': {}}
|
||||
si = None
|
||||
try:
|
||||
si = __salt__['vsphere.get_service_instance_via_proxy']()
|
||||
existing_policy = \
|
||||
__salt__['vsphere.list_default_storage_policy_of_datastore'](
|
||||
datastore=datastore, service_instance=si)
|
||||
if existing_policy['name'] == policy:
|
||||
comment = ('Storage policy \'{0}\' is already assigned to '
|
||||
'datastore \'{1}\'. Nothing to be done.'
|
||||
''.format(policy, datastore))
|
||||
else:
|
||||
changes_required = True
|
||||
changes = {
|
||||
'default_storage_policy': {'old': existing_policy['name'],
|
||||
'new': policy}}
|
||||
if __opts__['test']:
|
||||
comment = ('State {0} will assign storage policy \'{1}\' to '
|
||||
'datastore \'{2}\'.').format(name, policy,
|
||||
datastore)
|
||||
else:
|
||||
__salt__['vsphere.assign_default_storage_policy_to_datastore'](
|
||||
policy=policy, datastore=datastore, service_instance=si)
|
||||
comment = ('Storage policy \'{0}\' was assigned to datastore '
'\'{1}\'.').format(policy, datastore)
|
||||
log.info(comment)
|
||||
except CommandExecutionError as exc:
|
||||
log.error('Error: {}'.format(exc))
|
||||
if si:
|
||||
__salt__['vsphere.disconnect'](si)
|
||||
ret.update({'comment': exc.strerror,
|
||||
'result': False if not __opts__['test'] else None})
|
||||
return ret
|
||||
ret['comment'] = comment
|
||||
if changes_required:
|
||||
if __opts__['test']:
|
||||
ret.update({'result': None,
|
||||
'pchanges': changes})
|
||||
else:
|
||||
ret.update({'result': True,
|
||||
'changes': changes})
|
||||
else:
|
||||
ret['result'] = True
|
||||
return ret
|
|
@ -36,6 +36,7 @@ def __virtual__():
|
|||
def present(name,
|
||||
pattern,
|
||||
definition,
|
||||
apply_to=None,
|
||||
priority=0,
|
||||
vhost='/',
|
||||
runas=None):
|
||||
|
@ -52,6 +53,8 @@ def present(name,
|
|||
A json dict describing the policy
|
||||
priority
|
||||
Priority (defaults to 0)
|
||||
apply_to
|
||||
Apply policy to 'queues', 'exchanges' or 'all' (default: 'all')
|
||||
vhost
|
||||
Virtual host to apply to (defaults to '/')
|
||||
runas
|
||||
|
@ -68,6 +71,8 @@ def present(name,
|
|||
updates.append('Pattern')
|
||||
if policy.get('definition') != definition:
|
||||
updates.append('Definition')
|
||||
if apply_to and (policy.get('apply-to') != apply_to):
|
||||
updates.append('Applyto')
|
||||
if int(policy.get('priority')) != priority:
|
||||
updates.append('Priority')
|
||||
|
||||
|
@ -85,6 +90,7 @@ def present(name,
|
|||
name,
|
||||
pattern,
|
||||
definition,
|
||||
apply_to,
|
||||
priority=priority,
|
||||
runas=runas)
|
||||
elif updates:
|
||||
|
@ -97,6 +103,7 @@ def present(name,
|
|||
name,
|
||||
pattern,
|
||||
definition,
|
||||
apply_to,
|
||||
priority=priority,
|
||||
runas=runas)
|
||||
|
||||
|
|
|
@ -26,10 +26,15 @@ from __future__ import absolute_import
|
|||
# Import python libs
|
||||
import fnmatch
|
||||
import logging
|
||||
import sys
|
||||
import threading
|
||||
import time
|
||||
|
||||
# Import salt libs
|
||||
import salt.syspaths
|
||||
import salt.exceptions
|
||||
import salt.output
|
||||
import salt.utils
|
||||
import salt.utils.event
|
||||
import salt.utils.versions
|
||||
from salt.ext import six
|
||||
|
@ -59,6 +64,48 @@ def _fire_args(tag_data):
|
|||
)
|
||||
|
||||
|
||||
def _parallel_map(func, inputs):
|
||||
'''
|
||||
Applies a function to each element of a list, returning the resulting list.
|
||||
|
||||
A separate thread is created for each element in the input list and the
|
||||
passed function is called for each of the elements. When all threads have
|
||||
finished execution a list with the results corresponding to the inputs is
|
||||
returned.
|
||||
|
||||
If one of the threads fails (because the function throws an exception),
|
||||
that exception is reraised. If more than one thread fails, the exception
|
||||
from the first thread (according to the index of the input element) is
|
||||
reraised.
|
||||
|
||||
func:
|
||||
function that is applied on each input element.
|
||||
inputs:
|
||||
list of elements that shall be processed. The length of this list also
|
||||
defines the number of threads created.
|
||||
'''
|
||||
outputs = len(inputs) * [None]
|
||||
errors = len(inputs) * [None]
|
||||
|
||||
def create_thread(index):
|
||||
def run_thread():
|
||||
try:
|
||||
outputs[index] = func(inputs[index])
|
||||
except: # pylint: disable=bare-except
|
||||
errors[index] = sys.exc_info()
|
||||
thread = threading.Thread(target=run_thread)
|
||||
thread.start()
|
||||
return thread
|
||||
threads = list(six.moves.map(create_thread, six.moves.range(len(inputs))))
|
||||
for thread in threads:
|
||||
thread.join()
|
||||
for error in errors:
|
||||
if error is not None:
|
||||
exc_type, exc_value, exc_traceback = error
|
||||
six.reraise(exc_type, exc_value, exc_traceback)
|
||||
return outputs
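# A minimal usage sketch of _parallel_map (illustrative only; the inputs
# below are made up): each element is processed in its own thread and the
# results come back in input order.
#
#     _parallel_map(lambda x: x * 2, [1, 2, 3])   # -> [2, 4, 6]
#
# If any thread raises, the exception from the lowest-indexed failing
# input is re-raised in the caller.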
|
||||
|
||||
|
||||
def state(name,
|
||||
tgt,
|
||||
ssh=False,
|
||||
|
@ -770,6 +817,190 @@ def runner(name, **kwargs):
|
|||
return ret
|
||||
|
||||
|
||||
def parallel_runners(name, runners):
|
||||
'''
|
||||
Executes multiple runner modules on the master in parallel.
|
||||
|
||||
.. versionadded:: 2017.x.0 (Nitrogen)
|
||||
|
||||
A separate thread is spawned for each runner. This state is intended to be
|
||||
used with the orchestrate runner in place of the ``saltmod.runner`` state
|
||||
when different tasks should be run in parallel. In general, Salt states are
|
||||
not safe when used concurrently, so ensure that they are used in a safe way
|
||||
(e.g. by only targeting separate minions in parallel tasks).
|
||||
|
||||
name:
|
||||
name identifying this state. The name is provided as part of the
|
||||
output, but not used for anything else.
|
||||
|
||||
runners:
|
||||
list of runners that should be run in parallel. Each element of the
|
||||
list has to be a dictionary. This dictionary's name entry stores the
|
||||
name of the runner function that shall be invoked. The optional kwarg
|
||||
entry stores a dictionary of named arguments that are passed to the
|
||||
runner function.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
parallel-state:
|
||||
salt.parallel_runners:
|
||||
- runners:
|
||||
my_runner_1:
|
||||
- name: state.orchestrate
|
||||
- kwarg:
|
||||
mods: orchestrate_state_1
|
||||
my_runner_2:
|
||||
- name: state.orchestrate
|
||||
- kwarg:
|
||||
mods: orchestrate_state_2
|
||||
'''
|
||||
# For the sake of consistency, we treat a single string in the same way as
|
||||
# a key without a value. This allows something like
|
||||
# salt.parallel_runners:
|
||||
# - runners:
|
||||
# state.orchestrate
|
||||
# Obviously, this will only work if the specified runner does not need any
|
||||
# arguments.
|
||||
if isinstance(runners, six.string_types):
|
||||
runners = {runners: [{name: runners}]}
|
||||
# If the runners argument is not a string, it must be a dict. Everything
|
||||
# else is considered an error.
|
||||
if not isinstance(runners, dict):
|
||||
return {
|
||||
'name': name,
|
||||
'result': False,
|
||||
'changes': {},
|
||||
'comment': 'The runners parameter must be a string or dict.'
|
||||
}
|
||||
# The configuration for each runner is given as a list of key-value pairs.
|
||||
# This is not very useful for what we want to do, but it is the typical
|
||||
# style used in Salt. For further processing, we convert each of these
|
||||
# lists to a dict. This also makes it easier to check whether a name has
|
||||
# been specified explicitly.
|
||||
for runner_id, runner_config in six.iteritems(runners):
|
||||
if runner_config is None:
|
||||
runner_config = {}
|
||||
else:
|
||||
runner_config = salt.utils.repack_dictlist(runner_config)
|
||||
if 'name' not in runner_config:
|
||||
runner_config['name'] = runner_id
|
||||
runners[runner_id] = runner_config
|
||||
|
||||
try:
|
||||
jid = __orchestration_jid__
|
||||
except NameError:
|
||||
log.debug(
|
||||
'Unable to fire args event due to missing __orchestration_jid__')
|
||||
jid = None
|
||||
|
||||
def call_runner(runner_config):
|
||||
return __salt__['saltutil.runner'](runner_config['name'],
|
||||
__orchestration_jid__=jid,
|
||||
__env__=__env__,
|
||||
full_return=True,
|
||||
**(runner_config.get('kwarg', {})))
|
||||
|
||||
try:
|
||||
outputs = _parallel_map(call_runner, list(six.itervalues(runners)))
|
||||
except salt.exceptions.SaltException as exc:
|
||||
return {
|
||||
'name': name,
|
||||
'result': False,
|
||||
'success': False,
|
||||
'changes': {},
|
||||
'comment': 'One of the runners raised an exception: {0}'.format(
|
||||
exc)
|
||||
}
|
||||
# We bundle the results of the runners with the IDs of the runners so that
|
||||
# we can easily identify which output belongs to which runner. At the same
|
||||
# time we extract the actual return value of the runner (saltutil.runner
|
||||
# adds some extra information that is not interesting to us).
|
||||
outputs = {
|
||||
runner_id: out['return'] for runner_id, out in
|
||||
six.moves.zip(six.iterkeys(runners), outputs)
|
||||
}
|
||||
|
||||
# If each of the runners returned its output in the format compatible with
|
||||
# the 'highstate' outputter, we can leverage this fact when merging the
|
||||
# outputs.
|
||||
highstate_output = all(
|
||||
[out.get('outputter', '') == 'highstate' and 'data' in out for out in
|
||||
six.itervalues(outputs)]
|
||||
)
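# A hedged sketch of the shape this check expects (assumed, not taken from
# the runner docs): each runner output roughly looks like
#   {'outputter': 'highstate', 'data': {..., 'retcode': 0}}
# when the runner produced highstate-compatible results.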
|
||||
|
||||
# The following helper function is used to extract changes from highstate
|
||||
# output.
|
||||
|
||||
def extract_changes(obj):
|
||||
if not isinstance(obj, dict):
|
||||
return {}
|
||||
elif 'changes' in obj:
|
||||
if (isinstance(obj['changes'], dict)
|
||||
and obj['changes'].get('out', '') == 'highstate'
|
||||
and 'ret' in obj['changes']):
|
||||
return obj['changes']['ret']
|
||||
else:
|
||||
return obj['changes']
|
||||
else:
|
||||
found_changes = {}
|
||||
for key, value in six.iteritems(obj):
|
||||
change = extract_changes(value)
|
||||
if change:
|
||||
found_changes[key] = change
|
||||
return found_changes
|
||||
if highstate_output:
|
||||
failed_runners = [runner_id for runner_id, out in
|
||||
six.iteritems(outputs) if
|
||||
out['data'].get('retcode', 0) != 0]
|
||||
all_successful = not failed_runners
|
||||
if all_successful:
|
||||
comment = 'All runner functions executed successfully.'
|
||||
else:
|
||||
runner_comments = [
|
||||
'Runner {0} failed with return value:\n{1}'.format(
|
||||
runner_id,
|
||||
salt.output.out_format(outputs[runner_id],
|
||||
'nested',
|
||||
__opts__,
|
||||
nested_indent=2)
|
||||
) for runner_id in failed_runners
|
||||
]
|
||||
comment = '\n'.join(runner_comments)
|
||||
changes = {}
|
||||
for runner_id, out in six.iteritems(outputs):
|
||||
runner_changes = extract_changes(out['data'])
|
||||
if runner_changes:
|
||||
changes[runner_id] = runner_changes
|
||||
else:
|
||||
failed_runners = [runner_id for runner_id, out in
|
||||
six.iteritems(outputs) if
|
||||
out.get('exit_code', 0) != 0]
|
||||
all_successful = not failed_runners
|
||||
if all_successful:
|
||||
comment = 'All runner functions executed successfully.'
|
||||
else:
|
||||
if len(failed_runners) == 1:
|
||||
comment = 'Runner {0} failed.'.format(failed_runners[0])
|
||||
else:
|
||||
comment =\
|
||||
'Runners {0} failed.'.format(', '.join(failed_runners))
|
||||
changes = {'ret': {
|
||||
runner_id: out for runner_id, out in six.iteritems(outputs)
|
||||
}}
|
||||
ret = {
|
||||
'name': name,
|
||||
'result': all_successful,
|
||||
'changes': changes,
|
||||
'comment': comment
|
||||
}
|
||||
|
||||
# The 'runner' function includes out['jid'] as '__jid__' in the returned
|
||||
# dict, but we cannot do this here because we have more than one JID if
|
||||
# we have more than one runner.
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def wheel(name, **kwargs):
|
||||
'''
|
||||
Execute a wheel module on the master
|
||||
|
|
69
salt/tops/saltclass.py
Normal file
|
@ -0,0 +1,69 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
SaltClass master_tops Module
|
||||
|
||||
.. code-block:: yaml
|
||||
master_tops:
|
||||
saltclass:
|
||||
path: /srv/saltclass
|
||||
'''
|
||||
|
||||
# import python libs
|
||||
from __future__ import absolute_import
|
||||
import logging
|
||||
|
||||
import salt.utils.saltclass as sc
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def __virtual__():
|
||||
'''
|
||||
Only run if properly configured
|
||||
'''
|
||||
if __opts__['master_tops'].get('saltclass'):
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def top(**kwargs):
|
||||
'''
|
||||
Node definitions path will be retrieved from __opts__ - or set to default -
|
||||
then added to 'salt_data' dict that is passed to the 'get_tops' function.
|
||||
The 'salt_data' dict is a convenient way to pass all the required data to the function.
|
||||
It contains:
|
||||
- __opts__
|
||||
- empty __salt__
|
||||
- __grains__
|
||||
- empty __pillar__
|
||||
- minion_id
|
||||
- path
|
||||
|
||||
If successful, the function will return a top dict for minion_id.
|
||||
'''
|
||||
# If path has not been set, make a default
|
||||
_opts = __opts__['master_tops']['saltclass']
|
||||
if 'path' not in _opts:
|
||||
path = '/srv/saltclass'
|
||||
log.warning('path variable unset, using default: {0}'.format(path))
|
||||
else:
|
||||
path = _opts['path']
|
||||
|
||||
# Create a dict that will contain our salt objects
|
||||
# to send to get_tops function
|
||||
if 'id' not in kwargs['opts']:
|
||||
log.warning('Minion id not found - Returning empty dict')
|
||||
return {}
|
||||
else:
|
||||
minion_id = kwargs['opts']['id']
|
||||
|
||||
salt_data = {
|
||||
'__opts__': kwargs['opts'],
|
||||
'__salt__': {},
|
||||
'__grains__': kwargs['grains'],
|
||||
'__pillar__': {},
|
||||
'minion_id': minion_id,
|
||||
'path': path
|
||||
}
|
||||
|
||||
return sc.get_tops(minion_id, salt_data)
|
329
salt/utils/pbm.py
Normal file
|
@ -0,0 +1,329 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Library for VMware Storage Policy management (via the pbm endpoint)
|
||||
|
||||
This library is used to manage the various policies available in VMware
|
||||
|
||||
:codeauthor: Alexandru Bleotu <alexandru.bleotu@morganstaley.com>
|
||||
|
||||
Dependencies
|
||||
~~~~~~~~~~~~
|
||||
|
||||
- pyVmomi Python Module
|
||||
|
||||
pyVmomi
|
||||
-------
|
||||
|
||||
PyVmomi can be installed via pip:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pip install pyVmomi
|
||||
|
||||
.. note::
|
||||
|
||||
Version 6.0 of pyVmomi has some problems with SSL error handling on certain
versions of Python. If using version 6.0 of pyVmomi, Python 2.6,
|
||||
Python 2.7.9, or newer must be present. This is due to an upstream dependency
|
||||
in pyVmomi 6.0 that is not supported in Python versions 2.7 to 2.7.8. If the
|
||||
version of Python is not in the supported range, you will need to install an
|
||||
earlier version of pyVmomi. See `Issue #29537`_ for more information.
|
||||
|
||||
.. _Issue #29537: https://github.com/saltstack/salt/issues/29537
|
||||
|
||||
Based on the note above, to install an earlier version of pyVmomi than the
|
||||
version currently listed in PyPi, run the following:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pip install pyVmomi==5.5.0.2014.1.1
|
||||
'''
|
||||
|
||||
# Import Python Libs
|
||||
from __future__ import absolute_import
|
||||
import logging
|
||||
|
||||
# Import Salt Libs
|
||||
import salt.utils.vmware
|
||||
from salt.exceptions import VMwareApiError, VMwareRuntimeError, \
|
||||
VMwareObjectRetrievalError
|
||||
|
||||
|
||||
try:
|
||||
from pyVmomi import pbm, vim, vmodl
|
||||
HAS_PYVMOMI = True
|
||||
except ImportError:
|
||||
HAS_PYVMOMI = False
|
||||
|
||||
|
||||
# Get Logging Started
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def __virtual__():
|
||||
'''
|
||||
Only load if PyVmomi is installed.
|
||||
'''
|
||||
if HAS_PYVMOMI:
|
||||
return True
|
||||
else:
|
||||
return False, 'Missing dependency: The salt.utils.pbm module ' \
|
||||
'requires the pyvmomi library'
|
||||
|
||||
|
||||
def get_profile_manager(service_instance):
|
||||
'''
|
||||
Returns a profile manager
|
||||
|
||||
service_instance
|
||||
Service instance to the host or vCenter
|
||||
'''
|
||||
stub = salt.utils.vmware.get_new_service_instance_stub(
|
||||
service_instance, ns='pbm/2.0', path='/pbm/sdk')
|
||||
pbm_si = pbm.ServiceInstance('ServiceInstance', stub)
|
||||
try:
|
||||
profile_manager = pbm_si.RetrieveContent().profileManager
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
return profile_manager
|
||||
|
||||
|
||||
def get_placement_solver(service_instance):
|
||||
'''
|
||||
Returns a placement solver
|
||||
|
||||
service_instance
|
||||
Service instance to the host or vCenter
|
||||
'''
|
||||
stub = salt.utils.vmware.get_new_service_instance_stub(
|
||||
service_instance, ns='pbm/2.0', path='/pbm/sdk')
|
||||
pbm_si = pbm.ServiceInstance('ServiceInstance', stub)
|
||||
try:
|
||||
placement_solver = pbm_si.RetrieveContent().placementSolver
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
return placement_solver
|
||||
|
||||
|
||||
def get_capability_definitions(profile_manager):
|
||||
'''
|
||||
Returns a list of all capability definitions.
|
||||
|
||||
profile_manager
|
||||
Reference to the profile manager.
|
||||
'''
|
||||
res_type = pbm.profile.ResourceType(
|
||||
resourceType=pbm.profile.ResourceTypeEnum.STORAGE)
|
||||
try:
|
||||
cap_categories = profile_manager.FetchCapabilityMetadata(res_type)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
cap_definitions = []
|
||||
for cat in cap_categories:
|
||||
cap_definitions.extend(cat.capabilityMetadata)
|
||||
return cap_definitions
|
||||
|
||||
|
||||
def get_policies_by_id(profile_manager, policy_ids):
|
||||
'''
|
||||
Returns a list of policies with the specified ids.
|
||||
|
||||
profile_manager
|
||||
Reference to the profile manager.
|
||||
|
||||
policy_ids
|
||||
List of policy ids to retrieve.
|
||||
'''
|
||||
try:
|
||||
return profile_manager.RetrieveContent(policy_ids)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
|
||||
|
||||
def get_storage_policies(profile_manager, policy_names=None,
|
||||
get_all_policies=False):
|
||||
'''
|
||||
Returns a list of the storage policies, filtered by name.
|
||||
|
||||
profile_manager
|
||||
Reference to the profile manager.
|
||||
|
||||
policy_names
|
||||
List of policy names to filter by.
|
||||
Default is None.
|
||||
|
||||
get_all_policies
|
||||
Flag specifying to return all policies, regardless of the specified
|
||||
filter.
|
||||
'''
|
||||
res_type = pbm.profile.ResourceType(
|
||||
resourceType=pbm.profile.ResourceTypeEnum.STORAGE)
|
||||
try:
|
||||
policy_ids = profile_manager.QueryProfile(res_type)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
log.trace('policy_ids = {0}'.format(policy_ids))
|
||||
# QueryProfile may return policies of other resource types, so filter to STORAGE policies
|
||||
policies = [p for p in get_policies_by_id(profile_manager, policy_ids)
|
||||
if p.resourceType.resourceType ==
|
||||
pbm.profile.ResourceTypeEnum.STORAGE]
|
||||
if get_all_policies:
|
||||
return policies
|
||||
if not policy_names:
|
||||
policy_names = []
|
||||
return [p for p in policies if p.name in policy_names]
|
||||
|
||||
|
||||
def create_storage_policy(profile_manager, policy_spec):
|
||||
'''
|
||||
Creates a storage policy.
|
||||
|
||||
profile_manager
|
||||
Reference to the profile manager.
|
||||
|
||||
policy_spec
|
||||
Policy creation spec.
|
||||
'''
|
||||
try:
|
||||
profile_manager.Create(policy_spec)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
|
||||
|
||||
def update_storage_policy(profile_manager, policy, policy_spec):
|
||||
'''
|
||||
Updates a storage policy.
|
||||
|
||||
profile_manager
|
||||
Reference to the profile manager.
|
||||
|
||||
policy
|
||||
Reference to the policy to be updated.
|
||||
|
||||
policy_spec
|
||||
Policy update spec.
|
||||
'''
|
||||
try:
|
||||
profile_manager.Update(policy.profileId, policy_spec)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
|
||||
|
||||
def get_default_storage_policy_of_datastore(profile_manager, datastore):
|
||||
'''
|
||||
Returns the default storage policy reference assigned to a datastore.
|
||||
|
||||
profile_manager
|
||||
Reference to the profile manager.
|
||||
|
||||
datastore
|
||||
Reference to the datastore.
|
||||
'''
|
||||
# Build a placement hub reference for the datastore
|
||||
hub = pbm.placement.PlacementHub(
|
||||
hubId=datastore._moId, hubType='Datastore')
|
||||
log.trace('placement_hub = {0}'.format(hub))
|
||||
try:
|
||||
policy_id = profile_manager.QueryDefaultRequirementProfile(hub)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
policy_refs = get_policies_by_id(profile_manager, [policy_id])
|
||||
if not policy_refs:
|
||||
raise VMwareObjectRetrievalError('Storage policy with id \'{0}\' was '
|
||||
'not found'.format(policy_id))
|
||||
return policy_refs[0]
|
||||
|
||||
|
||||
def assign_default_storage_policy_to_datastore(profile_manager, policy,
|
||||
datastore):
|
||||
'''
|
||||
Assigns a storage policy as the default policy to a datastore.
|
||||
|
||||
profile_manager
|
||||
Reference to the profile manager.
|
||||
|
||||
policy
|
||||
Reference to the policy to be assigned.
|
||||
|
||||
datastore
|
||||
Reference to the datastore.
|
||||
'''
|
||||
placement_hub = pbm.placement.PlacementHub(
|
||||
hubId=datastore._moId, hubType='Datastore')
|
||||
log.trace('placement_hub = {0}'.format(placement_hub))
|
||||
try:
|
||||
profile_manager.AssignDefaultRequirementProfile(policy.profileId,
|
||||
[placement_hub])
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
296
salt/utils/saltclass.py
Normal file
|
@ -0,0 +1,296 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
from __future__ import absolute_import
|
||||
import os
|
||||
import re
|
||||
import logging
|
||||
from salt.ext.six import iteritems
|
||||
import yaml
|
||||
from jinja2 import FileSystemLoader, Environment
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
# Renders jinja from a template file
|
||||
def render_jinja(_file, salt_data):
|
||||
j_env = Environment(loader=FileSystemLoader(os.path.dirname(_file)))
|
||||
j_env.globals.update({
|
||||
'__opts__': salt_data['__opts__'],
|
||||
'__salt__': salt_data['__salt__'],
|
||||
'__grains__': salt_data['__grains__'],
|
||||
'__pillar__': salt_data['__pillar__'],
|
||||
'minion_id': salt_data['minion_id'],
|
||||
})
|
||||
j_render = j_env.get_template(os.path.basename(_file)).render()
|
||||
return j_render
|
||||
|
||||
|
||||
# Renders yaml from rendered jinja
|
||||
def render_yaml(_file, salt_data):
|
||||
return yaml.safe_load(render_jinja(_file, salt_data))
|
||||
|
||||
|
||||
# Returns a dict from a class yaml definition
|
||||
def get_class(_class, salt_data):
|
||||
l_files = []
|
||||
saltclass_path = salt_data['path']
|
||||
|
||||
straight = '{0}/classes/{1}.yml'.format(saltclass_path, _class)
|
||||
sub_straight = '{0}/classes/{1}.yml'.format(saltclass_path,
|
||||
_class.replace('.', '/'))
|
||||
sub_init = '{0}/classes/{1}/init.yml'.format(saltclass_path,
|
||||
_class.replace('.', '/'))
|
||||
|
||||
for root, dirs, files in os.walk('{0}/classes'.format(saltclass_path)):
|
||||
for l_file in files:
|
||||
l_files.append('{0}/{1}'.format(root, l_file))
|
||||
|
||||
if straight in l_files:
|
||||
return render_yaml(straight, salt_data)
|
||||
|
||||
if sub_straight in l_files:
|
||||
return render_yaml(sub_straight, salt_data)
|
||||
|
||||
if sub_init in l_files:
|
||||
return render_yaml(sub_init, salt_data)
|
||||
|
||||
log.warning('{0}: Class definition not found'.format(_class))
|
||||
return {}
|
||||
|
||||
|
||||
# Return environment
|
||||
def get_env_from_dict(exp_dict_list):
|
||||
environment = ''
|
||||
for s_class in exp_dict_list:
|
||||
if 'environment' in s_class:
|
||||
environment = s_class['environment']
|
||||
return environment
|
||||
|
||||
|
||||
# Merge dict b into a
|
||||
def dict_merge(a, b, path=None):
|
||||
if path is None:
|
||||
path = []
|
||||
|
||||
for key in b:
|
||||
if key in a:
|
||||
if isinstance(a[key], list) and isinstance(b[key], list):
|
||||
if b[key][0] == '^':
|
||||
b[key].pop(0)
|
||||
a[key] = b[key]
|
||||
else:
|
||||
a[key].extend(b[key])
|
||||
elif isinstance(a[key], dict) and isinstance(b[key], dict):
|
||||
dict_merge(a[key], b[key], path + [str(key)])
|
||||
elif a[key] == b[key]:
|
||||
pass
|
||||
else:
|
||||
a[key] = b[key]
|
||||
else:
|
||||
a[key] = b[key]
|
||||
return a
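# Illustrative example (made-up values) of the '^' list override:
#   dict_merge({'pkgs': ['vim']}, {'pkgs': ['^', 'tmux']})
#   -> {'pkgs': ['tmux']}         # a leading '^' replaces the list
#   dict_merge({'pkgs': ['vim']}, {'pkgs': ['tmux']})
#   -> {'pkgs': ['vim', 'tmux']}  # the default behaviour is to extend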
|
||||
|
||||
|
||||
# Recursive search and replace in a dict
|
||||
def dict_search_and_replace(d, old, new, expanded):
|
||||
for (k, v) in iteritems(d):
|
||||
if isinstance(v, dict):
|
||||
dict_search_and_replace(d[k], old, new, expanded)
|
||||
if v == old:
|
||||
d[k] = new
|
||||
return d
|
||||
|
||||
|
||||
# Retrieve original value from ${xx:yy:zz} to be expanded
|
||||
def find_value_to_expand(x, v):
|
||||
a = x
|
||||
for i in v[2:-1].split(':'):
|
||||
if i in a:
|
||||
a = a.get(i)
|
||||
else:
|
||||
a = v
|
||||
return a
|
||||
return a
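# Illustrative example (made-up data): with x = {'a': {'b': 1}},
# find_value_to_expand(x, '${a:b}') walks the 'a' then 'b' keys and
# returns 1; if a key is missing, the original '${...}' string is
# returned unchanged.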
|
||||
|
||||
|
||||
# Return a dict that contains expanded variables if found
|
||||
def expand_variables(a, b, expanded, path=None):
|
||||
if path is None:
|
||||
b = a.copy()
|
||||
path = []
|
||||
|
||||
for (k, v) in iteritems(a):
|
||||
if isinstance(v, dict):
|
||||
expand_variables(v, b, expanded, path + [str(k)])
|
||||
else:
|
||||
if isinstance(v, str):
|
||||
vre = re.search(r'(^|.)\$\{.*?\}', v)
|
||||
if vre:
|
||||
re_v = vre.group(0)
|
||||
if re_v.startswith('\\'):
|
||||
v_new = v.replace(re_v, re_v.lstrip('\\'))
|
||||
b = dict_search_and_replace(b, v, v_new, expanded)
|
||||
expanded.append(k)
|
||||
elif not re_v.startswith('$'):
|
||||
v_expanded = find_value_to_expand(b, re_v[1:])
|
||||
v_new = v.replace(re_v[1:], v_expanded)
|
||||
b = dict_search_and_replace(b, v, v_new, expanded)
|
||||
expanded.append(k)
|
||||
else:
|
||||
v_expanded = find_value_to_expand(b, re_v)
|
||||
b = dict_search_and_replace(b, v, v_expanded, expanded)
|
||||
expanded.append(k)
|
||||
return b
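# Illustrative example (made-up pillar data):
#   expand_variables({'ip': '10.0.0.1', 'url': 'http://${ip}/'}, {}, [])
#   -> {'ip': '10.0.0.1', 'url': 'http://10.0.0.1/'}
# while a value written as '\${ip}' is left as the literal '${ip}'
# (the backslash escape is stripped).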
|
||||
|
||||
|
||||
def expand_classes_in_order(minion_dict,
|
||||
salt_data,
|
||||
seen_classes,
|
||||
expanded_classes,
|
||||
classes_to_expand):
|
||||
# Get classes to expand from the minion dictionary
|
||||
if not classes_to_expand and 'classes' in minion_dict:
|
||||
classes_to_expand = minion_dict['classes']
|
||||
|
||||
# Now loop on list to recursively expand them
|
||||
for klass in classes_to_expand:
|
||||
if klass not in seen_classes:
|
||||
seen_classes.append(klass)
|
||||
expanded_classes[klass] = get_class(klass, salt_data)
|
||||
# Fix corner case where class is loaded but doesn't contain anything
|
||||
if expanded_classes[klass] is None:
|
||||
expanded_classes[klass] = {}
|
||||
# Now replace class element in classes_to_expand by expansion
|
||||
if 'classes' in expanded_classes[klass]:
|
||||
l_id = classes_to_expand.index(klass)
|
||||
classes_to_expand[l_id:l_id] = expanded_classes[klass]['classes']
|
||||
expand_classes_in_order(minion_dict,
|
||||
salt_data,
|
||||
seen_classes,
|
||||
expanded_classes,
|
||||
classes_to_expand)
|
||||
else:
|
||||
expand_classes_in_order(minion_dict,
|
||||
salt_data,
|
||||
seen_classes,
|
||||
expanded_classes,
|
||||
classes_to_expand)
|
||||
|
||||
# We may have duplicates here and we want to remove them
|
||||
tmp = []
|
||||
for t_element in classes_to_expand:
|
||||
if t_element not in tmp:
|
||||
tmp.append(t_element)
|
||||
|
||||
classes_to_expand = tmp
|
||||
|
||||
# Now that we've retrieved every class in order,
|
||||
# let's return an ordered list of dicts
|
||||
ord_expanded_classes = []
|
||||
ord_expanded_states = []
|
||||
for ord_klass in classes_to_expand:
|
||||
ord_expanded_classes.append(expanded_classes[ord_klass])
|
||||
# And be smart and sort out states list
|
||||
# Address the corner case where states is empty in a class definition
|
||||
if 'states' in expanded_classes[ord_klass] and expanded_classes[ord_klass]['states'] is None:
|
||||
expanded_classes[ord_klass]['states'] = {}
|
||||
|
||||
if 'states' in expanded_classes[ord_klass]:
|
||||
ord_expanded_states.extend(expanded_classes[ord_klass]['states'])
|
||||
|
||||
# Add our minion dict as final element but check if we have states to process
|
||||
if 'states' in minion_dict and minion_dict['states'] is None:
|
||||
minion_dict['states'] = []
|
||||
|
||||
if 'states' in minion_dict:
|
||||
ord_expanded_states.extend(minion_dict['states'])
|
||||
|
||||
ord_expanded_classes.append(minion_dict)
|
||||
|
||||
return ord_expanded_classes, classes_to_expand, ord_expanded_states
|
||||
|
||||
|
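# Note (illustrative, not part of this commit): because a class's own 'classes'
# list is spliced into classes_to_expand before the class itself, parent classes
# end up earlier in the ordered result, so classes listed later (and finally the
# node definition) can override pillars set by their parents.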
||||
def expanded_dict_from_minion(minion_id, salt_data):
|
||||
_file = ''
|
||||
saltclass_path = salt_data['path']
|
||||
# Start
|
||||
for root, dirs, files in os.walk('{0}/nodes'.format(saltclass_path)):
|
||||
for minion_file in files:
|
||||
if minion_file == '{0}.yml'.format(minion_id):
|
||||
_file = os.path.join(root, minion_file)
|
||||
|
||||
# Load the minion_id definition if it exists, else an empty dict
|
||||
node_dict = {}
|
||||
if _file:
|
||||
node_dict[minion_id] = render_yaml(_file, salt_data)
|
||||
else:
|
||||
log.warning('{0}: Node definition not found'.format(minion_id))
|
||||
node_dict[minion_id] = {}
|
||||
|
||||
# Get 2 ordered lists:
|
||||
# expanded_classes: A list of all the dicts
|
||||
# classes_list: List of all the classes
|
||||
expanded_classes, classes_list, states_list = expand_classes_in_order(
|
||||
node_dict[minion_id],
|
||||
salt_data, [], {}, [])
|
||||
|
||||
# Here merge the pillars together
|
||||
pillars_dict = {}
|
||||
for exp_dict in expanded_classes:
|
||||
if 'pillars' in exp_dict:
|
||||
dict_merge(pillars_dict, exp_dict)
|
||||
|
||||
return expanded_classes, pillars_dict, classes_list, states_list
|
||||
|
||||
|
||||
def get_pillars(minion_id, salt_data):
|
||||
# Get 2 dicts and 2 lists
|
||||
# expanded_classes: Full list of expanded dicts
|
||||
# pillars_dict: dict containing merged pillars in order
|
||||
# classes_list: All classes processed in order
|
||||
# states_list: All states listed in order
|
||||
(expanded_classes,
|
||||
pillars_dict,
|
||||
classes_list,
|
||||
states_list) = expanded_dict_from_minion(minion_id, salt_data)
|
||||
|
||||
# Retrieve environment
|
||||
environment = get_env_from_dict(expanded_classes)
|
||||
|
||||
# Expand ${} variables in merged dict
|
||||
# pillars key shouldn't exist if we haven't found any minion_id ref
|
||||
if 'pillars' in pillars_dict:
|
||||
pillars_dict_expanded = expand_variables(pillars_dict['pillars'], {}, [])
|
||||
else:
|
||||
pillars_dict_expanded = expand_variables({}, {}, [])
|
||||
|
||||
# Build the final pillars dict
|
||||
pillars_dict = {}
|
||||
pillars_dict['__saltclass__'] = {}
|
||||
pillars_dict['__saltclass__']['states'] = states_list
|
||||
pillars_dict['__saltclass__']['classes'] = classes_list
|
||||
pillars_dict['__saltclass__']['environment'] = environment
|
||||
pillars_dict['__saltclass__']['nodename'] = minion_id
|
||||
pillars_dict.update(pillars_dict_expanded)
|
||||
|
||||
return pillars_dict
|
||||
|
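# Illustrative example (not part of this commit) of the dict returned above;
# every value shown here is made up:
# {
#     '__saltclass__': {'states': ['openssh'],
#                       'classes': ['default.users', 'default.motd'],
#                       'environment': 'base',
#                       'nodename': 'zrh.node3'},
#     'default': {'network': {'domain': 'example.com'}},
# }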
||||
|
||||
def get_tops(minion_id, salt_data):
|
||||
# Get 2 dicts and 2 lists
|
||||
# expanded_classes: Full list of expanded dicts
|
||||
# pillars_dict: dict containing merged pillars in order
|
||||
# classes_list: All classes processed in order
|
||||
# states_list: All states listed in order
|
||||
(expanded_classes,
|
||||
pillars_dict,
|
||||
classes_list,
|
||||
states_list) = expanded_dict_from_minion(minion_id, salt_data)
|
||||
|
||||
# Retrieve environment
|
||||
environment = get_env_from_dict(expanded_classes)
|
||||
|
||||
# Build final top dict
|
||||
tops_dict = {}
|
||||
tops_dict[environment] = states_list
|
||||
|
||||
return tops_dict
|
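# Illustrative only (not part of this commit): the returned top data maps the
# node's saltenv to its ordered state list, e.g. {'base': ['openssh', 'app']}.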
|
@ -79,6 +79,8 @@ import atexit
|
|||
import errno
|
||||
import logging
|
||||
import time
|
||||
import sys
|
||||
import ssl
|
||||
|
||||
# Import Salt Libs
|
||||
import salt.exceptions
|
||||
|
@ -92,8 +94,9 @@ import salt.utils.stringutils
|
|||
from salt.ext import six
|
||||
from salt.ext.six.moves.http_client import BadStatusLine # pylint: disable=E0611
|
||||
try:
|
||||
from pyVim.connect import GetSi, SmartConnect, Disconnect, GetStub
|
||||
from pyVmomi import vim, vmodl
|
||||
from pyVim.connect import GetSi, SmartConnect, Disconnect, GetStub, \
|
||||
SoapStubAdapter
|
||||
from pyVmomi import vim, vmodl, VmomiSupport
|
||||
HAS_PYVMOMI = True
|
||||
except ImportError:
|
||||
HAS_PYVMOMI = False
|
||||
|
@ -405,6 +408,49 @@ def get_service_instance(host, username=None, password=None, protocol=None,
|
|||
return service_instance
|
||||
|
||||
|
||||
def get_new_service_instance_stub(service_instance, path, ns=None,
|
||||
version=None):
|
||||
'''
|
||||
Returns a stub that points to a different path,
|
||||
created from an existing connection.
|
||||
|
||||
service_instance
|
||||
The Service Instance.
|
||||
|
||||
path
|
||||
Path of the new stub.
|
||||
|
||||
ns
|
||||
Namespace of the new stub.
|
||||
Default value is None
|
||||
|
||||
version
|
||||
Version of the new stub.
|
||||
Default value is None.
|
||||
'''
|
||||
# For Python 2.7.9 and later, the default SSL context has stricter
|
||||
# connection handshaking rules. We may need to turn off hostname checking
|
||||
# and client side cert verification
|
||||
context = None
|
||||
if sys.version_info[:3] > (2, 7, 8):
|
||||
context = ssl.create_default_context()
|
||||
context.check_hostname = False
|
||||
context.verify_mode = ssl.CERT_NONE
|
||||
|
||||
stub = service_instance._stub
|
||||
hostname = stub.host.split(':')[0]
|
||||
session_cookie = stub.cookie.split('"')[1]
|
||||
VmomiSupport.GetRequestContext()['vcSessionCookie'] = session_cookie
|
||||
new_stub = SoapStubAdapter(host=hostname,
|
||||
ns=ns,
|
||||
path=path,
|
||||
version=version,
|
||||
poolSize=0,
|
||||
sslContext=context)
|
||||
new_stub.cookie = stub.cookie
|
||||
return new_stub
|
||||
|
||||
|
||||
def get_service_instance_from_managed_object(mo_ref, name='<unnamed>'):
|
||||
'''
|
||||
Retrieves the service instance from a managed object.
|
||||
|
@ -1863,7 +1909,7 @@ def get_datastores(service_instance, reference, datastore_names=None,
|
|||
'is set'.format(reference.__class__.__name__))
|
||||
if (not get_all_datastores) and backing_disk_ids:
|
||||
# At this point we know the reference is a vim.HostSystem
|
||||
log.debug('Filtering datastores with backing disk ids: {}'
|
||||
log.trace('Filtering datastores with backing disk ids: {}'
|
||||
''.format(backing_disk_ids))
|
||||
storage_system = get_storage_system(service_instance, reference,
|
||||
obj_name)
|
||||
|
@ -1879,11 +1925,11 @@ def get_datastores(service_instance, reference, datastore_names=None,
|
|||
# Skip volume if it doesn't contain an extent with a
|
||||
# canonical name of interest
|
||||
continue
|
||||
log.debug('Found datastore \'{0}\' for disk id(s) \'{1}\''
|
||||
log.trace('Found datastore \'{0}\' for disk id(s) \'{1}\''
|
||||
''.format(vol.name,
|
||||
[e.diskName for e in vol.extent]))
|
||||
disk_datastores.append(vol.name)
|
||||
log.debug('Datastore found for disk filter: {}'
|
||||
log.trace('Datastore found for disk filter: {}'
|
||||
''.format(disk_datastores))
|
||||
if datastore_names:
|
||||
datastore_names.extend(disk_datastores)
|
||||
|
@ -1960,7 +2006,7 @@ def rename_datastore(datastore_ref, new_datastore_name):
|
|||
New datastore name
|
||||
'''
|
||||
ds_name = get_managed_object_name(datastore_ref)
|
||||
log.debug('Renaming datastore \'{0}\' to '
|
||||
log.trace('Renaming datastore \'{0}\' to '
|
||||
'\'{1}\''.format(ds_name, new_datastore_name))
|
||||
try:
|
||||
datastore_ref.RenameDatastore(new_datastore_name)
|
||||
|
@ -2002,6 +2048,224 @@ def get_storage_system(service_instance, host_ref, hostname=None):
|
|||
return objs[0]['object']
|
||||
|
||||
|
||||
def _get_partition_info(storage_system, device_path):
|
||||
'''
|
||||
Returns partition information for a device path, of type
|
||||
vim.HostDiskPartitionInfo
|
||||
'''
|
||||
try:
|
||||
partition_infos = \
|
||||
storage_system.RetrieveDiskPartitionInfo(
|
||||
devicePath=[device_path])
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
log.trace('partition_info = {0}'.format(partition_infos[0]))
|
||||
return partition_infos[0]
|
||||
|
||||
|
||||
def _get_new_computed_partition_spec(hostname, storage_system, device_path,
|
||||
partition_info):
|
||||
'''
|
||||
Computes the new disk partition info when adding a new vmfs partition that
|
||||
uses up the remainder of the disk; returns a tuple
|
||||
(new_partition_number, vim.HostDiskPartitionSpec)
|
||||
'''
|
||||
log.trace('Adding a partition at the end of the disk and getting the new '
|
||||
'computed partition spec')
|
||||
#TODO implement support for multiple partitions
|
||||
# We only support adding a partition at the end of the disk
|
||||
free_partitions = [p for p in partition_info.layout.partition
|
||||
if p.type == 'none']
|
||||
if not free_partitions:
|
||||
raise salt.exceptions.VMwareObjectNotFoundError(
|
||||
'Free partition was not found on device \'{0}\''
|
||||
''.format(partition_info.deviceName))
|
||||
free_partition = free_partitions[0]
|
||||
|
||||
# Create a layout object that copies the existing one
|
||||
layout = vim.HostDiskPartitionLayout(
|
||||
total=partition_info.layout.total,
|
||||
partition=partition_info.layout.partition)
|
||||
# Create a partition with the free space on the disk
|
||||
# Change the free partition type to vmfs
|
||||
free_partition.type = 'vmfs'
|
||||
try:
|
||||
computed_partition_info = storage_system.ComputeDiskPartitionInfo(
|
||||
devicePath=device_path,
|
||||
partitionFormat=vim.HostDiskPartitionInfoPartitionFormat.gpt,
|
||||
layout=layout)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
log.trace('computed partition info = {0}'
|
||||
''.format(computed_partition_info))
|
||||
log.trace('Retrieving new partition number')
|
||||
partition_numbers = [p.partition for p in
|
||||
computed_partition_info.layout.partition
|
||||
if (p.start.block == free_partition.start.block or
|
||||
# XXX If the entire disk is free (i.e. the free
|
||||
# disk partition starts at block 0) the newly
|
||||
# created partition is created from block 1
|
||||
(free_partition.start.block == 0 and
|
||||
p.start.block == 1)) and
|
||||
p.end.block == free_partition.end.block and
|
||||
p.type == 'vmfs']
|
||||
if not partition_numbers:
|
||||
raise salt.exceptions.VMwareNotFoundError(
|
||||
'New partition was not found in computed partitions of device '
|
||||
'\'{0}\''.format(partition_info.deviceName))
|
||||
log.trace('new partition number = {0}'.format(partition_numbers[0]))
|
||||
return (partition_numbers[0], computed_partition_info.spec)
|
||||
|
||||
|
||||
def create_vmfs_datastore(host_ref, datastore_name, disk_ref,
|
||||
vmfs_major_version, storage_system=None):
|
||||
'''
|
||||
Creates a VMFS datastore from a disk_id
|
||||
|
||||
host_ref
|
||||
vim.HostSystem object referencing a host to create the datastore on
|
||||
|
||||
datastore_name
|
||||
Name of the datastore
|
||||
|
||||
disk_ref
|
||||
vim.HostScsiDisk on which the datastore is created
|
||||
|
||||
vmfs_major_version
|
||||
VMFS major version to use
|
||||
'''
|
||||
# TODO Support variable sized partitions
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
disk_id = disk_ref.canonicalName
|
||||
log.debug('Creating datastore \'{0}\' on host \'{1}\', scsi disk \'{2}\', '
|
||||
'vmfs v{3}'.format(datastore_name, hostname, disk_id,
|
||||
vmfs_major_version))
|
||||
if not storage_system:
|
||||
si = get_service_instance_from_managed_object(host_ref, name=hostname)
|
||||
storage_system = get_storage_system(si, host_ref, hostname)
|
||||
|
||||
target_disk = disk_ref
|
||||
partition_info = _get_partition_info(storage_system,
|
||||
target_disk.devicePath)
|
||||
log.trace('partition_info = {0}'.format(partition_info))
|
||||
new_partition_number, partition_spec = _get_new_computed_partition_spec(
|
||||
hostname, storage_system, target_disk.devicePath, partition_info)
|
||||
spec = vim.VmfsDatastoreCreateSpec(
|
||||
vmfs=vim.HostVmfsSpec(
|
||||
majorVersion=vmfs_major_version,
|
||||
volumeName=datastore_name,
|
||||
extent=vim.HostScsiDiskPartition(
|
||||
diskName=disk_id,
|
||||
partition=new_partition_number)),
|
||||
diskUuid=target_disk.uuid,
|
||||
partition=partition_spec)
|
||||
try:
|
||||
ds_ref = \
|
||||
host_ref.configManager.datastoreSystem.CreateVmfsDatastore(spec)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
log.debug('Created datastore \'{0}\' on host '
|
||||
'\'{1}\''.format(datastore_name, hostname))
|
||||
return ds_ref
|
||||
|
||||
|
||||
def get_host_datastore_system(host_ref, hostname=None):
|
||||
'''
|
||||
Returns a host's datastore system
|
||||
|
||||
host_ref
|
||||
Reference to the ESXi host
|
||||
|
||||
hostname
|
||||
Name of the host. This argument is optional.
|
||||
'''
|
||||
|
||||
if not hostname:
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
service_instance = get_service_instance_from_managed_object(host_ref)
|
||||
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
|
||||
path='configManager.datastoreSystem',
|
||||
type=vim.HostSystem,
|
||||
skip=False)
|
||||
objs = get_mors_with_properties(service_instance,
|
||||
vim.HostDatastoreSystem,
|
||||
property_list=['datastore'],
|
||||
container_ref=host_ref,
|
||||
traversal_spec=traversal_spec)
|
||||
if not objs:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'Host\'s \'{0}\' datastore system was not retrieved'
|
||||
''.format(hostname))
|
||||
log.trace('[{0}] Retrieved datastore system'.format(hostname))
|
||||
return objs[0]['object']
|
||||
|
||||
|
||||
def remove_datastore(service_instance, datastore_ref):
|
||||
'''
|
||||
Removes a datastore
|
||||
|
||||
service_instance
|
||||
The Service Instance Object containing the datastore
|
||||
|
||||
datastore_ref
|
||||
The reference to the datastore to remove
|
||||
'''
|
||||
ds_props = get_properties_of_managed_object(
|
||||
datastore_ref, ['host', 'info', 'name'])
|
||||
ds_name = ds_props['name']
|
||||
log.debug('Removing datastore \'{}\''.format(ds_name))
|
||||
ds_info = ds_props['info']
|
||||
ds_hosts = ds_props.get('host')
|
||||
if not ds_hosts:
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Datastore \'{0}\' can\'t be removed. No '
|
||||
'attached hosts found'.format(ds_name))
|
||||
hostname = get_managed_object_name(ds_hosts[0].key)
|
||||
host_ds_system = get_host_datastore_system(ds_hosts[0].key,
|
||||
hostname=hostname)
|
||||
try:
|
||||
host_ds_system.RemoveDatastore(datastore_ref)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
log.trace('[{0}] Removed datastore \'{1}\''.format(hostname, ds_name))
|
||||
|
||||
|
||||
def get_hosts(service_instance, datacenter_name=None, host_names=None,
|
||||
cluster_name=None, get_all_hosts=False):
|
||||
'''
|
||||
|
@ -2026,44 +2290,541 @@ def get_hosts(service_instance, datacenter_name=None, host_names=None,
|
|||
Default value is False.
|
||||
'''
|
||||
properties = ['name']
|
||||
if cluster_name and not datacenter_name:
|
||||
raise salt.exceptions.ArgumentValueError(
|
||||
'Must specify the datacenter when specifying the cluster')
|
||||
if not host_names:
|
||||
host_names = []
|
||||
if cluster_name:
|
||||
properties.append('parent')
|
||||
if datacenter_name:
|
||||
if not datacenter_name:
|
||||
# Assume the root folder is the starting point
|
||||
start_point = get_root_folder(service_instance)
|
||||
else:
|
||||
start_point = get_datacenter(service_instance, datacenter_name)
|
||||
if cluster_name:
|
||||
# Retrieval to test if cluster exists. Cluster existence only makes
|
||||
# sense if the cluster has been specified
|
||||
# sense if the datacenter has been specified
|
||||
cluster = get_cluster(start_point, cluster_name)
|
||||
else:
|
||||
# Assume the root folder is the starting point
|
||||
start_point = get_root_folder(service_instance)
|
||||
properties.append('parent')
|
||||
|
||||
# Search for the objects
|
||||
hosts = get_mors_with_properties(service_instance,
|
||||
vim.HostSystem,
|
||||
container_ref=start_point,
|
||||
property_list=properties)
|
||||
log.trace('Retrieved hosts: {0}'.format(h['name'] for h in hosts))
|
||||
filtered_hosts = []
|
||||
for h in hosts:
|
||||
# Complex conditions checking if a host should be added to the
|
||||
# filtered list (either due to its name and/or cluster membership)
|
||||
name_condition = get_all_hosts or (h['name'] in host_names)
|
||||
# the datacenter_name needs to be set in order for the cluster
|
||||
# condition membership to be checked, otherwise the condition is
|
||||
# ignored
|
||||
cluster_condition = \
|
||||
(not datacenter_name or not cluster_name or
|
||||
(isinstance(h['parent'], vim.ClusterComputeResource) and
|
||||
h['parent'].name == cluster_name))
|
||||
|
||||
if name_condition and cluster_condition:
|
||||
if cluster_name:
|
||||
if not isinstance(h['parent'], vim.ClusterComputeResource):
|
||||
continue
|
||||
parent_name = get_managed_object_name(h['parent'])
|
||||
if parent_name != cluster_name:
|
||||
continue
|
||||
|
||||
if get_all_hosts:
|
||||
filtered_hosts.append(h['object'])
|
||||
continue
|
||||
|
||||
if h['name'] in host_names:
|
||||
filtered_hosts.append(h['object'])
|
||||
return filtered_hosts
|
||||
|
||||
|
||||
def _get_scsi_address_to_lun_key_map(service_instance,
|
||||
host_ref,
|
||||
storage_system=None,
|
||||
hostname=None):
|
||||
'''
|
||||
Returns a map between the scsi addresses and the keys of all luns on an ESXi
|
||||
host.
|
||||
map[<scsi_address>] = <lun key>
|
||||
|
||||
service_instance
|
||||
The Service Instance Object from which to obtain the hosts
|
||||
|
||||
host_ref
|
||||
The vim.HostSystem object representing the host that contains the
|
||||
requested disks.
|
||||
|
||||
storage_system
|
||||
The host's storage system. Default is None.
|
||||
|
||||
hostname
|
||||
Name of the host. Default is None.
|
||||
'''
|
||||
map = {}
|
||||
if not hostname:
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
if not storage_system:
|
||||
storage_system = get_storage_system(service_instance, host_ref,
|
||||
hostname)
|
||||
try:
|
||||
device_info = storage_system.storageDeviceInfo
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
if not device_info:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'Host\'s \'{0}\' storage device '
|
||||
'info was not retrieved'.format(hostname))
|
||||
multipath_info = device_info.multipathInfo
|
||||
if not multipath_info:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'Host\'s \'{0}\' multipath info was not retrieved'
|
||||
''.format(hostname))
|
||||
if multipath_info.lun is None:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'No luns were retrieved from host \'{0}\''.format(hostname))
|
||||
lun_key_by_scsi_addr = {}
|
||||
for l in multipath_info.lun:
|
||||
# The vmware scsi_address may have multiple comma separated values
|
||||
# The first one is the actual scsi address
|
||||
lun_key_by_scsi_addr.update({p.name.split(',')[0]: l.lun
|
||||
for p in l.path})
|
||||
log.trace('Scsi address to lun id map on host \'{0}\': '
|
||||
'{1}'.format(hostname, lun_key_by_scsi_addr))
|
||||
return lun_key_by_scsi_addr
|
||||
|
||||
|
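# Illustrative sketch (not part of this commit): only the first comma separated
# field of a multipath path name is used as the scsi address key above. The
# value below is made up.
path_name = 'vmhba1:C0:T0:L4,extra-field'
scsi_address = path_name.split(',')[0]
# scsi_address == 'vmhba1:C0:T0:L4'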
||||
def get_all_luns(host_ref, storage_system=None, hostname=None):
|
||||
'''
|
||||
Returns a list of all scsi LUNs (vim.ScsiLun objects) on an ESXi host
|
||||
|
||||
host_ref
|
||||
The vim.HostSystem object representing the host that contains the
|
||||
requested disks.
|
||||
|
||||
storage_system
|
||||
The host's storage system. Default is None.
|
||||
|
||||
hostname
|
||||
Name of the host. This argument is optional.
|
||||
'''
|
||||
if not hostname:
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
if not storage_system:
|
||||
si = get_service_instance_from_managed_object(host_ref, name=hostname)
|
||||
storage_system = get_storage_system(si, host_ref, hostname)
|
||||
if not storage_system:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'Host\'s \'{0}\' storage system was not retrieved'
|
||||
''.format(hostname))
|
||||
try:
|
||||
device_info = storage_system.storageDeviceInfo
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
if not device_info:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'Host\'s \'{0}\' storage device info was not retrieved'
|
||||
''.format(hostname))
|
||||
|
||||
scsi_luns = device_info.scsiLun
|
||||
if scsi_luns:
|
||||
log.trace('Retrieved scsi luns in host \'{0}\': {1}'
|
||||
''.format(hostname, [l.canonicalName for l in scsi_luns]))
|
||||
return scsi_luns
|
||||
log.trace('Retrieved no scsi_luns in host \'{0}\''.format(hostname))
|
||||
return []
|
||||
|
||||
|
||||
def get_scsi_address_to_lun_map(host_ref, storage_system=None, hostname=None):
|
||||
'''
|
||||
Returns a map of all vim.ScsiLun objects on an ESXi host keyed by their
|
||||
scsi address
|
||||
|
||||
host_ref
|
||||
The vim.HostSystem object representing the host that contains the
|
||||
requested disks.
|
||||
|
||||
storage_system
|
||||
The host's storage system. Default is None.
|
||||
|
||||
hostname
|
||||
Name of the host. This argument is optional.
|
||||
'''
|
||||
if not hostname:
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
si = get_service_instance_from_managed_object(host_ref, name=hostname)
|
||||
if not storage_system:
|
||||
storage_system = get_storage_system(si, host_ref, hostname)
|
||||
lun_ids_to_scsi_addr_map = \
|
||||
_get_scsi_address_to_lun_key_map(si, host_ref, storage_system,
|
||||
hostname)
|
||||
luns_to_key_map = {d.key: d for d in
|
||||
get_all_luns(host_ref, storage_system, hostname)}
|
||||
return {scsi_addr: luns_to_key_map[lun_key] for scsi_addr, lun_key in
|
||||
six.iteritems(lun_ids_to_scsi_addr_map)}
|
||||
|
||||
|
||||
def get_disks(host_ref, disk_ids=None, scsi_addresses=None,
|
||||
get_all_disks=False):
|
||||
'''
|
||||
Returns a list of vim.HostScsiDisk objects representing disks
|
||||
in an ESXi host, filtered by their canonical names and scsi_addresses
|
||||
|
||||
host_ref
|
||||
The vim.HostSystem object representing the host that contains the
|
||||
requested disks.
|
||||
|
||||
disk_ids
|
||||
The list of canonical names of the disks to be retrieved. Default value
|
||||
is None
|
||||
|
||||
scsi_addresses
|
||||
The list of scsi addresses of the disks to be retrieved. Default value
|
||||
is None
|
||||
|
||||
get_all_disks
|
||||
Specifies whether to retrieve all disks in the host.
|
||||
Default value is False.
|
||||
'''
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
if get_all_disks:
|
||||
log.trace('Retrieving all disks in host \'{0}\''.format(hostname))
|
||||
else:
|
||||
log.trace('Retrieving disks in host \'{0}\': ids = ({1}); scsi '
|
||||
'addresses = ({2})'.format(hostname, disk_ids,
|
||||
scsi_addresses))
|
||||
if not (disk_ids or scsi_addresses):
|
||||
return []
|
||||
si = get_service_instance_from_managed_object(host_ref, name=hostname)
|
||||
storage_system = get_storage_system(si, host_ref, hostname)
|
||||
disk_keys = []
|
||||
if scsi_addresses:
|
||||
# convert the scsi addresses to disk keys
|
||||
lun_key_by_scsi_addr = _get_scsi_address_to_lun_key_map(si, host_ref,
|
||||
storage_system,
|
||||
hostname)
|
||||
disk_keys = [key for scsi_addr, key
|
||||
in six.iteritems(lun_key_by_scsi_addr)
|
||||
if scsi_addr in scsi_addresses]
|
||||
log.trace('disk_keys based on scsi_addresses = {0}'.format(disk_keys))
|
||||
|
||||
scsi_luns = get_all_luns(host_ref, storage_system)
|
||||
scsi_disks = [disk for disk in scsi_luns
|
||||
if isinstance(disk, vim.HostScsiDisk) and (
|
||||
get_all_disks or
|
||||
# Filter by canonical name
|
||||
(disk_ids and (disk.canonicalName in disk_ids)) or
|
||||
# Filter by disk keys from scsi addresses
|
||||
(disk.key in disk_keys))]
|
||||
log.trace('Retrieved disks in host \'{0}\': {1}'
|
||||
''.format(hostname, [d.canonicalName for d in scsi_disks]))
|
||||
return scsi_disks
|
||||
|
||||
|
||||
def get_disk_partition_info(host_ref, disk_id, storage_system=None):
|
||||
'''
|
||||
Returns all partitions on a disk
|
||||
|
||||
host_ref
|
||||
The reference of the ESXi host containing the disk
|
||||
|
||||
disk_id
|
||||
The canonical name of the disk whose partition information is to be retrieved
|
||||
|
||||
storage_system
|
||||
The ESXi host's storage system. Default is None.
|
||||
'''
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
service_instance = get_service_instance_from_managed_object(host_ref)
|
||||
if not storage_system:
|
||||
storage_system = get_storage_system(service_instance, host_ref,
|
||||
hostname)
|
||||
|
||||
props = get_properties_of_managed_object(storage_system,
|
||||
['storageDeviceInfo.scsiLun'])
|
||||
if not props.get('storageDeviceInfo.scsiLun'):
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'No devices were retrieved in host \'{0}\''.format(hostname))
|
||||
log.trace('[{0}] Retrieved {1} devices: {2}'.format(
|
||||
hostname, len(props['storageDeviceInfo.scsiLun']),
|
||||
', '.join([l.canonicalName
|
||||
for l in props['storageDeviceInfo.scsiLun']])))
|
||||
disks = [l for l in props['storageDeviceInfo.scsiLun']
|
||||
if isinstance(l, vim.HostScsiDisk) and
|
||||
l.canonicalName == disk_id]
|
||||
if not disks:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'Disk \'{0}\' was not found in host \'{1}\''
|
||||
''.format(disk_id, hostname))
|
||||
log.trace('[{0}] device_path = {1}'.format(hostname, disks[0].devicePath))
|
||||
partition_info = _get_partition_info(storage_system, disks[0].devicePath)
|
||||
log.trace('[{0}] Retrieved {1} partition(s) on disk \'{2}\''
|
||||
''.format(hostname, len(partition_info.spec.partition), disk_id))
|
||||
return partition_info
|
||||
|
||||
|
||||
def erase_disk_partitions(service_instance, host_ref, disk_id,
|
||||
hostname=None, storage_system=None):
|
||||
'''
|
||||
Erases all partitions on a disk
|
||||
|
||||
|
||||
|
||||
service_instance
|
||||
The Service Instance Object from which to obtain all information
|
||||
|
||||
host_ref
|
||||
The reference of the ESXi host containing the disk
|
||||
|
||||
disk_id
|
||||
The canonical name of the disk whose partitions are to be removed
|
||||
|
||||
hostname
|
||||
The ESXi hostname. Default is None.
|
||||
|
||||
storage_system
|
||||
The ESXi host's storage system. Default is None.
|
||||
'''
|
||||
|
||||
if not hostname:
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
if not storage_system:
|
||||
storage_system = get_storage_system(service_instance, host_ref,
|
||||
hostname)
|
||||
|
||||
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
|
||||
path='configManager.storageSystem',
|
||||
type=vim.HostSystem,
|
||||
skip=False)
|
||||
results = get_mors_with_properties(service_instance,
|
||||
vim.HostStorageSystem,
|
||||
['storageDeviceInfo.scsiLun'],
|
||||
container_ref=host_ref,
|
||||
traversal_spec=traversal_spec)
|
||||
if not results:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'Host\'s \'{0}\' devices were not retrieved'.format(hostname))
|
||||
log.trace('[{0}] Retrieved {1} devices: {2}'.format(
|
||||
hostname, len(results[0].get('storageDeviceInfo.scsiLun', [])),
|
||||
', '.join([l.canonicalName for l in
|
||||
results[0].get('storageDeviceInfo.scsiLun', [])])))
|
||||
disks = [l for l in results[0].get('storageDeviceInfo.scsiLun', [])
|
||||
if isinstance(l, vim.HostScsiDisk) and
|
||||
l.canonicalName == disk_id]
|
||||
if not disks:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'Disk \'{0}\' was not found in host \'{1}\''
|
||||
''.format(disk_id, hostname))
|
||||
log.trace('[{0}] device_path = {1}'.format(hostname, disks[0].devicePath))
|
||||
# Erase the partitions by setting an empty partition spec
|
||||
try:
|
||||
storage_system.UpdateDiskPartitions(disks[0].devicePath,
|
||||
vim.HostDiskPartitionSpec())
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
log.trace('[{0}] Erased partitions on disk \'{1}\''
|
||||
''.format(hostname, disk_id))
|
||||
|
||||
|
||||
def get_diskgroups(host_ref, cache_disk_ids=None, get_all_disk_groups=False):
|
||||
'''
|
||||
Returns a list of vim.VsanHostDiskMapping objects representing disks
|
||||
in an ESXi host, filtered by their canonical names.
|
||||
|
||||
host_ref
|
||||
The vim.HostSystem object representing the host that contains the
|
||||
requested disks.
|
||||
|
||||
cache_disk_ids
|
||||
The list of canonical names of the cache disks to be retrieved. The
|
||||
canonical name of the cache disk is enough to identify the disk group
|
||||
because it is guaranteed to have one and only one cache disk.
|
||||
Default is None.
|
||||
|
||||
get_all_disk_groups
|
||||
Specifies whether to retrieve all disks groups in the host.
|
||||
Default value is False.
|
||||
'''
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
if get_all_disk_groups:
|
||||
log.trace('Retrieving all disk groups on host \'{0}\''
|
||||
''.format(hostname))
|
||||
else:
|
||||
log.trace('Retrieving disk groups from host \'{0}\', with cache disk '
|
||||
'ids : ({1})'.format(hostname, cache_disk_ids))
|
||||
if not cache_disk_ids:
|
||||
return []
|
||||
try:
|
||||
vsan_host_config = host_ref.config.vsanHostConfig
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
if not vsan_host_config:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'No host config found on host \'{0}\''.format(hostname))
|
||||
vsan_storage_info = vsan_host_config.storageInfo
|
||||
if not vsan_storage_info:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'No vsan storage info found on host \'{0}\''.format(hostname))
|
||||
vsan_disk_mappings = vsan_storage_info.diskMapping
|
||||
if not vsan_disk_mappings:
|
||||
return []
|
||||
disk_groups = [dm for dm in vsan_disk_mappings if
|
||||
(get_all_disk_groups or
|
||||
(dm.ssd.canonicalName in cache_disk_ids))]
|
||||
log.trace('Retrieved disk groups on host \'{0}\', with cache disk ids : '
|
||||
'{1}'.format(hostname,
|
||||
[d.ssd.canonicalName for d in disk_groups]))
|
||||
return disk_groups
|
||||
|
||||
|
||||
def _check_disks_in_diskgroup(disk_group, cache_disk_id, capacity_disk_ids):
|
||||
'''
|
||||
Checks that the disks in a disk group are as expected and raises
|
||||
CheckError exceptions if the check fails
|
||||
'''
|
||||
if not disk_group.ssd.canonicalName == cache_disk_id:
|
||||
raise salt.exceptions.ArgumentValueError(
|
||||
'Incorrect diskgroup cache disk; got id: \'{0}\'; expected id: '
|
||||
'\'{1}\''.format(disk_group.ssd.canonicalName, cache_disk_id))
|
||||
if sorted([d.canonicalName for d in disk_group.nonSsd]) != \
|
||||
sorted(capacity_disk_ids):
|
||||
|
||||
raise salt.exceptions.ArgumentValueError(
|
||||
'Incorrect capacity disks; got ids: \'{0}\'; expected ids: \'{1}\''
|
||||
''.format(sorted([d.canonicalName for d in disk_group.nonSsd]),
|
||||
sorted(capacity_disk_ids)))
|
||||
log.trace('Checked disks in diskgroup with cache disk id \'{0}\''
|
||||
''.format(cache_disk_id))
|
||||
return True
|
||||
|
||||
|
||||
#TODO Support host caches on multiple datastores
|
||||
def get_host_cache(host_ref, host_cache_manager=None):
|
||||
'''
|
||||
Returns a vim.HostScsiDisk if the host cache is configured on the specified
|
||||
host, otherwise returns None
|
||||
|
||||
host_ref
|
||||
The vim.HostSystem object representing the host that contains the
|
||||
requested disks.
|
||||
|
||||
host_cache_manager
|
||||
The vim.HostCacheConfigurationManager object representing the cache
|
||||
configuration manager on the specified host. Default is None. If None,
|
||||
it will be retrieved in the method
|
||||
'''
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
service_instance = get_service_instance_from_managed_object(host_ref)
|
||||
log.trace('Retrieving the host cache on host \'{0}\''.format(hostname))
|
||||
if not host_cache_manager:
|
||||
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
|
||||
path='configManager.cacheConfigurationManager',
|
||||
type=vim.HostSystem,
|
||||
skip=False)
|
||||
results = get_mors_with_properties(service_instance,
|
||||
vim.HostCacheConfigurationManager,
|
||||
['cacheConfigurationInfo'],
|
||||
container_ref=host_ref,
|
||||
traversal_spec=traversal_spec)
|
||||
if not results or not results[0].get('cacheConfigurationInfo'):
|
||||
log.trace('Host \'{0}\' has no host cache'.format(hostname))
|
||||
return None
|
||||
return results[0]['cacheConfigurationInfo'][0]
|
||||
else:
|
||||
results = get_properties_of_managed_object(host_cache_manager,
|
||||
['cacheConfigurationInfo'])
|
||||
if not results:
|
||||
log.trace('Host \'{0}\' has no host cache'.format(hostname))
|
||||
return None
|
||||
return results['cacheConfigurationInfo'][0]
|
||||
|
||||
|
||||
#TODO Support host caches on multiple datastores
|
||||
def configure_host_cache(host_ref, datastore_ref, swap_size_MiB,
|
||||
host_cache_manager=None):
|
||||
'''
|
||||
Configures the host cache of the specified host
|
||||
|
||||
host_ref
|
||||
The vim.HostSystem object representing the host that contains the
|
||||
requested disks.
|
||||
|
||||
datastore_ref
|
||||
The vim.Datastore object representing the datastore the host cache will
|
||||
be configured on.
|
||||
|
||||
swap_size_MiB
|
||||
The size of the swap in mebibytes (MiB).
|
||||
|
||||
host_cache_manager
|
||||
The vim.HostCacheConfigurationManager object representing the cache
|
||||
configuration manager on the specified host. Default is None. If None,
|
||||
it will be retrieved in the method
|
||||
'''
|
||||
hostname = get_managed_object_name(host_ref)
|
||||
if not host_cache_manager:
|
||||
props = get_properties_of_managed_object(
|
||||
host_ref, ['configManager.cacheConfigurationManager'])
|
||||
if not props.get('configManager.cacheConfigurationManager'):
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'Host \'{0}\' has no host cache'.format(hostname))
|
||||
host_cache_manager = props['configManager.cacheConfigurationManager']
|
||||
log.trace('Configuring the host cache on host \'{0}\', datastore \'{1}\', '
|
||||
'swap size={2} MiB'.format(hostname, datastore_ref.name,
|
||||
swap_size_MiB))
|
||||
|
||||
spec = vim.HostCacheConfigurationSpec(
|
||||
datastore=datastore_ref,
|
||||
swapSize=swap_size_MiB)
|
||||
log.trace('host_cache_spec={0}'.format(spec))
|
||||
try:
|
||||
task = host_cache_manager.ConfigureHostCache_Task(spec)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
wait_for_task(task, hostname, 'HostCacheConfigurationTask')
|
||||
log.trace('Configured host cache on host \'{0}\''.format(hostname))
|
||||
return True
|
||||
|
||||
|
||||
def list_hosts(service_instance):
|
||||
'''
|
||||
Returns a list of hosts associated with a given service instance.
|
||||
|
|
|
@ -49,7 +49,8 @@ import logging
|
|||
import ssl
|
||||
|
||||
# Import Salt Libs
|
||||
from salt.exceptions import VMwareApiError, VMwareRuntimeError
|
||||
from salt.exceptions import VMwareApiError, VMwareRuntimeError, \
|
||||
VMwareObjectRetrievalError
|
||||
import salt.utils.vmware
|
||||
|
||||
try:
|
||||
|
@ -129,6 +130,308 @@ def get_vsan_cluster_config_system(service_instance):
|
|||
return vc_mos['vsan-cluster-config-system']
|
||||
|
||||
|
||||
def get_vsan_disk_management_system(service_instance):
|
||||
'''
|
||||
Returns a vim.VimClusterVsanVcDiskManagementSystem object
|
||||
|
||||
service_instance
|
||||
Service instance to the host or vCenter
|
||||
'''
|
||||
|
||||
#TODO Replace when better connection mechanism is available
|
||||
|
||||
# For Python 2.7.9 and later, the default SSL context has stricter
|
||||
# connection handshaking rules. We may need to turn off hostname checking
|
||||
# and client side cert verification
|
||||
context = None
|
||||
if sys.version_info[:3] > (2, 7, 8):
|
||||
context = ssl.create_default_context()
|
||||
context.check_hostname = False
|
||||
context.verify_mode = ssl.CERT_NONE
|
||||
|
||||
stub = service_instance._stub
|
||||
vc_mos = vsanapiutils.GetVsanVcMos(stub, context=context)
|
||||
return vc_mos['vsan-disk-management-system']
|
||||
|
||||
|
||||
def get_host_vsan_system(service_instance, host_ref, hostname=None):
|
||||
'''
|
||||
Returns a host's vsan system
|
||||
|
||||
service_instance
|
||||
Service instance to the host or vCenter
|
||||
|
||||
host_ref
|
||||
Reference to ESXi host
|
||||
|
||||
hostname
|
||||
Name of ESXi host. Default value is None.
|
||||
'''
|
||||
if not hostname:
|
||||
hostname = salt.utils.vmware.get_managed_object_name(host_ref)
|
||||
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
|
||||
path='configManager.vsanSystem',
|
||||
type=vim.HostSystem,
|
||||
skip=False)
|
||||
objs = salt.utils.vmware.get_mors_with_properties(
|
||||
service_instance, vim.HostVsanSystem, property_list=['config.enabled'],
|
||||
container_ref=host_ref, traversal_spec=traversal_spec)
|
||||
if not objs:
|
||||
raise VMwareObjectRetrievalError('Host\'s \'{0}\' VSAN system was '
|
||||
'not retrieved'.format(hostname))
|
||||
log.trace('[{0}] Retrieved VSAN system'.format(hostname))
|
||||
return objs[0]['object']
|
||||
|
||||
|
||||
def create_diskgroup(service_instance, vsan_disk_mgmt_system,
|
||||
host_ref, cache_disk, capacity_disks):
|
||||
'''
|
||||
Creates a disk group
|
||||
|
||||
service_instance
|
||||
Service instance to the host or vCenter
|
||||
|
||||
vsan_disk_mgmt_system
|
||||
vim.VimClusterVsanVcDiskManagementSystem representing the vSAN disk
|
||||
management system retrieved from the vsan endpoint.
|
||||
|
||||
host_ref
|
||||
vim.HostSystem object representing the target host the disk group will
|
||||
be created on
|
||||
|
||||
cache_disk
|
||||
The vim.HostScsiDisk to be used as a cache disk. It must be an ssd disk.
|
||||
|
||||
capacity_disks
|
||||
List of vim.HostScsiDisk objects representing the disks to be used as
|
||||
capacity disks. Can be either ssd or non-ssd. There must be a minimum
|
||||
of 1 capacity disk in the list.
|
||||
'''
|
||||
hostname = salt.utils.vmware.get_managed_object_name(host_ref)
|
||||
cache_disk_id = cache_disk.canonicalName
|
||||
log.debug('Creating a new disk group with cache disk \'{0}\' on host '
|
||||
'\'{1}\''.format(cache_disk_id, hostname))
|
||||
log.trace('capacity_disk_ids = {0}'.format([c.canonicalName for c in
|
||||
capacity_disks]))
|
||||
spec = vim.VimVsanHostDiskMappingCreationSpec()
|
||||
spec.cacheDisks = [cache_disk]
|
||||
spec.capacityDisks = capacity_disks
|
||||
# All capacity disks must be either ssd or non-ssd (mixed disks are not
|
||||
# supported)
|
||||
spec.creationType = 'allFlash' if getattr(capacity_disks[0], 'ssd') \
|
||||
else 'hybrid'
|
||||
spec.host = host_ref
|
||||
try:
|
||||
task = vsan_disk_mgmt_system.InitializeDiskMappings(spec)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.fault.MethodNotFound as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError('Method \'{0}\' not found'.format(exc.method))
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
_wait_for_tasks([task], service_instance)
|
||||
return True
|
||||
|
||||
|
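# Hypothetical usage (not part of this commit; the object names are made up):
#   dm_system = get_vsan_disk_management_system(service_instance)
#   create_diskgroup(service_instance, dm_system, host_ref,
#                    cache_disk=ssd_disk, capacity_disks=[disk1, disk2])
# The creation type is 'allFlash' when the first capacity disk is an ssd,
# otherwise 'hybrid'.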
||||
def add_capacity_to_diskgroup(service_instance, vsan_disk_mgmt_system,
|
||||
host_ref, diskgroup, new_capacity_disks):
|
||||
'''
|
||||
Adds capacity disk(s) to a disk group.
|
||||
|
||||
service_instance
|
||||
Service instance to the host or vCenter
|
||||
|
||||
vsan_disk_mgmt_system
|
||||
vim.VimClusterVsanVcDiskManagementSystem representing the vSAN disk
|
||||
management system retrieved from the vsan endpoint.
|
||||
|
||||
host_ref
|
||||
vim.HostSystem object representing the target host the disk group will
|
||||
be created on
|
||||
|
||||
diskgroup
|
||||
The vsan.HostDiskMapping object representing the host's diskgroup where
|
||||
the additional capacity needs to be added
|
||||
|
||||
new_capacity_disks
|
||||
List of vim.HostScsiDisk objects representing the disks to be added as
|
||||
capacity disks. Can be either ssd or non-ssd. There must be a minimum
|
||||
of 1 new capacity disk in the list.
|
||||
'''
|
||||
hostname = salt.utils.vmware.get_managed_object_name(host_ref)
|
||||
cache_disk = diskgroup.ssd
|
||||
cache_disk_id = cache_disk.canonicalName
|
||||
log.debug('Adding capacity to disk group with cache disk \'{0}\' on host '
|
||||
'\'{1}\''.format(cache_disk_id, hostname))
|
||||
log.trace('new_capacity_disk_ids = {0}'.format([c.canonicalName for c in
|
||||
new_capacity_disks]))
|
||||
spec = vim.VimVsanHostDiskMappingCreationSpec()
|
||||
spec.cacheDisks = [cache_disk]
|
||||
spec.capacityDisks = new_capacity_disks
|
||||
# All new capacity disks must be either ssd or non-ssd (mixed disks are not
|
||||
# supported); also they need to match the type of the existing capacity
|
||||
# disks; we assume disks are already validated
|
||||
spec.creationType = 'allFlash' if getattr(new_capacity_disks[0], 'ssd') \
|
||||
else 'hybrid'
|
||||
spec.host = host_ref
|
||||
try:
|
||||
task = vsan_disk_mgmt_system.InitializeDiskMappings(spec)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.fault.MethodNotFound as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError('Method \'{0}\' not found'.format(exc.method))
|
||||
except vmodl.RuntimeFault as exc:
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
_wait_for_tasks([task], service_instance)
|
||||
return True
|
||||
|
||||
|
||||
def remove_capacity_from_diskgroup(service_instance, host_ref, diskgroup,
|
||||
capacity_disks, data_evacuation=True,
|
||||
hostname=None,
|
||||
host_vsan_system=None):
|
||||
'''
|
||||
Removes capacity disk(s) from a disk group.
|
||||
|
||||
service_instance
|
||||
Service instance to the host or vCenter
|
||||
|
||||
host_vsan_system
|
||||
ESXi host's VSAN system
|
||||
|
||||
host_ref
|
||||
Reference to the ESXi host
|
||||
|
||||
diskgroup
|
||||
The vsan.HostDiskMapping object representing the host's diskgroup from
|
||||
where the capacity needs to be removed
|
||||
|
||||
capacity_disks
|
||||
List of vim.HostScsiDisk objects representing the capacity disks to be
|
||||
removed. Can be either ssd or non-ssd. There must be a minimum
|
||||
of 1 capacity disk in the list.
|
||||
|
||||
data_evacuation
|
||||
Specifies whether to gracefully evacuate the data on the capacity disks
|
||||
before removing them from the disk group. Default value is True.
|
||||
|
||||
hostname
|
||||
Name of ESXi host. Default value is None.
|
||||
|
||||
host_vsan_system
|
||||
ESXi host's VSAN system. Default value is None.
|
||||
'''
|
||||
if not hostname:
|
||||
hostname = salt.utils.vmware.get_managed_object_name(host_ref)
|
||||
cache_disk = diskgroup.ssd
|
||||
cache_disk_id = cache_disk.canonicalName
|
||||
log.debug('Removing capacity from disk group with cache disk \'{0}\' on '
|
||||
'host \'{1}\''.format(cache_disk_id, hostname))
|
||||
log.trace('capacity_disk_ids = {0}'.format([c.canonicalName for c in
|
||||
capacity_disks]))
|
||||
if not host_vsan_system:
|
||||
host_vsan_system = get_host_vsan_system(service_instance,
|
||||
host_ref, hostname)
|
||||
# Set to evacuate all data before removing the disks
|
||||
maint_spec = vim.HostMaintenanceSpec()
|
||||
maint_spec.vsanMode = vim.VsanHostDecommissionMode()
|
||||
if data_evacuation:
|
||||
maint_spec.vsanMode.objectAction = \
|
||||
vim.VsanHostDecommissionModeObjectAction.evacuateAllData
|
||||
else:
|
||||
maint_spec.vsanMode.objectAction = \
|
||||
vim.VsanHostDecommissionModeObjectAction.noAction
|
||||
try:
|
||||
task = host_vsan_system.RemoveDisk_Task(disk=capacity_disks,
|
||||
maintenanceSpec=maint_spec)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
salt.utils.vmware.wait_for_task(task, hostname, 'remove_capacity')
|
||||
return True
|
||||
|
||||
|
||||
def remove_diskgroup(service_instance, host_ref, diskgroup, hostname=None,
|
||||
host_vsan_system=None, erase_disk_partitions=False,
|
||||
data_accessibility=True):
|
||||
'''
|
||||
Removes a disk group.
|
||||
|
||||
service_instance
|
||||
Service instance to the host or vCenter
|
||||
|
||||
host_ref
|
||||
Reference to the ESXi host
|
||||
|
||||
diskgroup
|
||||
The vsan.HostDiskMapping object representing the host's diskgroup from
|
||||
where the capacity needs to be removed
|
||||
|
||||
hostname
|
||||
Name of ESXi host. Default value is None.
|
||||
|
||||
host_vsan_system
|
||||
ESXi host's VSAN system. Default value is None.
|
||||
|
||||
data_accessibility
|
||||
Specifies whether to ensure data accessibility. Default value is True.
|
||||
'''
|
||||
if not hostname:
|
||||
hostname = salt.utils.vmware.get_managed_object_name(host_ref)
|
||||
cache_disk_id = diskgroup.ssd.canonicalName
|
||||
log.debug('Removing disk group with cache disk \'{0}\' on '
|
||||
'host \'{1}\''.format(cache_disk_id, hostname))
|
||||
if not host_vsan_system:
|
||||
host_vsan_system = get_host_vsan_system(
|
||||
service_instance, host_ref, hostname)
|
||||
# Set to evacuate all data before removing the disks
|
||||
maint_spec = vim.HostMaintenanceSpec()
|
||||
maint_spec.vsanMode = vim.VsanHostDecommissionMode()
|
||||
object_action = vim.VsanHostDecommissionModeObjectAction
|
||||
if data_accessibility:
|
||||
maint_spec.vsanMode.objectAction = \
|
||||
object_action.ensureObjectAccessibility
|
||||
else:
|
||||
maint_spec.vsanMode.objectAction = object_action.noAction
|
||||
try:
|
||||
task = host_vsan_system.RemoveDiskMapping_Task(
|
||||
mapping=[diskgroup], maintenanceSpec=maint_spec)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError('Not enough permissions. Required privilege: '
|
||||
'{0}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise VMwareRuntimeError(exc.msg)
|
||||
salt.utils.vmware.wait_for_task(task, hostname, 'remove_diskgroup')
|
||||
log.debug('Removed disk group with cache disk \'{0}\' '
|
||||
'on host \'{1}\''.format(cache_disk_id, hostname))
|
||||
return True
|
||||
|
||||
|
||||
def get_cluster_vsan_info(cluster_ref):
|
||||
'''
|
||||
Returns the extended cluster vsan configuration object
|
||||
|
|
|
@ -0,0 +1,6 @@
|
|||
classes:
|
||||
- app.ssh.server
|
||||
|
||||
pillars:
|
||||
sshd:
|
||||
root_access: yes
|
|
@ -0,0 +1,4 @@
|
|||
pillars:
|
||||
sshd:
|
||||
root_access: no
|
||||
ssh_port: 22
|
|
@ -0,0 +1,17 @@
|
|||
classes:
|
||||
- default.users
|
||||
- default.motd
|
||||
|
||||
states:
|
||||
- openssh
|
||||
|
||||
pillars:
|
||||
default:
|
||||
network:
|
||||
dns:
|
||||
srv1: 192.168.0.1
|
||||
srv2: 192.168.0.2
|
||||
domain: example.com
|
||||
ntp:
|
||||
srv1: 192.168.10.10
|
||||
srv2: 192.168.10.20
|
|
@ -0,0 +1,3 @@
|
|||
pillars:
|
||||
motd:
|
||||
text: "Welcome to {{ __grains__['id'] }} system located in ${default:network:sub}"
|
|
@ -0,0 +1,16 @@
|
|||
states:
|
||||
- user_mgt
|
||||
|
||||
pillars:
|
||||
default:
|
||||
users:
|
||||
adm1:
|
||||
uid: 1201
|
||||
gid: 1201
|
||||
gecos: 'Super user admin1'
|
||||
homedir: /home/adm1
|
||||
adm2:
|
||||
uid: 1202
|
||||
gid: 1202
|
||||
gecos: 'Super user admin2'
|
||||
homedir: /home/adm2
|
|
@ -0,0 +1,21 @@
|
|||
states:
|
||||
- app
|
||||
|
||||
pillars:
|
||||
app:
|
||||
config:
|
||||
dns:
|
||||
srv1: ${default:network:dns:srv1}
|
||||
srv2: ${default:network:dns:srv2}
|
||||
uri: https://application.domain/call?\${test}
|
||||
prod_parameters:
|
||||
- p1
|
||||
- p2
|
||||
- p3
|
||||
pkg:
|
||||
- app-core
|
||||
- app-backend
|
||||
# Safe minion_id matching
|
||||
{% if minion_id == 'zrh.node3' %}
|
||||
safe_pillar: '_only_ zrh.node3 will see this pillar and this cannot be overridden like grains'
|
||||
{% endif %}
|
|
@ -0,0 +1,7 @@
|
|||
states:
|
||||
- nginx_deployment
|
||||
|
||||
pillars:
|
||||
nginx:
|
||||
pkg:
|
||||
- nginx
|
|
@ -0,0 +1,7 @@
|
|||
classes:
|
||||
- roles.nginx
|
||||
|
||||
pillars:
|
||||
nginx:
|
||||
pkg:
|
||||
- nginx-module
|
|
@ -0,0 +1,20 @@
|
|||
pillars:
|
||||
default:
|
||||
network:
|
||||
sub: Geneva
|
||||
dns:
|
||||
srv1: 10.20.0.1
|
||||
srv2: 10.20.0.2
|
||||
srv3: 192.168.1.1
|
||||
domain: gnv.example.com
|
||||
users:
|
||||
adm1:
|
||||
uid: 1210
|
||||
gid: 1210
|
||||
gecos: 'Super user admin1'
|
||||
homedir: /srv/app/adm1
|
||||
adm3:
|
||||
uid: 1203
|
||||
gid: 1203
|
||||
gecos: 'Super user admin3'
|
||||
homedir: /home/adm3
|
|
@ -0,0 +1,17 @@
|
|||
classes:
|
||||
- app.ssh.server
|
||||
- roles.nginx.server
|
||||
|
||||
pillars:
|
||||
default:
|
||||
network:
|
||||
sub: Lausanne
|
||||
dns:
|
||||
srv1: 10.10.0.1
|
||||
domain: qls.example.com
|
||||
users:
|
||||
nginx_adm:
|
||||
uid: 250
|
||||
gid: 200
|
||||
gecos: 'Nginx admin user'
|
||||
homedir: /srv/www
|
|
@ -0,0 +1,24 @@
|
|||
classes:
|
||||
- roles.app
|
||||
# This should validate that we process a class only once
|
||||
- app.borgbackup
|
||||
# As this one should not be processed
|
||||
# and would in turn override the overrides from app.borgbackup
|
||||
- app.ssh.server
|
||||
|
||||
pillars:
|
||||
default:
|
||||
network:
|
||||
sub: Zurich
|
||||
dns:
|
||||
srv1: 10.30.0.1
|
||||
srv2: 10.30.0.2
|
||||
domain: zrh.example.com
|
||||
ntp:
|
||||
srv1: 10.0.0.127
|
||||
users:
|
||||
adm1:
|
||||
uid: 250
|
||||
gid: 250
|
||||
gecos: 'Super user admin1'
|
||||
homedir: /srv/app/1
|
|
@ -0,0 +1,6 @@
|
|||
environment: base
|
||||
|
||||
classes:
|
||||
{% for class in ['default'] %}
|
||||
- {{ class }}
|
||||
{% endfor %}
|
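# Illustrative only (not part of this commit): after Jinja rendering, the node
# definition above is equivalent to:
#   environment: base
#   classes:
#     - default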
|
@ -639,6 +639,14 @@ class _GetProxyConnectionDetailsTestCase(TestCase, LoaderModuleMockMixin):
|
|||
'mechanism': 'fake_mechanism',
|
||||
'principal': 'fake_principal',
|
||||
'domain': 'fake_domain'}
|
||||
self.vcenter_details = {'vcenter': 'fake_vcenter',
|
||||
'username': 'fake_username',
|
||||
'password': 'fake_password',
|
||||
'protocol': 'fake_protocol',
|
||||
'port': 'fake_port',
|
||||
'mechanism': 'fake_mechanism',
|
||||
'principal': 'fake_principal',
|
||||
'domain': 'fake_domain'}
|
||||
|
||||
def tearDown(self):
|
||||
for attrname in ('esxi_host_details', 'esxi_vcenter_details',
|
||||
|
@ -693,6 +701,17 @@ class _GetProxyConnectionDetailsTestCase(TestCase, LoaderModuleMockMixin):
|
|||
'fake_protocol', 'fake_port', 'fake_mechanism',
|
||||
'fake_principal', 'fake_domain'), ret)
|
||||
|
||||
def test_vcenter_proxy_details(self):
|
||||
with patch('salt.modules.vsphere.get_proxy_type',
|
||||
MagicMock(return_value='vcenter')):
|
||||
with patch.dict(vsphere.__salt__,
|
||||
{'vcenter.get_details': MagicMock(
|
||||
return_value=self.vcenter_details)}):
|
||||
ret = vsphere._get_proxy_connection_details()
|
||||
self.assertEqual(('fake_vcenter', 'fake_username', 'fake_password',
|
||||
'fake_protocol', 'fake_port', 'fake_mechanism',
|
||||
'fake_principal', 'fake_domain'), ret)
|
||||
|
||||
def test_unsupported_proxy_details(self):
|
||||
with patch('salt.modules.vsphere.get_proxy_type',
|
||||
MagicMock(return_value='unsupported')):
|
||||
|
@@ -890,7 +909,7 @@ class GetServiceInstanceViaProxyTestCase(TestCase, LoaderModuleMockMixin):
         }
 
     def test_supported_proxies(self):
-        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter']
+        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter']
         for proxy_type in supported_proxies:
             with patch('salt.modules.vsphere.get_proxy_type',
                        MagicMock(return_value=proxy_type)):

@@ -933,7 +952,7 @@ class DisconnectTestCase(TestCase, LoaderModuleMockMixin):
         }
 
     def test_supported_proxies(self):
-        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter']
+        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter']
        for proxy_type in supported_proxies:
             with patch('salt.modules.vsphere.get_proxy_type',
                        MagicMock(return_value=proxy_type)):

@@ -974,7 +993,7 @@ class TestVcenterConnectionTestCase(TestCase, LoaderModuleMockMixin):
         }
 
     def test_supported_proxies(self):
-        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter']
+        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter']
         for proxy_type in supported_proxies:
             with patch('salt.modules.vsphere.get_proxy_type',
                        MagicMock(return_value=proxy_type)):

@@ -1049,7 +1068,7 @@ class ListDatacentersViaProxyTestCase(TestCase, LoaderModuleMockMixin):
         }
 
     def test_supported_proxies(self):
-        supported_proxies = ['esxcluster', 'esxdatacenter']
+        supported_proxies = ['esxcluster', 'esxdatacenter', 'vcenter']
         for proxy_type in supported_proxies:
             with patch('salt.modules.vsphere.get_proxy_type',
                        MagicMock(return_value=proxy_type)):

@@ -1127,7 +1146,7 @@ class CreateDatacenterTestCase(TestCase, LoaderModuleMockMixin):
         }
 
     def test_supported_proxies(self):
-        supported_proxies = ['esxdatacenter']
+        supported_proxies = ['esxdatacenter', 'vcenter']
         for proxy_type in supported_proxies:
             with patch('salt.modules.vsphere.get_proxy_type',
                        MagicMock(return_value=proxy_type)):

@@ -1339,12 +1358,15 @@ class _GetProxyTargetTestCase(TestCase, LoaderModuleMockMixin):
     def setUp(self):
         attrs = (('mock_si', MagicMock()),
                  ('mock_dc', MagicMock()),
-                 ('mock_cl', MagicMock()))
+                 ('mock_cl', MagicMock()),
+                 ('mock_root', MagicMock()))
         for attr, mock_obj in attrs:
             setattr(self, attr, mock_obj)
             self.addCleanup(delattr, self, attr)
         attrs = (('mock_get_datacenter', MagicMock(return_value=self.mock_dc)),
-                 ('mock_get_cluster', MagicMock(return_value=self.mock_cl)))
+                 ('mock_get_cluster', MagicMock(return_value=self.mock_cl)),
+                 ('mock_get_root_folder',
+                  MagicMock(return_value=self.mock_root)))
         for attr, mock_obj in attrs:
             setattr(self, attr, mock_obj)
             self.addCleanup(delattr, self, attr)

@@ -1360,7 +1382,8 @@ class _GetProxyTargetTestCase(TestCase, LoaderModuleMockMixin):
              MagicMock(return_value=(None, None, None, None, None, None, None,
                                      None, 'datacenter'))),
             ('salt.utils.vmware.get_datacenter', self.mock_get_datacenter),
-            ('salt.utils.vmware.get_cluster', self.mock_get_cluster))
+            ('salt.utils.vmware.get_cluster', self.mock_get_cluster),
+            ('salt.utils.vmware.get_root_folder', self.mock_get_root_folder))
         for module, mock_obj in patches:
             patcher = patch(module, mock_obj)
             patcher.start()

@@ -1409,3 +1432,10 @@ class _GetProxyTargetTestCase(TestCase, LoaderModuleMockMixin):
                    MagicMock(return_value='esxdatacenter')):
             ret = vsphere._get_proxy_target(self.mock_si)
         self.assertEqual(ret, self.mock_dc)
+
+    def test_vcenter_proxy_return(self):
+        with patch('salt.modules.vsphere.get_proxy_type',
+                   MagicMock(return_value='vcenter')):
+            ret = vsphere._get_proxy_target(self.mock_si)
+        self.mock_get_root_folder.assert_called_once_with(self.mock_si)
+        self.assertEqual(ret, self.mock_root)

tests/unit/pillar/test_saltclass.py (new file, 43 lines)

@@ -0,0 +1,43 @@
+# -*- coding: utf-8 -*-
+
+# Import python libs
+from __future__ import absolute_import
+import os
+
+# Import Salt Testing libs
+from tests.support.mixins import LoaderModuleMockMixin
+from tests.support.unit import TestCase, skipIf
+from tests.support.mock import NO_MOCK, NO_MOCK_REASON
+
+# Import Salt Libs
+import salt.pillar.saltclass as saltclass
+
+
+base_path = os.path.dirname(os.path.realpath(__file__))
+fake_minion_id = 'fake_id'
+fake_pillar = {}
+fake_args = ({'path': '{0}/../../integration/files/saltclass/examples'.format(base_path)})
+fake_opts = {}
+fake_salt = {}
+fake_grains = {}
+
+
+@skipIf(NO_MOCK, NO_MOCK_REASON)
+class SaltclassPillarTestCase(TestCase, LoaderModuleMockMixin):
+    '''
+    Tests for salt.pillar.saltclass
+    '''
+    def setup_loader_modules(self):
+        return {saltclass: {'__opts__': fake_opts,
+                            '__salt__': fake_salt,
+                            '__grains__': fake_grains
+                            }}
+
+    def _runner(self, expected_ret):
+        full_ret = saltclass.ext_pillar(fake_minion_id, fake_pillar, fake_args)
+        parsed_ret = full_ret['__saltclass__']['classes']
+        self.assertListEqual(parsed_ret, expected_ret)
+
+    def test_succeeds(self):
+        ret = ['default.users', 'default.motd', 'default']
+        self._runner(ret)

@@ -18,6 +18,7 @@ import salt.serializers.yaml as yaml
 import salt.serializers.yamlex as yamlex
 import salt.serializers.msgpack as msgpack
 import salt.serializers.python as python
+from salt.serializers.yaml import EncryptedString
 from salt.serializers import SerializationError
 from salt.utils.odict import OrderedDict
 

@@ -43,10 +44,11 @@ class TestSerializers(TestCase):
     @skipIf(not yaml.available, SKIP_MESSAGE % 'yaml')
     def test_serialize_yaml(self):
         data = {
-            "foo": "bar"
+            "foo": "bar",
+            "encrypted_data": EncryptedString("foo")
         }
         serialized = yaml.serialize(data)
-        assert serialized == '{foo: bar}', serialized
+        assert serialized == '{encrypted_data: !encrypted foo, foo: bar}', serialized
 
         deserialized = yaml.deserialize(serialized)
         assert deserialized == data, deserialized

tests/unit/utils/test_pbm.py (new file, 664 lines)

@ -0,0 +1,664 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
:codeauthor: :email:`Alexandru Bleotu <alexandru.bleotu@morganstanley.com>`
|
||||
|
||||
Tests functions in salt.utils.vsan
|
||||
'''
|
||||
|
||||
# Import python libraries
|
||||
from __future__ import absolute_import
|
||||
import logging
|
||||
|
||||
# Import Salt testing libraries
|
||||
from tests.support.unit import TestCase, skipIf
|
||||
from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch, MagicMock, \
|
||||
PropertyMock
|
||||
|
||||
# Import Salt libraries
|
||||
from salt.exceptions import VMwareApiError, VMwareRuntimeError, \
|
||||
VMwareObjectRetrievalError
|
||||
from salt.ext.six.moves import range
|
||||
import salt.utils.pbm
|
||||
|
||||
try:
|
||||
from pyVmomi import vim, vmodl, pbm
|
||||
HAS_PYVMOMI = True
|
||||
except ImportError:
|
||||
HAS_PYVMOMI = False
|
||||
|
||||
|
||||
# Get Logging Started
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@skipIf(NO_MOCK, NO_MOCK_REASON)
|
||||
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
|
||||
class GetProfileManagerTestCase(TestCase):
|
||||
'''Tests for salt.utils.pbm.get_profile_manager'''
|
||||
def setUp(self):
|
||||
self.mock_si = MagicMock()
|
||||
self.mock_stub = MagicMock()
|
||||
self.mock_prof_mgr = MagicMock()
|
||||
self.mock_content = MagicMock()
|
||||
self.mock_pbm_si = MagicMock(
|
||||
RetrieveContent=MagicMock(return_value=self.mock_content))
|
||||
type(self.mock_content).profileManager = \
|
||||
PropertyMock(return_value=self.mock_prof_mgr)
|
||||
patches = (
|
||||
('salt.utils.vmware.get_new_service_instance_stub',
|
||||
MagicMock(return_value=self.mock_stub)),
|
||||
('salt.utils.pbm.pbm.ServiceInstance',
|
||||
MagicMock(return_value=self.mock_pbm_si)))
|
||||
for mod, mock in patches:
|
||||
patcher = patch(mod, mock)
|
||||
patcher.start()
|
||||
self.addCleanup(patcher.stop)
|
||||
|
||||
def tearDown(self):
|
||||
for attr in ('mock_si', 'mock_stub', 'mock_content',
|
||||
'mock_pbm_si', 'mock_prof_mgr'):
|
||||
delattr(self, attr)
|
||||
|
||||
def test_get_new_service_stub(self):
|
||||
mock_get_new_service_stub = MagicMock()
|
||||
with patch('salt.utils.vmware.get_new_service_instance_stub',
|
||||
mock_get_new_service_stub):
|
||||
salt.utils.pbm.get_profile_manager(self.mock_si)
|
||||
mock_get_new_service_stub.assert_called_once_with(
|
||||
self.mock_si, ns='pbm/2.0', path='/pbm/sdk')
|
||||
|
||||
def test_pbm_si(self):
|
||||
mock_get_pbm_si = MagicMock()
|
||||
with patch('salt.utils.pbm.pbm.ServiceInstance',
|
||||
mock_get_pbm_si):
|
||||
salt.utils.pbm.get_profile_manager(self.mock_si)
|
||||
mock_get_pbm_si.assert_called_once_with('ServiceInstance',
|
||||
self.mock_stub)
|
||||
|
||||
def test_return_profile_manager(self):
|
||||
ret = salt.utils.pbm.get_profile_manager(self.mock_si)
|
||||
self.assertEqual(ret, self.mock_prof_mgr)
|
||||
|
||||
def test_profile_manager_raises_no_permissions(self):
|
||||
exc = vim.fault.NoPermission()
|
||||
exc.privilegeId = 'Fake privilege'
|
||||
type(self.mock_content).profileManager = PropertyMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_profile_manager(self.mock_si)
|
||||
self.assertEqual(excinfo.exception.strerror,
|
||||
'Not enough permissions. Required privilege: '
|
||||
'Fake privilege')
|
||||
|
||||
def test_profile_manager_raises_vim_fault(self):
|
||||
exc = vim.fault.VimFault()
|
||||
exc.msg = 'VimFault msg'
|
||||
type(self.mock_content).profileManager = PropertyMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_profile_manager(self.mock_si)
|
||||
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
|
||||
|
||||
def test_profile_manager_raises_runtime_fault(self):
|
||||
exc = vmodl.RuntimeFault()
|
||||
exc.msg = 'RuntimeFault msg'
|
||||
type(self.mock_content).profileManager = PropertyMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareRuntimeError) as excinfo:
|
||||
salt.utils.pbm.get_profile_manager(self.mock_si)
|
||||
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
|
||||
|
||||
|
||||
@skipIf(NO_MOCK, NO_MOCK_REASON)
|
||||
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
|
||||
class GetPlacementSolverTestCase(TestCase):
|
||||
'''Tests for salt.utils.pbm.get_placement_solver'''
|
||||
def setUp(self):
|
||||
self.mock_si = MagicMock()
|
||||
self.mock_stub = MagicMock()
|
||||
self.mock_prof_mgr = MagicMock()
|
||||
self.mock_content = MagicMock()
|
||||
self.mock_pbm_si = MagicMock(
|
||||
RetrieveContent=MagicMock(return_value=self.mock_content))
|
||||
type(self.mock_content).placementSolver = \
|
||||
PropertyMock(return_value=self.mock_prof_mgr)
|
||||
patches = (
|
||||
('salt.utils.vmware.get_new_service_instance_stub',
|
||||
MagicMock(return_value=self.mock_stub)),
|
||||
('salt.utils.pbm.pbm.ServiceInstance',
|
||||
MagicMock(return_value=self.mock_pbm_si)))
|
||||
for mod, mock in patches:
|
||||
patcher = patch(mod, mock)
|
||||
patcher.start()
|
||||
self.addCleanup(patcher.stop)
|
||||
|
||||
def tearDown(self):
|
||||
for attr in ('mock_si', 'mock_stub', 'mock_content',
|
||||
'mock_pbm_si', 'mock_prof_mgr'):
|
||||
delattr(self, attr)
|
||||
|
||||
def test_get_new_service_stub(self):
|
||||
mock_get_new_service_stub = MagicMock()
|
||||
with patch('salt.utils.vmware.get_new_service_instance_stub',
|
||||
mock_get_new_service_stub):
|
||||
salt.utils.pbm.get_placement_solver(self.mock_si)
|
||||
mock_get_new_service_stub.assert_called_once_with(
|
||||
self.mock_si, ns='pbm/2.0', path='/pbm/sdk')
|
||||
|
||||
def test_pbm_si(self):
|
||||
mock_get_pbm_si = MagicMock()
|
||||
with patch('salt.utils.pbm.pbm.ServiceInstance',
|
||||
mock_get_pbm_si):
|
||||
salt.utils.pbm.get_placement_solver(self.mock_si)
|
||||
mock_get_pbm_si.assert_called_once_with('ServiceInstance',
|
||||
self.mock_stub)
|
||||
|
||||
def test_return_profile_manager(self):
|
||||
ret = salt.utils.pbm.get_placement_solver(self.mock_si)
|
||||
self.assertEqual(ret, self.mock_prof_mgr)
|
||||
|
||||
def test_placement_solver_raises_no_permissions(self):
|
||||
exc = vim.fault.NoPermission()
|
||||
exc.privilegeId = 'Fake privilege'
|
||||
type(self.mock_content).placementSolver = PropertyMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_placement_solver(self.mock_si)
|
||||
self.assertEqual(excinfo.exception.strerror,
|
||||
'Not enough permissions. Required privilege: '
|
||||
'Fake privilege')
|
||||
|
||||
def test_placement_solver_raises_vim_fault(self):
|
||||
exc = vim.fault.VimFault()
|
||||
exc.msg = 'VimFault msg'
|
||||
type(self.mock_content).placementSolver = PropertyMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_placement_solver(self.mock_si)
|
||||
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
|
||||
|
||||
def test_placement_solver_raises_runtime_fault(self):
|
||||
exc = vmodl.RuntimeFault()
|
||||
exc.msg = 'RuntimeFault msg'
|
||||
type(self.mock_content).placementSolver = PropertyMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareRuntimeError) as excinfo:
|
||||
salt.utils.pbm.get_placement_solver(self.mock_si)
|
||||
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
|
||||
|
||||
|
||||
@skipIf(NO_MOCK, NO_MOCK_REASON)
|
||||
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
|
||||
class GetCapabilityDefinitionsTestCase(TestCase):
|
||||
'''Tests for salt.utils.pbm.get_capability_definitions'''
|
||||
def setUp(self):
|
||||
self.mock_res_type = MagicMock()
|
||||
self.mock_cap_cats = [MagicMock(capabilityMetadata=['fake_cap_meta1',
|
||||
'fake_cap_meta2']),
|
||||
MagicMock(capabilityMetadata=['fake_cap_meta3'])]
|
||||
self.mock_prof_mgr = MagicMock(
|
||||
FetchCapabilityMetadata=MagicMock(return_value=self.mock_cap_cats))
|
||||
patches = (
|
||||
('salt.utils.pbm.pbm.profile.ResourceType',
|
||||
MagicMock(return_value=self.mock_res_type)),)
|
||||
for mod, mock in patches:
|
||||
patcher = patch(mod, mock)
|
||||
patcher.start()
|
||||
self.addCleanup(patcher.stop)
|
||||
|
||||
def tearDown(self):
|
||||
for attr in ('mock_res_type', 'mock_cap_cats', 'mock_prof_mgr'):
|
||||
delattr(self, attr)
|
||||
|
||||
def test_get_res_type(self):
|
||||
mock_get_res_type = MagicMock()
|
||||
with patch('salt.utils.pbm.pbm.profile.ResourceType',
|
||||
mock_get_res_type):
|
||||
salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
|
||||
mock_get_res_type.assert_called_once_with(
|
||||
resourceType=pbm.profile.ResourceTypeEnum.STORAGE)
|
||||
|
||||
def test_fetch_capabilities(self):
|
||||
salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
|
||||
self.mock_prof_mgr.FetchCapabilityMetadata.assert_called_once_with(
|
||||
self.mock_res_type)
|
||||
|
||||
def test_fetch_capabilities_raises_no_permissions(self):
|
||||
exc = vim.fault.NoPermission()
|
||||
exc.privilegeId = 'Fake privilege'
|
||||
self.mock_prof_mgr.FetchCapabilityMetadata = \
|
||||
MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
|
||||
self.assertEqual(excinfo.exception.strerror,
|
||||
'Not enough permissions. Required privilege: '
|
||||
'Fake privilege')
|
||||
|
||||
def test_fetch_capabilities_raises_vim_fault(self):
|
||||
exc = vim.fault.VimFault()
|
||||
exc.msg = 'VimFault msg'
|
||||
self.mock_prof_mgr.FetchCapabilityMetadata = \
|
||||
MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
|
||||
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
|
||||
|
||||
def test_fetch_capabilities_raises_runtime_fault(self):
|
||||
exc = vmodl.RuntimeFault()
|
||||
exc.msg = 'RuntimeFault msg'
|
||||
self.mock_prof_mgr.FetchCapabilityMetadata = \
|
||||
MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareRuntimeError) as excinfo:
|
||||
salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
|
||||
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
|
||||
|
||||
def test_return_cap_definitions(self):
|
||||
ret = salt.utils.pbm.get_capability_definitions(self.mock_prof_mgr)
|
||||
self.assertEqual(ret, ['fake_cap_meta1', 'fake_cap_meta2',
|
||||
'fake_cap_meta3'])
|
||||
|
||||
|
||||
@skipIf(NO_MOCK, NO_MOCK_REASON)
|
||||
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
|
||||
class GetPoliciesByIdTestCase(TestCase):
|
||||
'''Tests for salt.utils.pbm.get_policies_by_id'''
|
||||
def setUp(self):
|
||||
self.policy_ids = MagicMock()
|
||||
self.mock_policies = MagicMock()
|
||||
self.mock_prof_mgr = MagicMock(
|
||||
RetrieveContent=MagicMock(return_value=self.mock_policies))
|
||||
|
||||
def tearDown(self):
|
||||
for attr in ('policy_ids', 'mock_policies', 'mock_prof_mgr'):
|
||||
delattr(self, attr)
|
||||
|
||||
def test_retrieve_policies(self):
|
||||
salt.utils.pbm.get_policies_by_id(self.mock_prof_mgr, self.policy_ids)
|
||||
self.mock_prof_mgr.RetrieveContent.assert_called_once_with(
|
||||
self.policy_ids)
|
||||
|
||||
def test_retrieve_policies_raises_no_permissions(self):
|
||||
exc = vim.fault.NoPermission()
|
||||
exc.privilegeId = 'Fake privilege'
|
||||
self.mock_prof_mgr.RetrieveContent = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_policies_by_id(self.mock_prof_mgr, self.policy_ids)
|
||||
self.assertEqual(excinfo.exception.strerror,
|
||||
'Not enough permissions. Required privilege: '
|
||||
'Fake privilege')
|
||||
|
||||
def test_retrieve_policies_raises_vim_fault(self):
|
||||
exc = vim.fault.VimFault()
|
||||
exc.msg = 'VimFault msg'
|
||||
self.mock_prof_mgr.RetrieveContent = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_policies_by_id(self.mock_prof_mgr, self.policy_ids)
|
||||
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
|
||||
|
||||
def test_retrieve_policies_raises_runtime_fault(self):
|
||||
exc = vmodl.RuntimeFault()
|
||||
exc.msg = 'RuntimeFault msg'
|
||||
self.mock_prof_mgr.RetrieveContent = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareRuntimeError) as excinfo:
|
||||
salt.utils.pbm.get_policies_by_id(self.mock_prof_mgr, self.policy_ids)
|
||||
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
|
||||
|
||||
def test_return_policies(self):
|
||||
ret = salt.utils.pbm.get_policies_by_id(self.mock_prof_mgr, self.policy_ids)
|
||||
self.assertEqual(ret, self.mock_policies)
|
||||
|
||||
|
||||
@skipIf(NO_MOCK, NO_MOCK_REASON)
|
||||
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
|
||||
class GetStoragePoliciesTestCase(TestCase):
|
||||
'''Tests for salt.utils.pbm.get_storage_policies'''
|
||||
def setUp(self):
|
||||
self.mock_res_type = MagicMock()
|
||||
self.mock_policy_ids = MagicMock()
|
||||
self.mock_prof_mgr = MagicMock(
|
||||
QueryProfile=MagicMock(return_value=self.mock_policy_ids))
|
||||
# Policies
|
||||
self.mock_policies = []
|
||||
for i in range(4):
|
||||
mock_obj = MagicMock(resourceType=MagicMock(
|
||||
resourceType=pbm.profile.ResourceTypeEnum.STORAGE))
|
||||
mock_obj.name = 'fake_policy{0}'.format(i)
|
||||
self.mock_policies.append(mock_obj)
|
||||
patches = (
|
||||
('salt.utils.pbm.pbm.profile.ResourceType',
|
||||
MagicMock(return_value=self.mock_res_type)),
|
||||
('salt.utils.pbm.get_policies_by_id',
|
||||
MagicMock(return_value=self.mock_policies)))
|
||||
for mod, mock in patches:
|
||||
patcher = patch(mod, mock)
|
||||
patcher.start()
|
||||
self.addCleanup(patcher.stop)
|
||||
|
||||
def tearDown(self):
|
||||
for attr in ('mock_res_type', 'mock_policy_ids', 'mock_policies',
|
||||
'mock_prof_mgr'):
|
||||
delattr(self, attr)
|
||||
|
||||
def test_get_res_type(self):
|
||||
mock_get_res_type = MagicMock()
|
||||
with patch('salt.utils.pbm.pbm.profile.ResourceType',
|
||||
mock_get_res_type):
|
||||
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
|
||||
mock_get_res_type.assert_called_once_with(
|
||||
resourceType=pbm.profile.ResourceTypeEnum.STORAGE)
|
||||
|
||||
def test_retrieve_policy_ids(self):
|
||||
mock_retrieve_policy_ids = MagicMock(return_value=self.mock_policy_ids)
|
||||
self.mock_prof_mgr.QueryProfile = mock_retrieve_policy_ids
|
||||
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
|
||||
mock_retrieve_policy_ids.assert_called_once_with(self.mock_res_type)
|
||||
|
||||
def test_retrieve_policy_ids_raises_no_permissions(self):
|
||||
exc = vim.fault.NoPermission()
|
||||
exc.privilegeId = 'Fake privilege'
|
||||
self.mock_prof_mgr.QueryProfile = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
|
||||
self.assertEqual(excinfo.exception.strerror,
|
||||
'Not enough permissions. Required privilege: '
|
||||
'Fake privilege')
|
||||
|
||||
def test_retrieve_policy_ids_raises_vim_fault(self):
|
||||
exc = vim.fault.VimFault()
|
||||
exc.msg = 'VimFault msg'
|
||||
self.mock_prof_mgr.QueryProfile = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
|
||||
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
|
||||
|
||||
def test_retrieve_policy_ids_raises_runtime_fault(self):
|
||||
exc = vmodl.RuntimeFault()
|
||||
exc.msg = 'RuntimeFault msg'
|
||||
self.mock_prof_mgr.QueryProfile = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareRuntimeError) as excinfo:
|
||||
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
|
||||
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
|
||||
|
||||
def test_get_policies_by_id(self):
|
||||
mock_get_policies_by_id = MagicMock(return_value=self.mock_policies)
|
||||
with patch('salt.utils.pbm.get_policies_by_id',
|
||||
mock_get_policies_by_id):
|
||||
salt.utils.pbm.get_storage_policies(self.mock_prof_mgr)
|
||||
mock_get_policies_by_id.assert_called_once_with(
|
||||
self.mock_prof_mgr, self.mock_policy_ids)
|
||||
|
||||
def test_return_all_policies(self):
|
||||
ret = salt.utils.pbm.get_storage_policies(self.mock_prof_mgr,
|
||||
get_all_policies=True)
|
||||
self.assertEqual(ret, self.mock_policies)
|
||||
|
||||
def test_return_filtered_policies(self):
|
||||
ret = salt.utils.pbm.get_storage_policies(
|
||||
self.mock_prof_mgr, policy_names=['fake_policy1', 'fake_policy3'])
|
||||
self.assertEqual(ret, [self.mock_policies[1], self.mock_policies[3]])
|
||||
|
||||
|
||||
@skipIf(NO_MOCK, NO_MOCK_REASON)
|
||||
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
|
||||
class CreateStoragePolicyTestCase(TestCase):
|
||||
'''Tests for salt.utils.pbm.create_storage_policy'''
|
||||
def setUp(self):
|
||||
self.mock_policy_spec = MagicMock()
|
||||
self.mock_prof_mgr = MagicMock()
|
||||
|
||||
def tearDown(self):
|
||||
for attr in ('mock_policy_spec', 'mock_prof_mgr'):
|
||||
delattr(self, attr)
|
||||
|
||||
def test_create_policy(self):
|
||||
salt.utils.pbm.create_storage_policy(self.mock_prof_mgr,
|
||||
self.mock_policy_spec)
|
||||
self.mock_prof_mgr.Create.assert_called_once_with(
|
||||
self.mock_policy_spec)
|
||||
|
||||
def test_create_policy_raises_no_permissions(self):
|
||||
exc = vim.fault.NoPermission()
|
||||
exc.privilegeId = 'Fake privilege'
|
||||
self.mock_prof_mgr.Create = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.create_storage_policy(self.mock_prof_mgr,
|
||||
self.mock_policy_spec)
|
||||
self.assertEqual(excinfo.exception.strerror,
|
||||
'Not enough permissions. Required privilege: '
|
||||
'Fake privilege')
|
||||
|
||||
def test_create_policy_raises_vim_fault(self):
|
||||
exc = vim.fault.VimFault()
|
||||
exc.msg = 'VimFault msg'
|
||||
self.mock_prof_mgr.Create = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.create_storage_policy(self.mock_prof_mgr,
|
||||
self.mock_policy_spec)
|
||||
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
|
||||
|
||||
def test_create_policy_raises_runtime_fault(self):
|
||||
exc = vmodl.RuntimeFault()
|
||||
exc.msg = 'RuntimeFault msg'
|
||||
self.mock_prof_mgr.Create = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareRuntimeError) as excinfo:
|
||||
salt.utils.pbm.create_storage_policy(self.mock_prof_mgr,
|
||||
self.mock_policy_spec)
|
||||
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
|
||||
|
||||
|
||||
@skipIf(NO_MOCK, NO_MOCK_REASON)
|
||||
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
|
||||
class UpdateStoragePolicyTestCase(TestCase):
|
||||
'''Tests for salt.utils.pbm.update_storage_policy'''
|
||||
def setUp(self):
|
||||
self.mock_policy_spec = MagicMock()
|
||||
self.mock_policy = MagicMock()
|
||||
self.mock_prof_mgr = MagicMock()
|
||||
|
||||
def tearDown(self):
|
||||
for attr in ('mock_policy_spec', 'mock_policy', 'mock_prof_mgr'):
|
||||
delattr(self, attr)
|
||||
|
||||
def test_create_policy(self):
|
||||
salt.utils.pbm.update_storage_policy(
|
||||
self.mock_prof_mgr, self.mock_policy, self.mock_policy_spec)
|
||||
self.mock_prof_mgr.Update.assert_called_once_with(
|
||||
self.mock_policy.profileId, self.mock_policy_spec)
|
||||
|
||||
def test_create_policy_raises_no_permissions(self):
|
||||
exc = vim.fault.NoPermission()
|
||||
exc.privilegeId = 'Fake privilege'
|
||||
self.mock_prof_mgr.Update = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.update_storage_policy(
|
||||
self.mock_prof_mgr, self.mock_policy, self.mock_policy_spec)
|
||||
self.assertEqual(excinfo.exception.strerror,
|
||||
'Not enough permissions. Required privilege: '
|
||||
'Fake privilege')
|
||||
|
||||
def test_create_policy_raises_vim_fault(self):
|
||||
exc = vim.fault.VimFault()
|
||||
exc.msg = 'VimFault msg'
|
||||
self.mock_prof_mgr.Update = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.update_storage_policy(
|
||||
self.mock_prof_mgr, self.mock_policy, self.mock_policy_spec)
|
||||
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
|
||||
|
||||
def test_create_policy_raises_runtime_fault(self):
|
||||
exc = vmodl.RuntimeFault()
|
||||
exc.msg = 'RuntimeFault msg'
|
||||
self.mock_prof_mgr.Update = MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareRuntimeError) as excinfo:
|
||||
salt.utils.pbm.update_storage_policy(
|
||||
self.mock_prof_mgr, self.mock_policy, self.mock_policy_spec)
|
||||
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
|
||||
|
||||
|
||||
@skipIf(NO_MOCK, NO_MOCK_REASON)
|
||||
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
|
||||
class GetDefaultStoragePolicyOfDatastoreTestCase(TestCase):
|
||||
'''Tests for salt.utils.pbm.get_default_storage_policy_of_datastore'''
|
||||
def setUp(self):
|
||||
self.mock_ds = MagicMock(_moId='fake_ds_moid')
|
||||
self.mock_hub = MagicMock()
|
||||
self.mock_policy_id = 'fake_policy_id'
|
||||
self.mock_prof_mgr = MagicMock(
|
||||
QueryDefaultRequirementProfile=MagicMock(
|
||||
return_value=self.mock_policy_id))
|
||||
self.mock_policy_refs = [MagicMock()]
|
||||
patches = (
|
||||
('salt.utils.pbm.pbm.placement.PlacementHub',
|
||||
MagicMock(return_value=self.mock_hub)),
|
||||
('salt.utils.pbm.get_policies_by_id',
|
||||
MagicMock(return_value=self.mock_policy_refs)))
|
||||
for mod, mock in patches:
|
||||
patcher = patch(mod, mock)
|
||||
patcher.start()
|
||||
self.addCleanup(patcher.stop)
|
||||
|
||||
def tearDown(self):
|
||||
for attr in ('mock_ds', 'mock_hub', 'mock_policy_id', 'mock_prof_mgr',
|
||||
'mock_policy_refs'):
|
||||
delattr(self, attr)
|
||||
|
||||
def test_get_placement_hub(self):
|
||||
mock_get_placement_hub = MagicMock()
|
||||
with patch('salt.utils.pbm.pbm.placement.PlacementHub',
|
||||
mock_get_placement_hub):
|
||||
salt.utils.pbm.get_default_storage_policy_of_datastore(
|
||||
self.mock_prof_mgr, self.mock_ds)
|
||||
mock_get_placement_hub.assert_called_once_with(
|
||||
hubId='fake_ds_moid', hubType='Datastore')
|
||||
|
||||
def test_query_default_requirement_profile(self):
|
||||
mock_query_prof = MagicMock(return_value=self.mock_policy_id)
|
||||
self.mock_prof_mgr.QueryDefaultRequirementProfile = \
|
||||
mock_query_prof
|
||||
salt.utils.pbm.get_default_storage_policy_of_datastore(
|
||||
self.mock_prof_mgr, self.mock_ds)
|
||||
mock_query_prof.assert_called_once_with(self.mock_hub)
|
||||
|
||||
def test_query_default_requirement_profile_raises_no_permissions(self):
|
||||
exc = vim.fault.NoPermission()
|
||||
exc.privilegeId = 'Fake privilege'
|
||||
self.mock_prof_mgr.QueryDefaultRequirementProfile = \
|
||||
MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_default_storage_policy_of_datastore(
|
||||
self.mock_prof_mgr, self.mock_ds)
|
||||
self.assertEqual(excinfo.exception.strerror,
|
||||
'Not enough permissions. Required privilege: '
|
||||
'Fake privilege')
|
||||
|
||||
def test_query_default_requirement_profile_raises_vim_fault(self):
|
||||
exc = vim.fault.VimFault()
|
||||
exc.msg = 'VimFault msg'
|
||||
self.mock_prof_mgr.QueryDefaultRequirementProfile = \
|
||||
MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.get_default_storage_policy_of_datastore(
|
||||
self.mock_prof_mgr, self.mock_ds)
|
||||
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
|
||||
|
||||
def test_query_default_requirement_profile_raises_runtime_fault(self):
|
||||
exc = vmodl.RuntimeFault()
|
||||
exc.msg = 'RuntimeFault msg'
|
||||
self.mock_prof_mgr.QueryDefaultRequirementProfile = \
|
||||
MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareRuntimeError) as excinfo:
|
||||
salt.utils.pbm.get_default_storage_policy_of_datastore(
|
||||
self.mock_prof_mgr, self.mock_ds)
|
||||
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
|
||||
|
||||
def test_get_policies_by_id(self):
|
||||
mock_get_policies_by_id = MagicMock()
|
||||
with patch('salt.utils.pbm.get_policies_by_id',
|
||||
mock_get_policies_by_id):
|
||||
salt.utils.pbm.get_default_storage_policy_of_datastore(
|
||||
self.mock_prof_mgr, self.mock_ds)
|
||||
mock_get_policies_by_id.assert_called_once_with(
|
||||
self.mock_prof_mgr, [self.mock_policy_id])
|
||||
|
||||
def test_no_policy_refs(self):
|
||||
mock_get_policies_by_id = MagicMock()
|
||||
with patch('salt.utils.pbm.get_policies_by_id',
|
||||
MagicMock(return_value=None)):
|
||||
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
|
||||
salt.utils.pbm.get_default_storage_policy_of_datastore(
|
||||
self.mock_prof_mgr, self.mock_ds)
|
||||
self.assertEqual(excinfo.exception.strerror,
|
||||
'Storage policy with id \'fake_policy_id\' was not '
|
||||
'found')
|
||||
|
||||
def test_return_policy_ref(self):
|
||||
mock_get_policies_by_id = MagicMock()
|
||||
ret = salt.utils.pbm.get_default_storage_policy_of_datastore(
|
||||
self.mock_prof_mgr, self.mock_ds)
|
||||
self.assertEqual(ret, self.mock_policy_refs[0])
|
||||
|
||||
|
||||
@skipIf(NO_MOCK, NO_MOCK_REASON)
|
||||
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
|
||||
class AssignDefaultStoragePolicyToDatastoreTestCase(TestCase):
|
||||
'''Tests for salt.utils.pbm.assign_default_storage_policy_to_datastore'''
|
||||
def setUp(self):
|
||||
self.mock_ds = MagicMock(_moId='fake_ds_moid')
|
||||
self.mock_policy = MagicMock()
|
||||
self.mock_hub = MagicMock()
|
||||
self.mock_prof_mgr = MagicMock()
|
||||
patches = (
|
||||
('salt.utils.pbm.pbm.placement.PlacementHub',
|
||||
MagicMock(return_value=self.mock_hub)),)
|
||||
for mod, mock in patches:
|
||||
patcher = patch(mod, mock)
|
||||
patcher.start()
|
||||
self.addCleanup(patcher.stop)
|
||||
|
||||
def tearDown(self):
|
||||
for attr in ('mock_ds', 'mock_hub', 'mock_policy', 'mock_prof_mgr'):
|
||||
delattr(self, attr)
|
||||
|
||||
def test_get_placement_hub(self):
|
||||
mock_get_placement_hub = MagicMock()
|
||||
with patch('salt.utils.pbm.pbm.placement.PlacementHub',
|
||||
mock_get_placement_hub):
|
||||
salt.utils.pbm.assign_default_storage_policy_to_datastore(
|
||||
self.mock_prof_mgr, self.mock_policy, self.mock_ds)
|
||||
mock_get_placement_hub.assert_called_once_with(
|
||||
hubId='fake_ds_moid', hubType='Datastore')
|
||||
|
||||
def test_assign_default_requirement_profile(self):
|
||||
mock_assign_prof = MagicMock()
|
||||
self.mock_prof_mgr.AssignDefaultRequirementProfile = \
|
||||
mock_assign_prof
|
||||
salt.utils.pbm.assign_default_storage_policy_to_datastore(
|
||||
self.mock_prof_mgr, self.mock_policy, self.mock_ds)
|
||||
mock_assign_prof.assert_called_once_with(
|
||||
self.mock_policy.profileId, [self.mock_hub])
|
||||
|
||||
def test_assign_default_requirement_profile_raises_no_permissions(self):
|
||||
exc = vim.fault.NoPermission()
|
||||
exc.privilegeId = 'Fake privilege'
|
||||
self.mock_prof_mgr.AssignDefaultRequirementProfile = \
|
||||
MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.assign_default_storage_policy_to_datastore(
|
||||
self.mock_prof_mgr, self.mock_policy, self.mock_ds)
|
||||
self.assertEqual(excinfo.exception.strerror,
|
||||
'Not enough permissions. Required privilege: '
|
||||
'Fake privilege')
|
||||
|
||||
def test_assign_default_requirement_profile_raises_vim_fault(self):
|
||||
exc = vim.fault.VimFault()
|
||||
exc.msg = 'VimFault msg'
|
||||
self.mock_prof_mgr.AssignDefaultRequirementProfile = \
|
||||
MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareApiError) as excinfo:
|
||||
salt.utils.pbm.assign_default_storage_policy_to_datastore(
|
||||
self.mock_prof_mgr, self.mock_policy, self.mock_ds)
|
||||
self.assertEqual(excinfo.exception.strerror, 'VimFault msg')
|
||||
|
||||
def test_assign_default_requirement_profile_raises_runtime_fault(self):
|
||||
exc = vmodl.RuntimeFault()
|
||||
exc.msg = 'RuntimeFault msg'
|
||||
self.mock_prof_mgr.AssignDefaultRequirementProfile = \
|
||||
MagicMock(side_effect=exc)
|
||||
with self.assertRaises(VMwareRuntimeError) as excinfo:
|
||||
salt.utils.pbm.assign_default_storage_policy_to_datastore(
|
||||
self.mock_prof_mgr, self.mock_policy, self.mock_ds)
|
||||
self.assertEqual(excinfo.exception.strerror, 'RuntimeFault msg')
|
|
@@ -13,6 +13,7 @@ import ssl
 import sys
 
 # Import Salt testing libraries
+from tests.support.mixins import LoaderModuleMockMixin
 from tests.support.unit import TestCase, skipIf
 from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch, MagicMock, call, \
     PropertyMock

@@ -852,6 +853,96 @@ class IsConnectionToAVCenterTestCase(TestCase):
                              excinfo.exception.strerror)
 
 
+@skipIf(NO_MOCK, NO_MOCK_REASON)
+@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
+class GetNewServiceInstanceStub(TestCase, LoaderModuleMockMixin):
+    '''Tests for salt.utils.vmware.get_new_service_instance_stub'''
+    def setup_loader_modules(self):
+        return {salt.utils.vmware: {
+            '__virtual__': MagicMock(return_value='vmware'),
+            'sys': MagicMock(),
+            'ssl': MagicMock()}}
+
+    def setUp(self):
+        self.mock_stub = MagicMock(
+            host='fake_host:1000',
+            cookie='ignore"fake_cookie')
+        self.mock_si = MagicMock(
+            _stub=self.mock_stub)
+        self.mock_ret = MagicMock()
+        self.mock_new_stub = MagicMock()
+        self.context_dict = {}
+        patches = (('salt.utils.vmware.VmomiSupport.GetRequestContext',
+                    MagicMock(
+                        return_value=self.context_dict)),
+                   ('salt.utils.vmware.SoapStubAdapter',
+                    MagicMock(return_value=self.mock_new_stub)))
+        for mod, mock in patches:
+            patcher = patch(mod, mock)
+            patcher.start()
+            self.addCleanup(patcher.stop)
+
+        type(salt.utils.vmware.sys).version_info = \
+            PropertyMock(return_value=(2, 7, 9))
+        self.mock_context = MagicMock()
+        self.mock_create_default_context = \
+            MagicMock(return_value=self.mock_context)
+        salt.utils.vmware.ssl.create_default_context = \
+            self.mock_create_default_context
+
+    def tearDown(self):
+        for attr in ('mock_stub', 'mock_si', 'mock_ret', 'mock_new_stub',
+                     'context_dict', 'mock_context',
+                     'mock_create_default_context'):
+            delattr(self, attr)
+
+    def test_ssl_default_context_loaded(self):
+        salt.utils.vmware.get_new_service_instance_stub(
+            self.mock_si, 'fake_path')
+        self.mock_create_default_context.assert_called_once_with()
+        self.assertFalse(self.mock_context.check_hostname)
+        self.assertEqual(self.mock_context.verify_mode,
+                         salt.utils.vmware.ssl.CERT_NONE)
+
+    def test_ssl_default_context_not_loaded(self):
+        type(salt.utils.vmware.sys).version_info = \
+            PropertyMock(return_value=(2, 7, 8))
+        salt.utils.vmware.get_new_service_instance_stub(
+            self.mock_si, 'fake_path')
+        self.assertEqual(self.mock_create_default_context.call_count, 0)
+
+    def test_session_cookie_in_context(self):
+        salt.utils.vmware.get_new_service_instance_stub(
+            self.mock_si, 'fake_path')
+        self.assertEqual(self.context_dict['vcSessionCookie'], 'fake_cookie')
+
+    def test_get_new_stub(self):
+        mock_get_new_stub = MagicMock()
+        with patch('salt.utils.vmware.SoapStubAdapter', mock_get_new_stub):
+            salt.utils.vmware.get_new_service_instance_stub(
+                self.mock_si, 'fake_path', 'fake_ns', 'fake_version')
+        mock_get_new_stub.assert_called_once_with(
+            host='fake_host', ns='fake_ns', path='fake_path',
+            version='fake_version', poolSize=0, sslContext=self.mock_context)
+
+    def test_get_new_stub_2_7_8_python(self):
+        type(salt.utils.vmware.sys).version_info = \
+            PropertyMock(return_value=(2, 7, 8))
+        mock_get_new_stub = MagicMock()
+        with patch('salt.utils.vmware.SoapStubAdapter', mock_get_new_stub):
+            salt.utils.vmware.get_new_service_instance_stub(
+                self.mock_si, 'fake_path', 'fake_ns', 'fake_version')
+        mock_get_new_stub.assert_called_once_with(
+            host='fake_host', ns='fake_ns', path='fake_path',
+            version='fake_version', poolSize=0, sslContext=None)
+
+    def test_new_stub_returned(self):
+        ret = salt.utils.vmware.get_new_service_instance_stub(
+            self.mock_si, 'fake_path')
+        self.assertEqual(self.mock_new_stub.cookie, 'ignore"fake_cookie')
+        self.assertEqual(ret, self.mock_new_stub)
+
+
 @skipIf(NO_MOCK, NO_MOCK_REASON)
 @skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
 class GetServiceInstanceFromManagedObjectTestCase(TestCase):

@@ -14,6 +14,7 @@ from tests.support.unit import TestCase, skipIf
 from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch, MagicMock
 
 # Import Salt libraries
+from salt.exceptions import ArgumentValueError
 import salt.utils.vmware
 # Import Third Party Libs
 try:

@@ -46,14 +47,22 @@ class GetHostsTestCase(TestCase):
         self.mock_host1, self.mock_host2, self.mock_host3 = MagicMock(), \
                 MagicMock(), MagicMock()
         self.mock_prop_host1 = {'name': 'fake_hostname1',
-                               'object': self.mock_host1}
+                                'object': self.mock_host1}
         self.mock_prop_host2 = {'name': 'fake_hostname2',
-                               'object': self.mock_host2}
+                                'object': self.mock_host2}
         self.mock_prop_host3 = {'name': 'fake_hostname3',
-                               'object': self.mock_host3}
+                                'object': self.mock_host3}
         self.mock_prop_hosts = [self.mock_prop_host1, self.mock_prop_host2,
                                 self.mock_prop_host3]
 
+    def test_cluster_no_datacenter(self):
+        with self.assertRaises(ArgumentValueError) as excinfo:
+            salt.utils.vmware.get_hosts(self.mock_si,
+                                        cluster_name='fake_cluster')
+        self.assertEqual(excinfo.exception.strerror,
+                         'Must specify the datacenter when specifying the '
+                         'cluster')
+
     def test_get_si_no_datacenter_no_cluster(self):
         mock_get_mors = MagicMock()
         mock_get_root_folder = MagicMock(return_value=self.mock_root_folder)

@@ -124,23 +133,20 @@ class GetHostsTestCase(TestCase):
             self.assertEqual(res, [])
 
     def test_filter_cluster(self):
-        cluster1 = vim.ClusterComputeResource('fake_good_cluster')
-        cluster2 = vim.ClusterComputeResource('fake_bad_cluster')
-        # Mock cluster1.name and cluster2.name
-        cluster1._stub = MagicMock(InvokeAccessor=MagicMock(
-            return_value='fake_good_cluster'))
-        cluster2._stub = MagicMock(InvokeAccessor=MagicMock(
-            return_value='fake_bad_cluster'))
-        self.mock_prop_host1['parent'] = cluster2
-        self.mock_prop_host2['parent'] = cluster1
-        self.mock_prop_host3['parent'] = cluster1
+        self.mock_prop_host1['parent'] = vim.ClusterComputeResource('cluster')
+        self.mock_prop_host2['parent'] = vim.ClusterComputeResource('cluster')
+        self.mock_prop_host3['parent'] = vim.Datacenter('dc')
+        mock_get_cl_name = MagicMock(
+            side_effect=['fake_bad_cluster', 'fake_good_cluster'])
         with patch('salt.utils.vmware.get_mors_with_properties',
                    MagicMock(return_value=self.mock_prop_hosts)):
-            res = salt.utils.vmware.get_hosts(self.mock_si,
-                                              datacenter_name='fake_datacenter',
-                                              cluster_name='fake_good_cluster',
-                                              get_all_hosts=True)
-        self.assertEqual(res, [self.mock_host2, self.mock_host3])
+            with patch('salt.utils.vmware.get_managed_object_name',
+                       mock_get_cl_name):
+                res = salt.utils.vmware.get_hosts(
+                    self.mock_si, datacenter_name='fake_datacenter',
+                    cluster_name='fake_good_cluster', get_all_hosts=True)
+        self.assertEqual(mock_get_cl_name.call_count, 2)
+        self.assertEqual(res, [self.mock_host2])
 
     def test_no_hosts(self):
         with patch('salt.utils.vmware.get_mors_with_properties',
