removing azure code from repo

natalieswork 2023-05-22 14:21:25 -04:00
parent b897734f4a
commit 15849a5911
38 changed files with 0 additions and 20106 deletions

@@ -1,5 +0,0 @@
salt.cloud.clouds.azurearm
==========================
.. automodule:: salt.cloud.clouds.azurearm
:members:

@@ -1,5 +0,0 @@
salt.cloud.clouds.msazure
=========================
.. automodule:: salt.cloud.clouds.msazure
:members:

@@ -1,4 +0,0 @@
salt.fileserver.azurefs
=======================
.. automodule:: salt.fileserver.azurefs

@@ -1,5 +0,0 @@
salt.grains.metadata_azure
==========================
.. automodule:: salt.grains.metadata_azure
:members:

@@ -1,5 +0,0 @@
salt.modules.azurearm_compute
=============================
.. automodule:: salt.modules.azurearm_compute
:members:

@@ -1,6 +0,0 @@
salt.modules.azurearm_dns
=========================
.. automodule:: salt.modules.azurearm_dns
:members:
:undoc-members:

@@ -1,5 +0,0 @@
salt.modules.azurearm_network
=============================
.. automodule:: salt.modules.azurearm_network
:members:

@@ -1,5 +0,0 @@
salt.modules.azurearm_resource
==============================
.. automodule:: salt.modules.azurearm_resource
:members:

@@ -1,5 +0,0 @@
salt.pillar.azureblob
=====================
.. automodule:: salt.pillar.azureblob
:members:

@@ -1,5 +0,0 @@
salt.states.azurearm_compute
============================
.. automodule:: salt.states.azurearm_compute
:members:

@@ -1,5 +0,0 @@
salt.states.azurearm_dns
========================
.. automodule:: salt.states.azurearm_dns
:members:

@@ -1,5 +0,0 @@
salt.states.azurearm_network
============================
.. automodule:: salt.states.azurearm_network
:members:

@@ -1,5 +0,0 @@
salt.states.azurearm_resource
=============================
.. automodule:: salt.states.azurearm_resource
:members:

File diff suppressed because it is too large

@@ -1,486 +0,0 @@
==============================
Getting Started With Azure ARM
==============================
.. versionadded:: 2016.11.0
.. warning::
This cloud provider will be removed from Salt in version 3007 in favor of
the `saltext.azurerm Salt Extension
<https://github.com/salt-extensions/saltext-azurerm>`_
Azure is a cloud service by Microsoft providing virtual machines, SQL services,
media services, and more. Azure ARM (the Azure Resource Manager) is the
next-generation version of the Azure portal and API. This document describes
how to use Salt Cloud to create a virtual machine on Azure ARM with Salt
installed.
More information about Azure is located at `http://www.windowsazure.com/
<http://www.windowsazure.com/>`_.
Dependencies
============
* `azure <https://pypi.org/project/azure>`_ >= 2.0.0rc6
* `azure-common <https://pypi.org/project/azure-common>`_ >= 1.1.4
* `azure-mgmt <https://pypi.org/project/azure-mgmt>`_ >= 0.30.0rc6
* `azure-mgmt-compute <https://pypi.org/project/azure-mgmt-compute>`_ >= 0.33.0
* `azure-mgmt-network <https://pypi.org/project/azure-mgmt-network>`_ >= 0.30.0rc6
* `azure-mgmt-resource <https://pypi.org/project/azure-mgmt-resource>`_ >= 0.30.0
* `azure-mgmt-storage <https://pypi.org/project/azure-mgmt-storage>`_ >= 0.30.0rc6
* `azure-mgmt-web <https://pypi.org/project/azure-mgmt-web>`_ >= 0.30.0rc6
* `azure-storage <https://pypi.org/project/azure-storage>`_ >= 0.32.0
* `msrestazure <https://pypi.org/project/msrestazure/>`_ >= 0.4.21
* A Microsoft Azure account
* `Salt <https://github.com/saltstack/salt>`_
Installation Tips
=================
Because the ``azure`` library requires the ``cryptography`` library, which is
compiled on-the-fly by ``pip``, you may need to install the development tools
for your operating system.
Before you install ``azure`` with ``pip``, you should make sure that the
required libraries are installed.
Debian
------
For Debian and Ubuntu, the following command will ensure that the required
dependencies are installed:
.. code-block:: bash
sudo apt-get install build-essential libssl-dev libffi-dev python-dev
Red Hat
-------
For Fedora and RHEL-derivatives, the following command will ensure that the
required dependencies are installed:
.. code-block:: bash
sudo yum install gcc libffi-devel python-devel openssl-devel
Configuration
=============
Set up the provider config at ``/etc/salt/cloud.providers.d/azurearm.conf``:
.. code-block:: yaml
# Note: This example is for /etc/salt/cloud.providers.d/azurearm.conf
my-azurearm-config:
  driver: azurearm
  master: salt.example.com
  subscription_id: 01234567-890a-bcde-f012-34567890abdc
  # https://apps.dev.microsoft.com/#/appList
  username: <username>@<subdomain>.onmicrosoft.com
  password: verybadpass
  location: westus
  resource_group: my_rg

  # Optional
  network_resource_group: my_net_rg
  cleanup_disks: True
  cleanup_vhds: True
  cleanup_data_disks: True
  cleanup_interfaces: True
  custom_data: 'This is custom data'
  expire_publisher_cache: 604800 # 7 days
  expire_offer_cache: 518400 # 6 days
  expire_sku_cache: 432000 # 5 days
  expire_version_cache: 345600 # 4 days
  expire_group_cache: 14400 # 4 hours
  expire_interface_cache: 3600 # 1 hour
  expire_network_cache: 3600 # 1 hour
Cloud Profiles
==============
Set up an initial profile at ``/etc/salt/cloud.profiles``:
.. code-block:: yaml
azure-ubuntu-pass:
  provider: my-azurearm-config
  image: Canonical|UbuntuServer|14.04.5-LTS|14.04.201612050
  size: Standard_D1_v2
  location: eastus
  ssh_username: azureuser
  ssh_password: verybadpass

azure-ubuntu-key:
  provider: my-azurearm-config
  image: Canonical|UbuntuServer|14.04.5-LTS|14.04.201612050
  size: Standard_D1_v2
  location: eastus
  ssh_username: azureuser
  ssh_publickeyfile: /path/to/ssh_public_key.pub

azure-win2012:
  provider: my-azurearm-config
  image: MicrosoftWindowsServer|WindowsServer|2012-R2-Datacenter|latest
  size: Standard_D1_v2
  location: westus
  win_username: azureuser
  win_password: verybadpass
These options are described in more detail below. Once configured, the profile
can be realized with a salt command:
.. code-block:: bash
salt-cloud -p azure-ubuntu-pass newinstance
This will create a Salt minion instance named ``newinstance`` in Azure. If
the command was executed on the salt-master, its Salt key will automatically
be signed on the master.
Once the instance has been created with salt-minion installed, connectivity to
it can be verified with Salt:
.. code-block:: bash
salt newinstance test.version
Profile Options
===============
The following options are currently available for Azure ARM.
provider
--------
The name of the provider as configured in
``/etc/salt/cloud.providers.d/azurearm.conf``.
image
-----
Required. The name of the image to use to create a VM. Available images can be
viewed using the following command:
.. code-block:: bash
salt-cloud --list-images my-azurearm-config
As you will see in ``--list-images``, image names are composed of the following
fields, separated by the pipe (``|``) character:
.. code-block:: yaml
publisher: For example, Canonical or MicrosoftWindowsServer
offer: For example, UbuntuServer or WindowsServer
sku: Such as 14.04.5-LTS or 2012-R2-Datacenter
version: Such as 14.04.201612050 or latest
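For example, the Ubuntu image used in the profiles above breaks down into these fields:
.. code-block:: yaml
# image: Canonical|UbuntuServer|14.04.5-LTS|14.04.201612050
publisher: Canonical
offer: UbuntuServer
sku: 14.04.5-LTS
version: 14.04.201612050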
It is possible to specify the URL or resource ID path of a custom image that you
have access to, such as:
.. code-block:: yaml
https://<mystorage>.blob.core.windows.net/system/Microsoft.Compute/Images/<mystorage>/template-osDisk.01234567-890a-bcdef0123-4567890abcde.vhd
or:
.. code-block:: yaml
/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/myRG/providers/Microsoft.Compute/images/myImage
size
----
Required. The name of the size to use to create a VM. Available sizes can be
viewed using the following command:
.. code-block:: bash
salt-cloud --list-sizes my-azurearm-config
location
--------
Required. The name of the location to create a VM in. Available locations can
be viewed using the following command:
.. code-block:: bash
salt-cloud --list-locations my-azurearm-config
ssh_username
------------
Required for Linux. The admin user to add on the instance. It is also used to log
into the newly-created VM to install Salt.
ssh_keyfile
-----------
Required if using SSH key authentication. The path on the Salt master to the SSH private
key used during the minion bootstrap process.
ssh_publickeyfile
-----------------
Use either ``ssh_publickeyfile`` or ``ssh_password``. The path on the Salt master to the
SSH public key which will be pushed to the Linux VM.
ssh_password
------------
Use either ``ssh_publickeyfile`` or ``ssh_password``. The password for the admin user on
the newly-created Linux virtual machine.
win_username
------------
Required for Windows. The user to use to log into the newly-created Windows VM
to install Salt.
win_password
------------
Required for Windows. The password to use to log into the newly-created Windows
VM to install Salt.
win_installer
-------------
Required for Windows. The path to the Salt installer to be uploaded.
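A sketch of a Windows profile with the installer path included; the installer filename below is a placeholder for whatever Salt installer you have staged on the master:
.. code-block:: yaml
azure-win2012:
  provider: my-azurearm-config
  image: MicrosoftWindowsServer|WindowsServer|2012-R2-Datacenter|latest
  size: Standard_D1_v2
  location: westus
  win_username: azureuser
  win_password: verybadpass
  win_installer: /path/to/Salt-Minion-Setup.exe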
resource_group
--------------
Required. The resource group that all VM resources (VM, network interfaces,
etc) will be created in.
network_resource_group
----------------------
Optional. If specified, then the VM will be connected to the virtual network
in this resource group, rather than the parent resource group of the instance.
The VM interfaces and IPs will remain in the configured ``resource_group`` with
the VM.
network
-------
Required. The virtual network that the VM will be spun up in.
subnet
------
Optional. The subnet inside the virtual network that the VM will be spun up in.
Default is ``default``.
allocate_public_ip
------------------
Optional. Default is ``False``. If set to ``True``, a public IP will
be created and assigned to the VM.
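A sketch tying the networking options together; the resource group and network names reuse the placeholders from the examples above:
.. code-block:: yaml
my-azurearm-profile:
  provider: my-azurearm-config
  resource_group: my_rg
  network_resource_group: my_net_rg
  network: mynetwork
  subnet: default
  allocate_public_ip: True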
load_balancer
-------------
Optional. The load balancer for the VM's network interface to join. If
specified, the ``backend_pool`` option must also be set.
backend_pool
------------
Optional. Required if the ``load_balancer`` option is set. The load balancer's
backend pool that the VM's network interface will join.
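For example, with placeholder load balancer and pool names:
.. code-block:: yaml
my-azurearm-profile:
  provider: my-azurearm-config
  load_balancer: my_lb
  backend_pool: my_lb_backend_pool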
iface_name
----------
Optional. The name to apply to the VM's network interface. If not supplied, the
value will be set to ``<VM name>-iface0``.
dns_servers
-----------
Optional. A **list** of the DNS servers to configure for the network interface
(will be set on the VM by the DHCP of the VNET).
.. code-block:: yaml
my-azurearm-profile:
  provider: my-azurearm-config
  network: mynetwork
  dns_servers:
    - 10.1.1.4
    - 10.1.1.5
availability_set
----------------
Optional. If set, the VM will be added to the specified availability set.
volumes
-------
Optional. A list of dictionaries describing data disks to attach to the
instance can be specified using this setting. The data disk dictionaries are
passed entirely to the `Azure DataDisk object
<https://docs.microsoft.com/en-us/python/api/azure.mgmt.compute.v2017_12_01.models.datadisk?view=azure-python>`_,
so ad-hoc options can be handled as long as they are valid properties of the
object.
.. code-block:: yaml
volumes:
  - disk_size_gb: 50
    caching: ReadWrite
  - disk_size_gb: 100
    caching: ReadWrite
    managed_disk:
      storage_account_type: Standard_LRS
cleanup_disks
-------------
Optional. Default is ``False``. If set to ``True``, disks will be cleaned up
when the VM that they belong to is deleted.
cleanup_vhds
------------
Optional. Default is ``False``. If set to ``True``, VHDs will be cleaned up
when the VM and disk that they belong to are deleted. Requires ``cleanup_disks``
to be set to ``True``.
cleanup_data_disks
------------------
Optional. Default is ``False``. If set to ``True``, data disks (non-root
volumes) will be cleaned up when the VM that they are attached to is deleted.
Requires ``cleanup_disks`` to be set to ``True``.
cleanup_interfaces
------------------
Optional. Default is ``False``. Normally when a VM is deleted, its associated
interfaces and IPs are retained. This is useful if you expect the deleted VM
to be recreated with the same name and network settings. If you would like
interfaces and IPs to be deleted when their associated VM is deleted, set this
to ``True``.
userdata
--------
Optional. Any custom cloud data that needs to be specified. How this data is
used depends on the operating system and image that is used. For instance,
Linux images that use ``cloud-init`` will import this data for use with that
program. Some Windows images will create a file with a copy of this data, and
others will ignore it. If a Windows image creates a file, then the location
will depend upon the version of Windows. This will be ignored if the
``userdata_file`` is specified.
userdata_file
-------------
Optional. The path to a file to be read and submitted to Azure as user data.
How this is used depends on the operating system that is being deployed. If
used, any ``userdata`` setting will be ignored.
userdata_sendkeys
-----------------
Optional. Set to ``True`` in order to generate salt minion keys and provide
them as variables to the userdata script when running it through the template
renderer. The keys can be referenced as ``{{opts['priv_key']}}`` and
``{{opts['pub_key']}}``.
userdata_template
-----------------
Optional. Enter the renderer, such as ``jinja``, to be used for the userdata
script template.
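A sketch combining the userdata options; the template path is a placeholder. Inside the rendered script, the minion keys are available as described under ``userdata_sendkeys``:
.. code-block:: yaml
my-azurearm-profile:
  provider: my-azurearm-config
  userdata_file: /srv/cloud/userdata.sh.jinja
  userdata_template: jinja
  userdata_sendkeys: True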
wait_for_ip_timeout
-------------------
Optional. Default is ``600``. When waiting for a VM to be created, Salt Cloud
will attempt to connect to the VM's IP address until it starts responding. This
setting specifies the maximum time to wait for a response.
wait_for_ip_interval
--------------------
Optional. Default is ``10``. How long to wait between attempts to connect to
the VM's IP.
wait_for_ip_interval_multiplier
-------------------------------
Optional. Default is ``1``. Increase the interval by this multiplier after
each request; helps with throttling.
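As an illustration (the multiplier value here is arbitrary), the settings below would poll after roughly 10, 15, 22.5, ... seconds, each interval growing by half, until the 600-second timeout is reached:
.. code-block:: yaml
my-azurearm-profile:
  provider: my-azurearm-config
  wait_for_ip_timeout: 600
  wait_for_ip_interval: 10
  wait_for_ip_interval_multiplier: 1.5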
expire_publisher_cache
----------------------
Optional. Default is ``604800``. When fetching image data using
``--list-images``, a number of web calls need to be made to the Azure ARM API.
This is normally very fast when performed using a VM that exists inside Azure
itself, but can be very slow when made from an external connection.
By default, the publisher data will be cached, and only updated every ``604800``
seconds (7 days). If you need the publisher cache to be updated at a different
frequency, change this setting. Setting it to ``0`` will turn off the publisher
cache.
expire_offer_cache
------------------
Optional. Default is ``518400``. See ``expire_publisher_cache`` for details on
why this exists.
By default, the offer data will be cached, and only updated every ``518400``
seconds (6 days). If you need the offer cache to be updated at a different
frequency, change this setting. Setting it to ``0`` will turn off the offer
cache.
expire_sku_cache
----------------
Optional. Default is ``432000``. See ``expire_publisher_cache`` for details on
why this exists.
By default, the sku data will be cached, and only updated every ``432000``
seconds (5 days). If you need the sku cache to be updated at a different
frequency, change this setting. Setting it to ``0`` will turn off the sku
cache.
expire_version_cache
--------------------
Optional. Default is ``345600``. See ``expire_publisher_cache`` for details on
why this exists.
By default, the version data will be cached, and only updated every ``345600``
seconds (4 days). If you need the version cache to be updated at a different
frequency, change this setting. Setting it to ``0`` will turn off the version
cache.
expire_group_cache
------------------
Optional. Default is ``14400``. See ``expire_publisher_cache`` for details on
why this exists.
By default, the resource group data will be cached, and only updated every
``14400`` seconds (4 hours). If you need the resource group cache to be updated
at a different frequency, change this setting. Setting it to ``0`` will turn
off the resource group cache.
expire_interface_cache
----------------------
Optional. Default is ``3600``. See ``expire_publisher_cache`` for details on
why this exists.
By default, the interface data will be cached, and only updated every ``3600``
seconds (1 hour). If you need the interface cache to be updated at a different
frequency, change this setting. Setting it to ``0`` will turn off the interface
cache.
expire_network_cache
--------------------
Optional. Default is ``3600``. See ``expire_publisher_cache`` for details on
why this exists.
By default, the network data will be cached, and only updated every ``3600``
seconds (1 hour). If you need the network cache to be updated at a different
frequency, change this setting. Setting it to ``0`` will turn off the network
cache.
Other Options
=============
Other options relevant to Azure ARM.
storage_account
---------------
Required for actions involving an Azure storage account.
storage_key
-----------
Required for actions involving an Azure storage account.
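A minimal sketch, assuming these are placed alongside the other provider settings; both values are placeholders:
.. code-block:: yaml
my-azurearm-config:
  driver: azurearm
  storage_account: mystorageaccount
  storage_key: '<storage account access key>'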
Show Instance
=============
This action is a thin wrapper around ``--full-query``, which displays details on
a single instance only. In an environment with several machines, this will save
a user from having to sort through all instance data, just to examine a single
instance.
.. code-block:: bash
salt-cloud -a show_instance myinstance

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -1,396 +0,0 @@
"""
The backend for serving files from the Azure blob storage service.
.. versionadded:: 2015.8.0
To enable, add ``azurefs`` to the :conf_master:`fileserver_backend` option in
the Master config file.
.. code-block:: yaml
fileserver_backend:
- azurefs
Starting in Salt 2018.3.0, this fileserver requires the standalone Azure
Storage SDK for Python. Theoretically any version >= v0.20.0 should work, but
it was developed against the v0.33.0 version.
Each storage container will be mapped to an environment. By default, containers
will be mapped to the ``base`` environment. You can override this behavior with
the ``saltenv`` configuration option. You can have an unlimited number of
storage containers, and can have a storage container serve multiple
environments, or have multiple storage containers mapped to the same
environment. Normal first-found rules apply, and storage containers are
searched in the order they are defined.
You must have either an account_key or a sas_token defined for each container,
if it is private. If you use a sas_token, it must have READ and LIST
permissions.
.. code-block:: yaml
azurefs:
  - account_name: my_storage
    account_key: 'fNH9cRp0+qVIVYZ+5rnZAhHc9ycOUcJnHtzpfOr0W0sxrtL2KVLuMe1xDfLwmfed+JJInZaEdWVCPHD4d/oqeA=='
    container_name: my_container
  - account_name: my_storage
    sas_token: 'ss=b&sp=&sv=2015-07-08&sig=cohxXabx8FQdXsSEHyUXMjsSfNH2tZ2OB97Ou44pkRE%3D&srt=co&se=2017-04-18T21%3A38%3A01Z'
    container_name: my_dev_container
    saltenv: dev
  - account_name: my_storage
    container_name: my_public_container
.. note::
Do not include the leading ? for sas_token if generated from the web
"""
import base64
import logging
import os
import shutil
import salt.fileserver
import salt.utils.files
import salt.utils.gzip_util
import salt.utils.hashutils
import salt.utils.json
import salt.utils.path
import salt.utils.stringutils
from salt.utils.versions import Version
try:
import azure.storage
if Version(azure.storage.__version__) < Version("0.20.0"):
raise ImportError("azure.storage.__version__ must be >= 0.20.0")
HAS_AZURE = True
except (ImportError, AttributeError):
HAS_AZURE = False
__virtualname__ = "azurefs"
log = logging.getLogger(__name__)
def __virtual__():
"""
Only load if defined in fileserver_backend and azure.storage is present
"""
if __virtualname__ not in __opts__["fileserver_backend"]:
return False
if not HAS_AZURE:
return False
if "azurefs" not in __opts__:
return False
if not _validate_config():
return False
return True
def find_file(path, saltenv="base", **kwargs):
"""
Search the environment for the relative path
"""
fnd = {"path": "", "rel": ""}
for container in __opts__.get("azurefs", []):
if container.get("saltenv", "base") != saltenv:
continue
full = os.path.join(_get_container_path(container), path)
if os.path.isfile(full) and not salt.fileserver.is_file_ignored(__opts__, path):
fnd["path"] = full
fnd["rel"] = path
try:
# Converting the stat result to a list, the elements of the
# list correspond to the following stat_result params:
# 0 => st_mode=33188
# 1 => st_ino=10227377
# 2 => st_dev=65026
# 3 => st_nlink=1
# 4 => st_uid=1000
# 5 => st_gid=1000
# 6 => st_size=1056233
# 7 => st_atime=1468284229
# 8 => st_mtime=1456338235
# 9 => st_ctime=1456338235
fnd["stat"] = list(os.stat(full))
except Exception: # pylint: disable=broad-except
pass
return fnd
return fnd
def envs():
"""
Each container configuration can have an environment setting, or defaults
to base
"""
saltenvs = []
for container in __opts__.get("azurefs", []):
saltenvs.append(container.get("saltenv", "base"))
# Remove duplicates
return list(set(saltenvs))
def serve_file(load, fnd):
"""
Return a chunk from a file based on the data received
"""
ret = {"data": "", "dest": ""}
required_load_keys = {"path", "loc", "saltenv"}
if not all(x in load for x in required_load_keys):
log.debug(
"Not all of the required keys present in payload. Missing: %s",
", ".join(required_load_keys.difference(load)),
)
return ret
if not fnd["path"]:
return ret
ret["dest"] = fnd["rel"]
gzip = load.get("gzip", None)
fpath = os.path.normpath(fnd["path"])
with salt.utils.files.fopen(fpath, "rb") as fp_:
fp_.seek(load["loc"])
data = fp_.read(__opts__["file_buffer_size"])
if data and not salt.utils.files.is_binary(fpath):
data = data.decode(__salt_system_encoding__)
if gzip and data:
data = salt.utils.gzip_util.compress(data, gzip)
ret["gzip"] = gzip
ret["data"] = data
return ret
def update():
"""
Update caches of the storage containers.
Compares the md5 of the files on disk to the md5 of the blobs in the
container, and only updates if necessary.
Also processes deletions by walking the container caches and comparing
with the list of blobs in the container
"""
for container in __opts__["azurefs"]:
path = _get_container_path(container)
try:
if not os.path.exists(path):
os.makedirs(path)
elif not os.path.isdir(path):
shutil.rmtree(path)
os.makedirs(path)
except Exception as exc: # pylint: disable=broad-except
log.exception("Error occurred creating cache directory for azurefs")
continue
blob_service = _get_container_service(container)
name = container["container_name"]
try:
blob_list = blob_service.list_blobs(name)
except Exception as exc: # pylint: disable=broad-except
log.exception("Error occurred fetching blob list for azurefs")
continue
# Walk the cache directory searching for deletions
blob_names = [blob.name for blob in blob_list]
blob_set = set(blob_names)
for root, dirs, files in salt.utils.path.os_walk(path):
for f in files:
fname = os.path.join(root, f)
relpath = os.path.relpath(fname, path)
if relpath not in blob_set:
salt.fileserver.wait_lock(fname + ".lk", fname)
try:
os.unlink(fname)
except Exception: # pylint: disable=broad-except
pass
if not dirs and not files:
shutil.rmtree(root)
for blob in blob_list:
fname = os.path.join(path, blob.name)
update = False
if os.path.exists(fname):
# File exists, check the hashes
source_md5 = blob.properties.content_settings.content_md5
# get_hash returns a hex digest string; convert it to raw bytes before
# base64-encoding so it can be compared with the blob's content_md5
local_md5 = base64.b64encode(
bytes.fromhex(salt.utils.hashutils.get_hash(fname, "md5"))
).decode("ascii")
if local_md5 != source_md5:
update = True
else:
update = True
if update:
if not os.path.exists(os.path.dirname(fname)):
os.makedirs(os.path.dirname(fname))
# Lock writes
lk_fn = fname + ".lk"
salt.fileserver.wait_lock(lk_fn, fname)
with salt.utils.files.fopen(lk_fn, "w"):
pass
try:
blob_service.get_blob_to_path(name, blob.name, fname)
except Exception as exc: # pylint: disable=broad-except
log.exception("Error occurred fetching blob from azurefs")
continue
# Unlock writes
try:
os.unlink(lk_fn)
except Exception: # pylint: disable=broad-except
pass
# Write out file list
container_list = path + ".list"
lk_fn = container_list + ".lk"
salt.fileserver.wait_lock(lk_fn, container_list)
with salt.utils.files.fopen(lk_fn, "w"):
pass
with salt.utils.files.fopen(container_list, "w") as fp_:
salt.utils.json.dump(blob_names, fp_)
try:
os.unlink(lk_fn)
except Exception: # pylint: disable=broad-except
pass
try:
hash_cachedir = os.path.join(__opts__["cachedir"], "azurefs", "hashes")
shutil.rmtree(hash_cachedir)
except Exception: # pylint: disable=broad-except
log.exception("Problem occurred trying to invalidate hash cache for azurefs")
def file_hash(load, fnd):
"""
Return a file hash based on the hash type set in the master config
"""
if not all(x in load for x in ("path", "saltenv")):
return "", None
ret = {"hash_type": __opts__["hash_type"]}
relpath = fnd["rel"]
path = fnd["path"]
hash_cachedir = os.path.join(__opts__["cachedir"], "azurefs", "hashes")
hashdest = salt.utils.path.join(
hash_cachedir,
load["saltenv"],
"{}.hash.{}".format(relpath, __opts__["hash_type"]),
)
if not os.path.isfile(hashdest):
if not os.path.exists(os.path.dirname(hashdest)):
os.makedirs(os.path.dirname(hashdest))
ret["hsum"] = salt.utils.hashutils.get_hash(path, __opts__["hash_type"])
with salt.utils.files.fopen(hashdest, "w+") as fp_:
fp_.write(salt.utils.stringutils.to_str(ret["hsum"]))
return ret
else:
with salt.utils.files.fopen(hashdest, "rb") as fp_:
ret["hsum"] = salt.utils.stringutils.to_unicode(fp_.read())
return ret
def file_list(load):
"""
Return a list of all files in a specified environment
"""
ret = set()
try:
for container in __opts__["azurefs"]:
if container.get("saltenv", "base") != load["saltenv"]:
continue
container_list = _get_container_path(container) + ".list"
lk = container_list + ".lk"
salt.fileserver.wait_lock(lk, container_list, 5)
if not os.path.exists(container_list):
continue
with salt.utils.files.fopen(container_list, "r") as fp_:
ret.update(set(salt.utils.json.load(fp_)))
except Exception as exc: # pylint: disable=broad-except
log.error(
"azurefs: an error occurred retrieving file lists. "
"It should be resolved next time the fileserver "
"updates. Please do not manually modify the azurefs "
"cache directory."
)
return list(ret)
def dir_list(load):
"""
Return a list of all directories in a specified environment
"""
ret = set()
files = file_list(load)
for f in files:
dirname = f
while dirname:
dirname = os.path.dirname(dirname)
if dirname:
ret.add(dirname)
return list(ret)
def _get_container_path(container):
"""
Get the cache path for the container in question
Cache paths are generated by combining the account name, container name,
and saltenv, separated by underscores
"""
root = os.path.join(__opts__["cachedir"], "azurefs")
container_dir = "{}_{}_{}".format(
container.get("account_name", ""),
container.get("container_name", ""),
container.get("saltenv", "base"),
)
return os.path.join(root, container_dir)
def _get_container_service(container):
"""
Get the azure block blob service for the container in question
Try account_key, sas_token, and no auth in that order
"""
if "account_key" in container:
account = azure.storage.CloudStorageAccount(
container["account_name"], account_key=container["account_key"]
)
elif "sas_token" in container:
account = azure.storage.CloudStorageAccount(
container["account_name"], sas_token=container["sas_token"]
)
else:
account = azure.storage.CloudStorageAccount(container["account_name"])
blob_service = account.create_block_blob_service()
return blob_service
def _validate_config():
"""
Validate azurefs config, return False if it doesn't validate
"""
if not isinstance(__opts__["azurefs"], list):
log.error("azurefs configuration is not formed as a list, skipping azurefs")
return False
for container in __opts__["azurefs"]:
if not isinstance(container, dict):
log.error(
"One or more entries in the azurefs configuration list are "
"not formed as a dict. Skipping azurefs: %s",
container,
)
return False
if "account_name" not in container or "container_name" not in container:
log.error(
"An azurefs container configuration is missing either an "
"account_name or a container_name: %s",
container,
)
return False
return True

@@ -1,45 +0,0 @@
"""
Grains from cloud metadata servers at 169.254.169.254 in Azure Virtual Machine
.. versionadded:: 3006.0
:depends: requests
To enable these grains, which pull from the http://169.254.169.254/metadata/instance?api-version=2020-09-01
metadata server, set `metadata_server_grains: True` in the minion config.
.. code-block:: yaml
metadata_server_grains: True
"""
import logging
import salt.utils.http as http
import salt.utils.json
HOST = "http://169.254.169.254"
URL = f"{HOST}/metadata/instance?api-version=2020-09-01"
log = logging.getLogger(__name__)
def __virtual__():
# Check if metadata_server_grains minion option is enabled
if __opts__.get("metadata_server_grains", False) is False:
return False
azuretest = http.query(
URL, status=True, headers=True, header_list=["Metadata: true"]
)
if azuretest.get("status", 404) != 200:
return False
return True
def metadata():
"""Takes no arguments, returns a dictionary of metadata values from Azure."""
log.debug("All checks true - loading azure metadata")
result = http.query(URL, headers=True, header_list=["Metadata: true"])
metadata = salt.utils.json.loads(result.get("body", {}))
return metadata

@@ -1,754 +0,0 @@
"""
Azure (ARM) Compute Execution Module
.. versionadded:: 2019.2.0
.. warning::
This cloud provider will be removed from Salt in version 3007 in favor of
the `saltext.azurerm Salt Extension
<https://github.com/salt-extensions/saltext-azurerm>`_
:maintainer: <devops@eitr.tech>
:maturity: new
:depends:
* `azure <https://pypi.python.org/pypi/azure>`_ >= 2.0.0
* `azure-common <https://pypi.python.org/pypi/azure-common>`_ >= 1.1.8
* `azure-mgmt <https://pypi.python.org/pypi/azure-mgmt>`_ >= 1.0.0
* `azure-mgmt-compute <https://pypi.python.org/pypi/azure-mgmt-compute>`_ >= 1.0.0
* `azure-mgmt-network <https://pypi.python.org/pypi/azure-mgmt-network>`_ >= 1.7.1
* `azure-mgmt-resource <https://pypi.python.org/pypi/azure-mgmt-resource>`_ >= 1.1.0
* `azure-mgmt-storage <https://pypi.python.org/pypi/azure-mgmt-storage>`_ >= 1.0.0
* `azure-mgmt-web <https://pypi.python.org/pypi/azure-mgmt-web>`_ >= 0.32.0
* `azure-storage <https://pypi.python.org/pypi/azure-storage>`_ >= 0.34.3
* `msrestazure <https://pypi.python.org/pypi/msrestazure>`_ >= 0.4.21
:platform: linux
:configuration: This module requires Azure Resource Manager credentials to be passed as keyword arguments
to every function in order to work properly.
Required provider parameters:
if using username and password:
* ``subscription_id``
* ``username``
* ``password``
if using a service principal:
* ``subscription_id``
* ``tenant``
* ``client_id``
* ``secret``
Optional provider parameters:
**cloud_environment**: Used to point the cloud driver to different API endpoints, such as Azure GovCloud.
Possible values:
* ``AZURE_PUBLIC_CLOUD`` (default)
* ``AZURE_CHINA_CLOUD``
* ``AZURE_US_GOV_CLOUD``
* ``AZURE_GERMAN_CLOUD``
"""
# Python libs
import logging
from functools import wraps
import salt.utils.args
import salt.utils.azurearm
import salt.utils.versions
# Azure libs
HAS_LIBS = False
try:
import azure.mgmt.compute.models # pylint: disable=unused-import
from msrest.exceptions import SerializationError
from msrestazure.azure_exceptions import CloudError
HAS_LIBS = True
except ImportError:
pass
__virtualname__ = "azurearm_compute"
log = logging.getLogger(__name__)
def __virtual__():
if not HAS_LIBS:
return (
False,
"The following dependencies are required to use the AzureARM modules: "
"Microsoft Azure SDK for Python >= 2.0rc6, "
"MS REST Azure (msrestazure) >= 0.4",
)
return __virtualname__
def _deprecation_message(function):
"""
Decorator wrapper to warn about azurearm deprecation
"""
@wraps(function)
def wrapped(*args, **kwargs):
salt.utils.versions.warn_until(
"Chlorine",
"The 'azurearm' functionality in Salt has been deprecated and its "
"functionality will be removed in version 3007 in favor of the "
"saltext.azurerm Salt Extension. "
"(https://github.com/salt-extensions/saltext-azurerm)",
category=FutureWarning,
)
ret = function(*args, **salt.utils.args.clean_kwargs(**kwargs))
return ret
return wrapped
@_deprecation_message
def availability_set_create_or_update(
name, resource_group, **kwargs
): # pylint: disable=invalid-name
"""
.. versionadded:: 2019.2.0
Create or update an availability set.
:param name: The availability set to create.
:param resource_group: The resource group name assigned to the
availability set.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.availability_set_create_or_update testset testgroup
"""
if "location" not in kwargs:
rg_props = __salt__["azurearm_resource.resource_group_get"](
resource_group, **kwargs
)
if "error" in rg_props:
log.error("Unable to determine location from resource group specified.")
return False
kwargs["location"] = rg_props["location"]
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
# Use VM names to link to the IDs of existing VMs.
if isinstance(kwargs.get("virtual_machines"), list):
vm_list = []
for vm_name in kwargs.get("virtual_machines"):
vm_instance = __salt__["azurearm_compute.virtual_machine_get"](
name=vm_name, resource_group=resource_group, **kwargs
)
if "error" not in vm_instance:
vm_list.append({"id": str(vm_instance["id"])})
kwargs["virtual_machines"] = vm_list
try:
setmodel = __utils__["azurearm.create_object_model"](
"compute", "AvailabilitySet", **kwargs
)
except TypeError as exc:
result = {"error": "The object model could not be built. ({})".format(str(exc))}
return result
try:
av_set = compconn.availability_sets.create_or_update(
resource_group_name=resource_group,
availability_set_name=name,
parameters=setmodel,
)
result = av_set.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
except SerializationError as exc:
result = {
"error": "The object model could not be parsed. ({})".format(str(exc))
}
return result
@_deprecation_message
def availability_set_delete(name, resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
Delete an availability set.
:param name: The availability set to delete.
:param resource_group: The resource group name assigned to the
availability set.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.availability_set_delete testset testgroup
"""
result = False
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
compconn.availability_sets.delete(
resource_group_name=resource_group, availability_set_name=name
)
result = True
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
return result
@_deprecation_message
def availability_set_get(name, resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
Get a dictionary representing an availability set's properties.
:param name: The availability set to get.
:param resource_group: The resource group name assigned to the
availability set.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.availability_set_get testset testgroup
"""
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
av_set = compconn.availability_sets.get(
resource_group_name=resource_group, availability_set_name=name
)
result = av_set.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def availability_sets_list(resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
List all availability sets within a resource group.
:param resource_group: The resource group name to list availability
sets within.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.availability_sets_list testgroup
"""
result = {}
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
avail_sets = __utils__["azurearm.paged_object_to_list"](
compconn.availability_sets.list(resource_group_name=resource_group)
)
for avail_set in avail_sets:
result[avail_set["name"]] = avail_set
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def availability_sets_list_available_sizes(
name, resource_group, **kwargs
): # pylint: disable=invalid-name
"""
.. versionadded:: 2019.2.0
List all available virtual machine sizes that can be used to
create a new virtual machine in an existing availability set.
:param name: The availability set name to list available
virtual machine sizes within.
:param resource_group: The resource group name to list available
availability set sizes within.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.availability_sets_list_available_sizes testset testgroup
"""
result = {}
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
sizes = __utils__["azurearm.paged_object_to_list"](
compconn.availability_sets.list_available_sizes(
resource_group_name=resource_group, availability_set_name=name
)
)
for size in sizes:
result[size["name"]] = size
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machine_capture(
name, destination_name, resource_group, prefix="capture-", overwrite=False, **kwargs
):
"""
.. versionadded:: 2019.2.0
Captures the VM by copying virtual hard disks of the VM and outputs
a template that can be used to create similar VMs.
:param name: The name of the virtual machine.
:param destination_name: The destination container name.
:param resource_group: The resource group name assigned to the
virtual machine.
:param prefix: (Default: 'capture-') The captured virtual hard disk's name prefix.
:param overwrite: (Default: False) Overwrite the destination disk in case of conflict.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machine_capture testvm testcontainer testgroup
"""
# pylint: disable=invalid-name
VirtualMachineCaptureParameters = getattr(
azure.mgmt.compute.models, "VirtualMachineCaptureParameters"
)
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
# pylint: disable=invalid-name
vm = compconn.virtual_machines.capture(
resource_group_name=resource_group,
vm_name=name,
parameters=VirtualMachineCaptureParameters(
vhd_prefix=prefix,
destination_container_name=destination_name,
overwrite_vhds=overwrite,
),
)
vm.wait()
vm_result = vm.result()
result = vm_result.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machine_get(name, resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
Retrieves information about the model view or the instance view of a
virtual machine.
:param name: The name of the virtual machine.
:param resource_group: The resource group name assigned to the
virtual machine.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machine_get testvm testgroup
"""
expand = kwargs.get("expand")
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
# pylint: disable=invalid-name
vm = compconn.virtual_machines.get(
resource_group_name=resource_group, vm_name=name, expand=expand
)
result = vm.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machine_convert_to_managed_disks(
name, resource_group, **kwargs
): # pylint: disable=invalid-name
"""
.. versionadded:: 2019.2.0
Converts virtual machine disks from blob-based to managed disks. Virtual
machine must be stop-deallocated before invoking this operation.
:param name: The name of the virtual machine to convert.
:param resource_group: The resource group name assigned to the
virtual machine.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machine_convert_to_managed_disks testvm testgroup
"""
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
# pylint: disable=invalid-name
vm = compconn.virtual_machines.convert_to_managed_disks(
resource_group_name=resource_group, vm_name=name
)
vm.wait()
vm_result = vm.result()
result = vm_result.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machine_deallocate(name, resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
Power off a virtual machine and deallocate compute resources.
:param name: The name of the virtual machine to deallocate.
:param resource_group: The resource group name assigned to the
virtual machine.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machine_deallocate testvm testgroup
"""
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
# pylint: disable=invalid-name
vm = compconn.virtual_machines.deallocate(
resource_group_name=resource_group, vm_name=name
)
vm.wait()
vm_result = vm.result()
result = vm_result.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machine_generalize(name, resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
Set the state of a virtual machine to 'generalized'.
:param name: The name of the virtual machine.
:param resource_group: The resource group name assigned to the
virtual machine.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machine_generalize testvm testgroup
"""
result = False
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
compconn.virtual_machines.generalize(
resource_group_name=resource_group, vm_name=name
)
result = True
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
return result
@_deprecation_message
def virtual_machines_list(resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
List all virtual machines within a resource group.
:param resource_group: The resource group name to list virtual
machines within.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machines_list testgroup
"""
result = {}
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
vms = __utils__["azurearm.paged_object_to_list"](
compconn.virtual_machines.list(resource_group_name=resource_group)
)
for vm in vms: # pylint: disable=invalid-name
result[vm["name"]] = vm
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machines_list_all(**kwargs):
"""
.. versionadded:: 2019.2.0
List all virtual machines within a subscription.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machines_list_all
"""
result = {}
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
vms = __utils__["azurearm.paged_object_to_list"](
compconn.virtual_machines.list_all()
)
for vm in vms: # pylint: disable=invalid-name
result[vm["name"]] = vm
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machines_list_available_sizes(
name, resource_group, **kwargs
): # pylint: disable=invalid-name
"""
.. versionadded:: 2019.2.0
Lists all available virtual machine sizes to which the specified virtual
machine can be resized.
:param name: The name of the virtual machine.
:param resource_group: The resource group name assigned to the
virtual machine.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machines_list_available_sizes testvm testgroup
"""
result = {}
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
sizes = __utils__["azurearm.paged_object_to_list"](
compconn.virtual_machines.list_available_sizes(
resource_group_name=resource_group, vm_name=name
)
)
for size in sizes:
result[size["name"]] = size
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machine_power_off(name, resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
Power off (stop) a virtual machine.
:param name: The name of the virtual machine to stop.
:param resource_group: The resource group name assigned to the
virtual machine.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machine_power_off testvm testgroup
"""
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
# pylint: disable=invalid-name
vm = compconn.virtual_machines.power_off(
resource_group_name=resource_group, vm_name=name
)
vm.wait()
vm_result = vm.result()
result = vm_result.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machine_restart(name, resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
Restart a virtual machine.
:param name: The name of the virtual machine to restart.
:param resource_group: The resource group name assigned to the
virtual machine.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machine_restart testvm testgroup
"""
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
# pylint: disable=invalid-name
vm = compconn.virtual_machines.restart(
resource_group_name=resource_group, vm_name=name
)
vm.wait()
vm_result = vm.result()
result = vm_result.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machine_start(name, resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
Power on (start) a virtual machine.
:param name: The name of the virtual machine to start.
:param resource_group: The resource group name assigned to the
virtual machine.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machine_start testvm testgroup
"""
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
# pylint: disable=invalid-name
vm = compconn.virtual_machines.start(
resource_group_name=resource_group, vm_name=name
)
vm.wait()
vm_result = vm.result()
result = vm_result.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def virtual_machine_redeploy(name, resource_group, **kwargs):
"""
.. versionadded:: 2019.2.0
Redeploy a virtual machine.
:param name: The name of the virtual machine to redeploy.
:param resource_group: The resource group name assigned to the
virtual machine.
CLI Example:
.. code-block:: bash
salt-call azurearm_compute.virtual_machine_redeploy testvm testgroup
"""
compconn = __utils__["azurearm.get_client"]("compute", **kwargs)
try:
# pylint: disable=invalid-name
vm = compconn.virtual_machines.redeploy(
resource_group_name=resource_group, vm_name=name
)
vm.wait()
vm_result = vm.result()
result = vm_result.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("compute", str(exc), **kwargs)
result = {"error": str(exc)}
return result

@@ -1,552 +0,0 @@
"""
Azure (ARM) DNS Execution Module
.. versionadded:: 3000
.. warning::
This cloud provider will be removed from Salt in version 3007 in favor of
the `saltext.azurerm Salt Extension
<https://github.com/salt-extensions/saltext-azurerm>`_
:maintainer: <devops@eitr.tech>
:maturity: new
:depends:
* `azure <https://pypi.python.org/pypi/azure>`_ >= 2.0.0
* `azure-common <https://pypi.python.org/pypi/azure-common>`_ >= 1.1.8
* `azure-mgmt <https://pypi.python.org/pypi/azure-mgmt>`_ >= 1.0.0
* `azure-mgmt-compute <https://pypi.python.org/pypi/azure-mgmt-compute>`_ >= 1.0.0
* `azure-mgmt-dns <https://pypi.python.org/pypi/azure-mgmt-dns>`_ >= 2.0.0rc1
* `azure-mgmt-network <https://pypi.python.org/pypi/azure-mgmt-network>`_ >= 1.7.1
* `azure-mgmt-resource <https://pypi.python.org/pypi/azure-mgmt-resource>`_ >= 1.1.0
* `azure-mgmt-storage <https://pypi.python.org/pypi/azure-mgmt-storage>`_ >= 1.0.0
* `azure-mgmt-web <https://pypi.python.org/pypi/azure-mgmt-web>`_ >= 0.32.0
* `azure-storage <https://pypi.python.org/pypi/azure-storage>`_ >= 0.34.3
* `msrestazure <https://pypi.python.org/pypi/msrestazure>`_ >= 0.4.21
:platform: linux
:configuration:
This module requires Azure Resource Manager credentials to be passed as keyword arguments
to every function in order to work properly.
Required provider parameters:
if using username and password:
* ``subscription_id``
* ``username``
* ``password``
if using a service principal:
* ``subscription_id``
* ``tenant``
* ``client_id``
* ``secret``
Optional provider parameters:
**cloud_environment**: Used to point the cloud driver to different API endpoints, such as Azure GovCloud.
Possible values:
* ``AZURE_PUBLIC_CLOUD`` (default)
* ``AZURE_CHINA_CLOUD``
* ``AZURE_US_GOV_CLOUD``
* ``AZURE_GERMAN_CLOUD``
"""
# Python libs
import logging
from functools import wraps
import salt.utils.args
import salt.utils.azurearm
import salt.utils.versions
# Azure libs
HAS_LIBS = False
try:
import azure.mgmt.dns.models # pylint: disable=unused-import
from msrest.exceptions import SerializationError
from msrestazure.azure_exceptions import CloudError
HAS_LIBS = True
except ImportError:
pass
__virtualname__ = "azurearm_dns"
log = logging.getLogger(__name__)
def __virtual__():
if not HAS_LIBS:
return (
False,
"The following dependencies are required to use the AzureARM modules: "
"Microsoft Azure SDK for Python >= 2.0rc6, "
"MS REST Azure (msrestazure) >= 0.4",
)
return __virtualname__
def _deprecation_message(function):
"""
Decorator wrapper to warn about azurearm deprecation
"""
@wraps(function)
def wrapped(*args, **kwargs):
salt.utils.versions.warn_until(
"Chlorine",
"The 'azurearm' functionality in Salt has been deprecated and its "
"functionality will be removed in version 3007 in favor of the "
"saltext.azurerm Salt Extension. "
"(https://github.com/salt-extensions/saltext-azurerm)",
category=FutureWarning,
)
ret = function(*args, **salt.utils.args.clean_kwargs(**kwargs))
return ret
return wrapped
@_deprecation_message
def record_set_create_or_update(name, zone_name, resource_group, record_type, **kwargs):
"""
.. versionadded:: 3000
Creates or updates a record set within a DNS zone.
:param name: The name of the record set, relative to the name of the zone.
:param zone_name: The name of the DNS zone (without a terminating dot).
:param resource_group: The name of the resource group.
:param record_type:
The type of DNS record in this record set. Record sets of type SOA can be
updated but not created (they are created when the DNS zone is created).
Possible values include: 'A', 'AAAA', 'CAA', 'CNAME', 'MX', 'NS', 'PTR', 'SOA', 'SRV', 'TXT'
CLI Example:
.. code-block:: bash
salt-call azurearm_dns.record_set_create_or_update myhost myzone testgroup A
arecords='[{ipv4_address: 10.0.0.1}]' ttl=300
"""
dnsconn = __utils__["azurearm.get_client"]("dns", **kwargs)
try:
record_set_model = __utils__["azurearm.create_object_model"](
"dns", "RecordSet", **kwargs
)
except TypeError as exc:
result = {"error": "The object model could not be built. ({})".format(str(exc))}
return result
try:
record_set = dnsconn.record_sets.create_or_update(
relative_record_set_name=name,
zone_name=zone_name,
resource_group_name=resource_group,
record_type=record_type,
parameters=record_set_model,
if_match=kwargs.get("if_match"),
if_none_match=kwargs.get("if_none_match"),
)
result = record_set.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("dns", str(exc), **kwargs)
result = {"error": str(exc)}
except SerializationError as exc:
result = {
"error": "The object model could not be parsed. ({})".format(str(exc))
}
return result
@_deprecation_message
def record_set_delete(name, zone_name, resource_group, record_type, **kwargs):
"""
.. versionadded:: 3000
Deletes a record set from a DNS zone. This operation cannot be undone.
:param name: The name of the record set, relative to the name of the zone.
:param zone_name: The name of the DNS zone (without a terminating dot).
:param resource_group: The name of the resource group.
:param record_type:
The type of DNS record in this record set. Record sets of type SOA cannot be
deleted (they are deleted when the DNS zone is deleted).
Possible values include: 'A', 'AAAA', 'CAA', 'CNAME', 'MX', 'NS', 'PTR', 'SOA', 'SRV', 'TXT'
CLI Example:
.. code-block:: bash
salt-call azurearm_dns.record_set_delete myhost myzone testgroup A
"""
result = False
dnsconn = __utils__["azurearm.get_client"]("dns", **kwargs)
try:
record_set = dnsconn.record_sets.delete(
relative_record_set_name=name,
zone_name=zone_name,
resource_group_name=resource_group,
record_type=record_type,
if_match=kwargs.get("if_match"),
)
result = True
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("dns", str(exc), **kwargs)
return result
@_deprecation_message
def record_set_get(name, zone_name, resource_group, record_type, **kwargs):
"""
.. versionadded:: 3000
Get a dictionary representing a record set's properties.
:param name: The name of the record set, relative to the name of the zone.
:param zone_name: The name of the DNS zone (without a terminating dot).
:param resource_group: The name of the resource group.
:param record_type:
The type of DNS record in this record set.
Possible values include: 'A', 'AAAA', 'CAA', 'CNAME', 'MX', 'NS', 'PTR', 'SOA', 'SRV', 'TXT'
CLI Example:
.. code-block:: bash
salt-call azurearm_dns.record_set_get '@' myzone testgroup SOA
"""
dnsconn = __utils__["azurearm.get_client"]("dns", **kwargs)
try:
record_set = dnsconn.record_sets.get(
relative_record_set_name=name,
zone_name=zone_name,
resource_group_name=resource_group,
record_type=record_type,
)
result = record_set.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("dns", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def record_sets_list_by_type(
zone_name, resource_group, record_type, top=None, recordsetnamesuffix=None, **kwargs
):
"""
.. versionadded:: 3000
Lists the record sets of a specified type in a DNS zone.
:param zone_name: The name of the DNS zone (without a terminating dot).
:param resource_group: The name of the resource group.
:param record_type:
The type of record sets to enumerate.
Possible values include: 'A', 'AAAA', 'CAA', 'CNAME', 'MX', 'NS', 'PTR', 'SOA', 'SRV', 'TXT'
:param top:
The maximum number of record sets to return. If not specified,
returns up to 100 record sets.
:param recordsetnamesuffix:
The suffix label of the record set name that has
to be used to filter the record set enumerations.
CLI Example:
.. code-block:: bash
salt-call azurearm_dns.record_sets_list_by_type myzone testgroup SOA
"""
result = {}
dnsconn = __utils__["azurearm.get_client"]("dns", **kwargs)
try:
record_sets = __utils__["azurearm.paged_object_to_list"](
dnsconn.record_sets.list_by_type(
zone_name=zone_name,
resource_group_name=resource_group,
record_type=record_type,
top=top,
recordsetnamesuffix=recordsetnamesuffix,
)
)
for record_set in record_sets:
result[record_set["name"]] = record_set
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("dns", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def record_sets_list_by_dns_zone(
zone_name, resource_group, top=None, recordsetnamesuffix=None, **kwargs
):
"""
.. versionadded:: 3000
Lists all record sets in a DNS zone.
:param zone_name: The name of the DNS zone (without a terminating dot).
:param resource_group: The name of the resource group.
:param top:
The maximum number of record sets to return. If not specified,
returns up to 100 record sets.
:param recordsetnamesuffix:
The suffix label of the record set name that has
to be used to filter the record set enumerations.
CLI Example:
.. code-block:: bash
salt-call azurearm_dns.record_sets_list_by_dns_zone myzone testgroup
"""
result = {}
dnsconn = __utils__["azurearm.get_client"]("dns", **kwargs)
try:
record_sets = __utils__["azurearm.paged_object_to_list"](
dnsconn.record_sets.list_by_dns_zone(
zone_name=zone_name,
resource_group_name=resource_group,
top=top,
recordsetnamesuffix=recordsetnamesuffix,
)
)
for record_set in record_sets:
result[record_set["name"]] = record_set
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("dns", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def zone_create_or_update(name, resource_group, **kwargs):
"""
.. versionadded:: 3000
Creates or updates a DNS zone. Does not modify DNS records within the zone.
:param name: The name of the DNS zone to create (without a terminating dot).
:param resource_group: The name of the resource group.
CLI Example:
.. code-block:: bash
salt-call azurearm_dns.zone_create_or_update myzone testgroup
"""
# DNS zones are global objects
kwargs["location"] = "global"
dnsconn = __utils__["azurearm.get_client"]("dns", **kwargs)
# Convert list of ID strings to list of dictionaries with id key.
if isinstance(kwargs.get("registration_virtual_networks"), list):
kwargs["registration_virtual_networks"] = [
{"id": vnet} for vnet in kwargs["registration_virtual_networks"]
]
if isinstance(kwargs.get("resolution_virtual_networks"), list):
kwargs["resolution_virtual_networks"] = [
{"id": vnet} for vnet in kwargs["resolution_virtual_networks"]
]
try:
zone_model = __utils__["azurearm.create_object_model"]("dns", "Zone", **kwargs)
except TypeError as exc:
result = {"error": "The object model could not be built. ({})".format(str(exc))}
return result
try:
zone = dnsconn.zones.create_or_update(
zone_name=name,
resource_group_name=resource_group,
parameters=zone_model,
if_match=kwargs.get("if_match"),
if_none_match=kwargs.get("if_none_match"),
)
result = zone.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("dns", str(exc), **kwargs)
result = {"error": str(exc)}
except SerializationError as exc:
result = {
"error": "The object model could not be parsed. ({})".format(str(exc))
}
return result
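# Illustrative sketch (not part of the original module): the virtual network
# handling above turns plain resource ID strings into the {"id": ...} shape
# expected by the Zone object model. The resource ID in the comment is a
# made-up placeholder.
def _example_vnet_references(vnet_ids):
    """Convert a list of VNET resource ID strings into reference dicts."""
    # ["/subscriptions/.../virtualNetworks/vnet1"] -> [{"id": "/subscriptions/..."}]
    return [{"id": vnet} for vnet in vnet_ids]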
@_deprecation_message
def zone_delete(name, resource_group, **kwargs):
"""
.. versionadded:: 3000
Delete a DNS zone within a resource group.
:param name: The name of the DNS zone to delete.
:param resource_group: The name of the resource group.
CLI Example:
.. code-block:: bash
salt-call azurearm_dns.zone_delete myzone testgroup
"""
result = False
dnsconn = __utils__["azurearm.get_client"]("dns", **kwargs)
try:
zone = dnsconn.zones.delete(
zone_name=name,
resource_group_name=resource_group,
if_match=kwargs.get("if_match"),
)
zone.wait()
result = True
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("dns", str(exc), **kwargs)
return result
@_deprecation_message
def zone_get(name, resource_group, **kwargs):
"""
.. versionadded:: 3000
Get a dictionary representing a DNS zone's properties, but not the
record sets within the zone.
:param name: The DNS zone to get.
:param resource_group: The name of the resource group.
CLI Example:
.. code-block:: bash
salt-call azurearm_dns.zone_get myzone testgroup
"""
dnsconn = __utils__["azurearm.get_client"]("dns", **kwargs)
try:
zone = dnsconn.zones.get(zone_name=name, resource_group_name=resource_group)
result = zone.as_dict()
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("dns", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def zones_list_by_resource_group(resource_group, top=None, **kwargs):
"""
.. versionadded:: 3000
Lists the DNS zones in a resource group.
:param resource_group: The name of the resource group.
:param top:
The maximum number of DNS zones to return. If not specified,
returns up to 100 zones.
CLI Example:
.. code-block:: bash
salt-call azurearm_dns.zones_list_by_resource_group testgroup
"""
result = {}
dnsconn = __utils__["azurearm.get_client"]("dns", **kwargs)
try:
zones = __utils__["azurearm.paged_object_to_list"](
dnsconn.zones.list_by_resource_group(
resource_group_name=resource_group, top=top
)
)
for zone in zones:
result[zone["name"]] = zone
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("dns", str(exc), **kwargs)
result = {"error": str(exc)}
return result
@_deprecation_message
def zones_list(top=None, **kwargs):
"""
.. versionadded:: 3000
Lists the DNS zones in all resource groups in a subscription.
:param top:
The maximum number of DNS zones to return. If not specified,
returns up to 100 zones.
CLI Example:
.. code-block:: bash
salt-call azurearm_dns.zones_list
"""
result = {}
dnsconn = __utils__["azurearm.get_client"]("dns", **kwargs)
try:
zones = __utils__["azurearm.paged_object_to_list"](dnsconn.zones.list(top=top))
for zone in zones:
result[zone["name"]] = zone
except CloudError as exc:
__utils__["azurearm.log_cloud_error"]("dns", str(exc), **kwargs)
result = {"error": str(exc)}
return result
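# Illustrative sketch (not part of the original module): how another Salt
# module loaded alongside this one might consume zones_list. The connection
# keyword arguments are placeholders for a real subscription profile.
def _example_zone_names(**connection_profile):
    """Return a sorted list of zone names, or an empty list on error."""
    zones = zones_list(**connection_profile)
    if "error" in zones:
        return []
    return sorted(zones)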

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

View file

@@ -1,465 +0,0 @@
"""
Use Azure Blob as a Pillar source.
.. versionadded:: 3001
:maintainer: <devops@eitr.tech>
:maturity: new
:depends:
* `azure-storage-blob <https://pypi.org/project/azure-storage-blob/>`_ >= 12.0.0
The Azure Blob ext_pillar can be configured with the following parameters:
.. code-block:: yaml
ext_pillar:
- azureblob:
container: 'test_container'
connection_string: 'connection_string'
multiple_env: False
environment: 'base'
blob_cache_expire: 30
blob_sync_on_update: True
:param container: The name of the target Azure Blob Container.
:param connection_string: The connection string to use to access the specified Azure Blob Container.
:param multiple_env: Specifies whether the pillar should interpret top level folders as pillar environments.
Defaults to false.
:param environment: Specifies which environment the container represents when in single environment mode. Defaults
to 'base' and is ignored if multiple_env is set as True.
:param blob_cache_expire: Specifies expiration time of the Azure Blob metadata cache file. Defaults to 30s.
:param blob_sync_on_update: Specifies if the cache is synced on update. Defaults to True.
"""
import logging
import os
import pickle
import time
from copy import deepcopy
import salt.utils.files
import salt.utils.hashutils
from salt.pillar import Pillar
HAS_LIBS = False
try:
# pylint: disable=no-name-in-module
from azure.storage.blob import BlobServiceClient
# pylint: enable=no-name-in-module
HAS_LIBS = True
except ImportError:
pass
__virtualname__ = "azureblob"
# Set up logging
log = logging.getLogger(__name__)
def __virtual__():
if not HAS_LIBS:
return (
False,
"The following dependency is required to use the Azure Blob ext_pillar: "
"Microsoft Azure Storage Blob >= 12.0.0 ",
)
return __virtualname__
def ext_pillar(
minion_id,
pillar, # pylint: disable=W0613
container,
connection_string,
multiple_env=False,
environment="base",
blob_cache_expire=30,
blob_sync_on_update=True,
):
"""
Read pillar data from the configured Azure Blob container and compile it for the minion.
:param container: The name of the target Azure Blob Container.
:param connection_string: The connection string to use to access the specified Azure Blob Container.
:param multiple_env: Specifies whether the pillar should interpret top level folders as pillar environments.
Defaults to false.
:param environment: Specifies which environment the container represents when in single environment mode. Defaults
to 'base' and is ignored if multiple_env is set as True.
:param blob_cache_expire: Specifies expiration time of the Azure Blob metadata cache file. Defaults to 30s.
:param blob_sync_on_update: Specifies if the cache is synced on update. Defaults to True.
"""
# normpath is needed to remove appended '/' if root is empty string.
pillar_dir = os.path.normpath(
os.path.join(_get_cache_dir(), environment, container)
)
if __opts__["pillar_roots"].get(environment, []) == [pillar_dir]:
return {}
metadata = _init(
connection_string, container, multiple_env, environment, blob_cache_expire
)
log.debug("Blob metadata: %s", metadata)
if blob_sync_on_update:
# sync the containers to the local cache
log.info("Syncing local pillar cache from Azure Blob...")
for saltenv, env_meta in metadata.items():
for container, files in _find_files(env_meta).items():
for file_path in files:
cached_file_path = _get_cached_file_name(
container, saltenv, file_path
)
log.info("%s - %s : %s", container, saltenv, file_path)
# load the file from Azure Blob if not in the cache or too old
_get_file_from_blob(
connection_string,
metadata,
saltenv,
container,
file_path,
cached_file_path,
)
log.info("Sync local pillar cache from Azure Blob completed.")
opts = deepcopy(__opts__)
opts["pillar_roots"][environment] = (
[os.path.join(pillar_dir, environment)] if multiple_env else [pillar_dir]
)
# Avoid recursively re-adding this same pillar
opts["ext_pillar"] = [x for x in opts["ext_pillar"] if "azureblob" not in x]
pil = Pillar(opts, __grains__, minion_id, environment)
compiled_pillar = pil.compile_pillar(ext=False)
return compiled_pillar
def _init(connection_string, container, multiple_env, environment, blob_cache_expire):
"""
.. versionadded:: 3001
Connect to Blob Storage and download the metadata for each file in all containers specified and
cache the data to disk.
:param connection_string: The connection string to use to access the specified Azure Blob Container.
:param container: The name of the target Azure Blob Container.
:param multiple_env: Specifies whether the pillar should interpret top level folders as pillar environments.
Defaults to false.
:param environment: Specifies which environment the container represents when in single environment mode. Defaults
to 'base' and is ignored if multiple_env is set as True.
:param blob_cache_expire: Specifies expiration time of the Azure Blob metadata cache file. Defaults to 30s.
"""
cache_file = _get_containers_cache_filename(container)
exp = time.time() - blob_cache_expire
# Check if cache_file exists and its mtime
if os.path.isfile(cache_file):
cache_file_mtime = os.path.getmtime(cache_file)
else:
# If the file does not exist then set mtime to 0 (aka epoch)
cache_file_mtime = 0
expired = cache_file_mtime <= exp
log.debug(
"Blob storage container cache file %s is %sexpired, mtime_diff=%ss,"
" expiration=%ss",
cache_file,
"" if expired else "not ",
cache_file_mtime - exp,
blob_cache_expire,
)
if expired:
pillars = _refresh_containers_cache_file(
connection_string, container, cache_file, multiple_env, environment
)
else:
pillars = _read_containers_cache_file(cache_file)
log.debug("Blob container retrieved pillars %s", pillars)
return pillars
def _get_cache_dir():
"""
.. versionadded:: 3001
Get pillar cache directory. Initialize it if it does not exist.
"""
cache_dir = os.path.join(__opts__["cachedir"], "pillar_azureblob")
if not os.path.isdir(cache_dir):
log.debug("Initializing Azure Blob Pillar Cache")
os.makedirs(cache_dir)
return cache_dir
def _get_cached_file_name(container, saltenv, path):
"""
.. versionadded:: 3001
Return the cached file name for a container path file.
:param container: The name of the target Azure Blob Container.
:param saltenv: Specifies which environment the container represents.
:param path: The path of the file in the container.
"""
file_path = os.path.join(_get_cache_dir(), saltenv, container, path)
# make sure container and saltenv directories exist
if not os.path.exists(os.path.dirname(file_path)):
os.makedirs(os.path.dirname(file_path))
return file_path
def _get_containers_cache_filename(container):
"""
.. versionadded:: 3001
Return the filename of the cache for container contents. Create the path if it does not exist.
:param container: The name of the target Azure Blob Container.
"""
cache_dir = _get_cache_dir()
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
return os.path.join(cache_dir, "{}-files.cache".format(container))
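# Illustrative sketch (not part of the original module): taken together, the
# path helpers above produce a cache layout like the one resolved below. The
# container name, saltenv, and blob path are placeholders; calling
# _get_cached_file_name also creates the blob's parent directories.
def _example_cache_paths():
    """Resolve the metadata cache file and one cached blob path."""
    cache_file = _get_containers_cache_filename("test_container")
    blob_path = _get_cached_file_name("test_container", "base", "app/init.sls")
    # e.g. <cachedir>/pillar_azureblob/test_container-files.cache and
    #      <cachedir>/pillar_azureblob/base/test_container/app/init.sls
    return cache_file, blob_path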
def _refresh_containers_cache_file(
connection_string, container, cache_file, multiple_env=False, environment="base"
):
"""
.. versionadded:: 3001
Walks the specified Azure storage container, collects blob metadata, and caches it to the local filesystem.
:param connection_string: The connection string to use to access the specified Azure Blob Container.
:param container: The name of the target Azure Blob Container.
:param cache_file: The path of where the file will be cached.
:param multiple_env: Specifies whether the pillar should interpret top level folders as pillar environments.
:param environment: Specifies which environment the container represents when in single environment mode. This is
ignored if multiple_env is set as True.
"""
try:
# Create the BlobServiceClient object which will be used to create a container client
blob_service_client = BlobServiceClient.from_connection_string(
connection_string
)
# Create the ContainerClient object
container_client = blob_service_client.get_container_client(container)
except Exception as exc: # pylint: disable=broad-except
log.error("Exception: %s", exc)
return False
metadata = {}
def _walk_blobs(saltenv="base", prefix=None):
# Walk the blobs in the container with a generator
blob_list = container_client.walk_blobs(name_starts_with=prefix)
# Iterate over the generator
while True:
try:
blob = next(blob_list)
except StopIteration:
break
log.debug("Raw blob attributes: %s", blob)
# Directories end with "/".
if blob.name.endswith("/"):
                # Recurse into the directory, keeping the current saltenv
                _walk_blobs(saltenv=saltenv, prefix=blob.name)
continue
if multiple_env:
saltenv = "base" if (not prefix or prefix == ".") else prefix[:-1]
if saltenv not in metadata:
metadata[saltenv] = {}
if container not in metadata[saltenv]:
metadata[saltenv][container] = []
metadata[saltenv][container].append(blob)
_walk_blobs(saltenv=environment)
# write the metadata to disk
if os.path.isfile(cache_file):
os.remove(cache_file)
log.debug("Writing Azure blobs pillar cache file")
with salt.utils.files.fopen(cache_file, "wb") as fp_:
pickle.dump(metadata, fp_)
return metadata
def _read_containers_cache_file(cache_file):
"""
.. versionadded:: 3001
Return the contents of the containers cache file.
:param cache_file: The path for where the file will be cached.
"""
log.debug("Reading containers cache file")
with salt.utils.files.fopen(cache_file, "rb") as fp_:
data = pickle.load(fp_)
return data
def _find_files(metadata):
"""
.. versionadded:: 3001
Looks for all the files in the Azure Blob container cache metadata.
:param metadata: The metadata for the container files.
"""
ret = {}
for container, data in metadata.items():
if container not in ret:
ret[container] = []
# grab the paths from the metadata
file_paths = [k["name"] for k in data]
# filter out the dirs
ret[container] += [k for k in file_paths if not k.endswith("/")]
return ret
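# Illustrative sketch (not part of the original module): a worked example of
# the shape _find_files works with, using made-up blob names. Each metadata
# value is a list of dict-like blob entries carrying at least a "name" key;
# entries whose names end in "/" are directory markers and are filtered out.
def _example_find_files():
    """Return True if _find_files flattens the sample metadata as expected."""
    metadata = {
        "test_container": [
            {"name": "top.sls"},
            {"name": "app/"},  # directory marker, dropped
            {"name": "app/init.sls"},
        ]
    }
    return _find_files(metadata) == {"test_container": ["top.sls", "app/init.sls"]}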
def _find_file_meta(metadata, container, saltenv, path):
"""
.. versionadded:: 3001
Looks for a file's metadata in the Azure Blob Container cache file.
:param metadata: The metadata for the container files.
:param container: The name of the target Azure Blob Container.
:param saltenv: Specifies which environment the container represents.
:param path: The path of the file in the container.
"""
env_meta = metadata[saltenv] if saltenv in metadata else {}
container_meta = env_meta[container] if container in env_meta else {}
for item_meta in container_meta:
item_meta = dict(item_meta)
if "name" in item_meta and item_meta["name"] == path:
return item_meta
def _get_file_from_blob(
connection_string, metadata, saltenv, container, path, cached_file_path
):
"""
.. versionadded:: 3001
Downloads a single blob from the Azure storage container to the local file cache, skipping the download when the cached copy is already current.
:param connection_string: The connection string to use to access the specified Azure Blob Container.
:param metadata: The metadata for the container files.
:param saltenv: Specifies which environment the container represents when in single environment mode. This is
ignored if multiple_env is set as True.
:param container: The name of the target Azure Blob Container.
:param path: The path of the file in the container.
:param cached_file_path: The path of where the file will be cached.
"""
# check the local cache...
if os.path.isfile(cached_file_path):
file_meta = _find_file_meta(metadata, container, saltenv, path)
file_md5 = (
"".join(list(filter(str.isalnum, file_meta["etag"]))) if file_meta else None
)
cached_md5 = salt.utils.hashutils.get_hash(cached_file_path, "md5")
# hashes match we have a cache hit
log.debug(
"Cached file: path=%s, md5=%s, etag=%s",
cached_file_path,
cached_md5,
file_md5,
)
if cached_md5 == file_md5:
return
try:
# Create the BlobServiceClient object which will be used to create a container client
blob_service_client = BlobServiceClient.from_connection_string(
connection_string
)
# Create the ContainerClient object
container_client = blob_service_client.get_container_client(container)
# Create the BlobClient object
blob_client = container_client.get_blob_client(path)
except Exception as exc: # pylint: disable=broad-except
log.error("Exception: %s", exc)
return False
with salt.utils.files.fopen(cached_file_path, "wb") as outfile:
outfile.write(blob_client.download_blob().readall())
return
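# Illustrative sketch (not part of the original module): the cache-hit check
# above strips every non-alphanumeric character (quotes, dashes) from the
# blob's etag before comparing it to the MD5 of the cached file. The default
# etag value below is a made-up placeholder.
def _example_normalize_etag(etag='"4a7d1ed414474e4033ac29ccb8653d9b"'):
    """Return the etag with quoting and punctuation stripped."""
    return "".join(filter(str.isalnum, etag))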

View file

@@ -1,362 +0,0 @@
"""
Azure (ARM) Compute State Module
.. versionadded:: 2019.2.0
.. warning::
This cloud provider will be removed from Salt in version 3007 in favor of
the `saltext.azurerm Salt Extension
<https://github.com/salt-extensions/saltext-azurerm>`_
:maintainer: <devops@eitr.tech>
:maturity: new
:depends:
* `azure <https://pypi.python.org/pypi/azure>`_ >= 2.0.0
* `azure-common <https://pypi.python.org/pypi/azure-common>`_ >= 1.1.8
* `azure-mgmt <https://pypi.python.org/pypi/azure-mgmt>`_ >= 1.0.0
* `azure-mgmt-compute <https://pypi.python.org/pypi/azure-mgmt-compute>`_ >= 1.0.0
* `azure-mgmt-network <https://pypi.python.org/pypi/azure-mgmt-network>`_ >= 1.7.1
* `azure-mgmt-resource <https://pypi.python.org/pypi/azure-mgmt-resource>`_ >= 1.1.0
* `azure-mgmt-storage <https://pypi.python.org/pypi/azure-mgmt-storage>`_ >= 1.0.0
* `azure-mgmt-web <https://pypi.python.org/pypi/azure-mgmt-web>`_ >= 0.32.0
* `azure-storage <https://pypi.python.org/pypi/azure-storage>`_ >= 0.34.3
* `msrestazure <https://pypi.python.org/pypi/msrestazure>`_ >= 0.4.21
:platform: linux
:configuration: This module requires Azure Resource Manager credentials to be passed as a dictionary of
keyword arguments to the ``connection_auth`` parameter in order to work properly. Since the authentication
parameters are sensitive, it's recommended to pass them to the states via pillar.
Required provider parameters:
if using username and password:
* ``subscription_id``
* ``username``
* ``password``
if using a service principal:
* ``subscription_id``
* ``tenant``
* ``client_id``
* ``secret``
Optional provider parameters:
**cloud_environment**: Used to point the cloud driver to different API endpoints, such as Azure GovCloud. Possible values:
* ``AZURE_PUBLIC_CLOUD`` (default)
* ``AZURE_CHINA_CLOUD``
* ``AZURE_US_GOV_CLOUD``
* ``AZURE_GERMAN_CLOUD``
Example Pillar for Azure Resource Manager authentication:
.. code-block:: yaml
azurearm:
user_pass_auth:
subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617
username: fletch
password: 123pass
mysubscription:
subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617
tenant: ABCDEFAB-1234-ABCD-1234-ABCDEFABCDEF
client_id: ABCDEFAB-1234-ABCD-1234-ABCDEFABCDEF
secret: XXXXXXXXXXXXXXXXXXXXXXXX
cloud_environment: AZURE_PUBLIC_CLOUD
Example states using Azure Resource Manager authentication:
.. code-block:: jinja
{% set profile = salt['pillar.get']('azurearm:mysubscription') %}
Ensure availability set exists:
azurearm_compute.availability_set_present:
- name: my_avail_set
- resource_group: my_rg
- virtual_machines:
- my_vm1
- my_vm2
- tags:
how_awesome: very
contact_name: Elmer Fudd Gantry
- connection_auth: {{ profile }}
Ensure availability set is absent:
azurearm_compute.availability_set_absent:
- name: other_avail_set
- resource_group: my_rg
- connection_auth: {{ profile }}
"""
# Python libs
import logging
from functools import wraps
import salt.utils.args
import salt.utils.azurearm
import salt.utils.versions
__virtualname__ = "azurearm_compute"
log = logging.getLogger(__name__)
def __virtual__():
"""
Only make this state available if the azurearm_compute module is available.
"""
if "azurearm_compute.availability_set_create_or_update" in __salt__:
return __virtualname__
return (False, "azurearm module could not be loaded")
def _deprecation_message(function):
"""
Decorator wrapper to warn about azurearm deprecation
"""
@wraps(function)
def wrapped(*args, **kwargs):
salt.utils.versions.warn_until(
"Chlorine",
"The 'azurearm' functionality in Salt has been deprecated and its "
"functionality will be removed in version 3007 in favor of the "
"saltext.azurerm Salt Extension. "
"(https://github.com/salt-extensions/saltext-azurerm)",
category=FutureWarning,
)
ret = function(*args, **salt.utils.args.clean_kwargs(**kwargs))
return ret
return wrapped
@_deprecation_message
def availability_set_present(
name,
resource_group,
tags=None,
platform_update_domain_count=None,
platform_fault_domain_count=None,
virtual_machines=None,
sku=None,
connection_auth=None,
**kwargs
):
"""
.. versionadded:: 2019.2.0
Ensure an availability set exists.
:param name:
Name of the availability set.
:param resource_group:
The resource group assigned to the availability set.
:param tags:
A dictionary of strings can be passed as tag metadata to the availability set object.
:param platform_update_domain_count:
An optional parameter which indicates groups of virtual machines and underlying physical hardware that can be
rebooted at the same time.
:param platform_fault_domain_count:
An optional parameter which defines the group of virtual machines that share a common power source and network
switch.
:param virtual_machines:
A list of names of existing virtual machines to be included in the availability set.
:param sku:
The availability set SKU, which specifies whether the availability set is managed or not. Possible values are
'Aligned' or 'Classic'. An 'Aligned' availability set is managed, 'Classic' is not.
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
Example usage:
.. code-block:: yaml
Ensure availability set exists:
azurearm_compute.availability_set_present:
- name: aset1
- resource_group: group1
- platform_update_domain_count: 5
- platform_fault_domain_count: 3
- sku: aligned
- tags:
contact_name: Elmer Fudd Gantry
- connection_auth: {{ profile }}
- require:
- azurearm_resource: Ensure resource group exists
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
if sku:
sku = {"name": sku.capitalize()}
aset = __salt__["azurearm_compute.availability_set_get"](
name, resource_group, azurearm_log_level="info", **connection_auth
)
if "error" not in aset:
tag_changes = __utils__["dictdiffer.deep_diff"](
aset.get("tags", {}), tags or {}
)
if tag_changes:
ret["changes"]["tags"] = tag_changes
if platform_update_domain_count and (
int(platform_update_domain_count)
!= aset.get("platform_update_domain_count")
):
ret["changes"]["platform_update_domain_count"] = {
"old": aset.get("platform_update_domain_count"),
"new": platform_update_domain_count,
}
if platform_fault_domain_count and (
int(platform_fault_domain_count) != aset.get("platform_fault_domain_count")
):
ret["changes"]["platform_fault_domain_count"] = {
"old": aset.get("platform_fault_domain_count"),
"new": platform_fault_domain_count,
}
if sku and (sku["name"] != aset.get("sku", {}).get("name")):
ret["changes"]["sku"] = {"old": aset.get("sku"), "new": sku}
if virtual_machines:
if not isinstance(virtual_machines, list):
ret["comment"] = "Virtual machines must be supplied as a list!"
return ret
aset_vms = aset.get("virtual_machines", [])
            remote_vms = sorted(
                vm["id"].split("/")[-1].lower() for vm in aset_vms if "id" in vm
            )
local_vms = sorted(vm.lower() for vm in virtual_machines or [])
if local_vms != remote_vms:
ret["changes"]["virtual_machines"] = {
"old": aset_vms,
"new": virtual_machines,
}
if not ret["changes"]:
ret["result"] = True
ret["comment"] = "Availability set {} is already present.".format(name)
return ret
if __opts__["test"]:
ret["result"] = None
ret["comment"] = "Availability set {} would be updated.".format(name)
return ret
else:
ret["changes"] = {
"old": {},
"new": {
"name": name,
"virtual_machines": virtual_machines,
"platform_update_domain_count": platform_update_domain_count,
"platform_fault_domain_count": platform_fault_domain_count,
"sku": sku,
"tags": tags,
},
}
if __opts__["test"]:
ret["comment"] = "Availability set {} would be created.".format(name)
ret["result"] = None
return ret
aset_kwargs = kwargs.copy()
aset_kwargs.update(connection_auth)
aset = __salt__["azurearm_compute.availability_set_create_or_update"](
name=name,
resource_group=resource_group,
virtual_machines=virtual_machines,
platform_update_domain_count=platform_update_domain_count,
platform_fault_domain_count=platform_fault_domain_count,
sku=sku,
tags=tags,
**aset_kwargs
)
if "error" not in aset:
ret["result"] = True
ret["comment"] = "Availability set {} has been created.".format(name)
return ret
ret["comment"] = "Failed to create availability set {}! ({})".format(
name, aset.get("error")
)
return ret
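# Illustrative sketch (not part of the original state module): the virtual
# machine comparison above reduces the full resource IDs returned by the API
# to bare, lowercased VM names before comparing them with the state's list.
# The resource ID shown in the comment is a placeholder.
def _example_vm_name_from_id(resource_id):
    """Return the trailing segment of a resource ID, lowercased."""
    # ".../providers/Microsoft.Compute/virtualMachines/MyVM1" -> "myvm1"
    return resource_id.split("/")[-1].lower()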
@_deprecation_message
def availability_set_absent(name, resource_group, connection_auth=None):
"""
.. versionadded:: 2019.2.0
Ensure an availability set does not exist in a resource group.
:param name:
Name of the availability set.
:param resource_group:
Name of the resource group containing the availability set.
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
aset = __salt__["azurearm_compute.availability_set_get"](
name, resource_group, azurearm_log_level="info", **connection_auth
)
if "error" in aset:
ret["result"] = True
ret["comment"] = "Availability set {} was not found.".format(name)
return ret
elif __opts__["test"]:
ret["comment"] = "Availability set {} would be deleted.".format(name)
ret["result"] = None
ret["changes"] = {
"old": aset,
"new": {},
}
return ret
deleted = __salt__["azurearm_compute.availability_set_delete"](
name, resource_group, **connection_auth
)
if deleted:
ret["result"] = True
ret["comment"] = "Availability set {} has been deleted.".format(name)
ret["changes"] = {"old": aset, "new": {}}
return ret
ret["comment"] = "Failed to delete availability set {}!".format(name)
return ret

View file

@@ -1,762 +0,0 @@
"""
Azure (ARM) DNS State Module
.. versionadded:: 3000
.. warning::
This cloud provider will be removed from Salt in version 3007 in favor of
the `saltext.azurerm Salt Extension
<https://github.com/salt-extensions/saltext-azurerm>`_
:maintainer: <devops@eitr.tech>
:maturity: new
:depends:
* `azure <https://pypi.python.org/pypi/azure>`_ >= 2.0.0
* `azure-common <https://pypi.python.org/pypi/azure-common>`_ >= 1.1.8
* `azure-mgmt <https://pypi.python.org/pypi/azure-mgmt>`_ >= 1.0.0
* `azure-mgmt-compute <https://pypi.python.org/pypi/azure-mgmt-compute>`_ >= 1.0.0
* `azure-mgmt-dns <https://pypi.python.org/pypi/azure-mgmt-dns>`_ >= 1.0.1
* `azure-mgmt-network <https://pypi.python.org/pypi/azure-mgmt-network>`_ >= 1.7.1
* `azure-mgmt-resource <https://pypi.python.org/pypi/azure-mgmt-resource>`_ >= 1.1.0
* `azure-mgmt-storage <https://pypi.python.org/pypi/azure-mgmt-storage>`_ >= 1.0.0
* `azure-mgmt-web <https://pypi.python.org/pypi/azure-mgmt-web>`_ >= 0.32.0
* `azure-storage <https://pypi.python.org/pypi/azure-storage>`_ >= 0.34.3
* `msrestazure <https://pypi.python.org/pypi/msrestazure>`_ >= 0.4.21
:platform: linux
:configuration:
This module requires Azure Resource Manager credentials to be passed as a dictionary of
keyword arguments to the ``connection_auth`` parameter in order to work properly. Since the authentication
parameters are sensitive, it's recommended to pass them to the states via pillar.
Required provider parameters:
if using username and password:
* ``subscription_id``
* ``username``
* ``password``
if using a service principal:
* ``subscription_id``
* ``tenant``
* ``client_id``
* ``secret``
Optional provider parameters:
**cloud_environment**: Used to point the cloud driver to different API endpoints, such as Azure GovCloud. Possible values:
* ``AZURE_PUBLIC_CLOUD`` (default)
* ``AZURE_CHINA_CLOUD``
* ``AZURE_US_GOV_CLOUD``
* ``AZURE_GERMAN_CLOUD``
Example Pillar for Azure Resource Manager authentication:
.. code-block:: yaml
azurearm:
user_pass_auth:
subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617
username: fletch
password: 123pass
mysubscription:
subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617
tenant: ABCDEFAB-1234-ABCD-1234-ABCDEFABCDEF
client_id: ABCDEFAB-1234-ABCD-1234-ABCDEFABCDEF
secret: XXXXXXXXXXXXXXXXXXXXXXXX
cloud_environment: AZURE_PUBLIC_CLOUD
Example states using Azure Resource Manager authentication:
.. code-block:: jinja
{% set profile = salt['pillar.get']('azurearm:mysubscription') %}
Ensure DNS zone exists:
azurearm_dns.zone_present:
- name: contoso.com
- resource_group: my_rg
- tags:
how_awesome: very
contact_name: Elmer Fudd Gantry
- connection_auth: {{ profile }}
Ensure DNS record set exists:
azurearm_dns.record_set_present:
- name: web
- zone_name: contoso.com
- resource_group: my_rg
- record_type: A
- ttl: 300
- arecords:
- ipv4_address: 10.0.0.1
- tags:
how_awesome: very
contact_name: Elmer Fudd Gantry
- connection_auth: {{ profile }}
Ensure DNS record set is absent:
azurearm_dns.record_set_absent:
- name: web
- zone_name: contoso.com
- resource_group: my_rg
- record_type: A
- connection_auth: {{ profile }}
Ensure DNS zone is absent:
azurearm_dns.zone_absent:
- name: contoso.com
- resource_group: my_rg
- connection_auth: {{ profile }}
"""
import logging
from functools import wraps
import salt.utils.args
import salt.utils.azurearm
import salt.utils.versions
__virtualname__ = "azurearm_dns"
log = logging.getLogger(__name__)
def __virtual__():
"""
Only make this state available if the azurearm_dns module is available.
"""
if "azurearm_dns.zones_list_by_resource_group" in __salt__:
return __virtualname__
return (False, "azurearm_dns module could not be loaded")
def _deprecation_message(function):
"""
Decorator wrapper to warn about azurearm deprecation
"""
@wraps(function)
def wrapped(*args, **kwargs):
salt.utils.versions.warn_until(
"Chlorine",
"The 'azurearm' functionality in Salt has been deprecated and its "
"functionality will be removed in version 3007 in favor of the "
"saltext.azurerm Salt Extension. "
"(https://github.com/salt-extensions/saltext-azurerm)",
category=FutureWarning,
)
ret = function(*args, **salt.utils.args.clean_kwargs(**kwargs))
return ret
return wrapped
@_deprecation_message
def zone_present(
name,
resource_group,
etag=None,
if_match=None,
if_none_match=None,
registration_virtual_networks=None,
resolution_virtual_networks=None,
tags=None,
zone_type="Public",
connection_auth=None,
**kwargs
):
"""
.. versionadded:: 3000
Ensure a DNS zone exists.
:param name:
Name of the DNS zone (without a terminating dot).
:param resource_group:
The resource group assigned to the DNS zone.
:param etag:
The etag of the zone. `Etags <https://docs.microsoft.com/en-us/azure/dns/dns-zones-records#etags>`_ are used
to handle concurrent changes to the same resource safely.
:param if_match:
The etag of the DNS zone. Omit this value to always overwrite the current zone. Specify the last-seen etag
value to prevent accidentally overwriting any concurrent changes.
:param if_none_match:
Set to '*' to allow a new DNS zone to be created, but to prevent updating an existing zone. Other values will
be ignored.
:param registration_virtual_networks:
A list of references to virtual networks that register hostnames in this DNS zone. This is only when zone_type
is Private. (requires `azure-mgmt-dns <https://pypi.python.org/pypi/azure-mgmt-dns>`_ >= 2.0.0rc1)
:param resolution_virtual_networks:
A list of references to virtual networks that resolve records in this DNS zone. This is only when zone_type is
Private. (requires `azure-mgmt-dns <https://pypi.python.org/pypi/azure-mgmt-dns>`_ >= 2.0.0rc1)
:param tags:
A dictionary of strings can be passed as tag metadata to the DNS zone object.
:param zone_type:
The type of this DNS zone (Public or Private). Possible values include: 'Public', 'Private'. Default value: 'Public'
(requires `azure-mgmt-dns <https://pypi.python.org/pypi/azure-mgmt-dns>`_ >= 2.0.0rc1)
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
Example usage:
.. code-block:: yaml
Ensure DNS zone exists:
azurearm_dns.zone_present:
- name: contoso.com
- resource_group: my_rg
- zone_type: Private
- registration_virtual_networks:
- /subscriptions/{{ sub }}/resourceGroups/my_rg/providers/Microsoft.Network/virtualNetworks/test_vnet
- tags:
how_awesome: very
contact_name: Elmer Fudd Gantry
- connection_auth: {{ profile }}
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
zone = __salt__["azurearm_dns.zone_get"](
name, resource_group, azurearm_log_level="info", **connection_auth
)
if "error" not in zone:
tag_changes = __utils__["dictdiffer.deep_diff"](
zone.get("tags", {}), tags or {}
)
if tag_changes:
ret["changes"]["tags"] = tag_changes
# The zone_type parameter is only accessible in azure-mgmt-dns >=2.0.0rc1
if zone.get("zone_type"):
if zone.get("zone_type").lower() != zone_type.lower():
ret["changes"]["zone_type"] = {
"old": zone["zone_type"],
"new": zone_type,
}
if zone_type.lower() == "private":
# The registration_virtual_networks parameter is only accessible in azure-mgmt-dns >=2.0.0rc1
if registration_virtual_networks and not isinstance(
registration_virtual_networks, list
):
ret["comment"] = (
"registration_virtual_networks must be supplied as a list of"
" VNET ID paths!"
)
return ret
reg_vnets = zone.get("registration_virtual_networks", [])
remote_reg_vnets = sorted(
vnet["id"].lower() for vnet in reg_vnets if "id" in vnet
)
local_reg_vnets = sorted(
vnet.lower() for vnet in registration_virtual_networks or []
)
if local_reg_vnets != remote_reg_vnets:
ret["changes"]["registration_virtual_networks"] = {
"old": remote_reg_vnets,
"new": local_reg_vnets,
}
# The resolution_virtual_networks parameter is only accessible in azure-mgmt-dns >=2.0.0rc1
if resolution_virtual_networks and not isinstance(
resolution_virtual_networks, list
):
ret["comment"] = (
"resolution_virtual_networks must be supplied as a list of VNET"
" ID paths!"
)
return ret
res_vnets = zone.get("resolution_virtual_networks", [])
remote_res_vnets = sorted(
vnet["id"].lower() for vnet in res_vnets if "id" in vnet
)
local_res_vnets = sorted(
vnet.lower() for vnet in resolution_virtual_networks or []
)
if local_res_vnets != remote_res_vnets:
ret["changes"]["resolution_virtual_networks"] = {
"old": remote_res_vnets,
"new": local_res_vnets,
}
if not ret["changes"]:
ret["result"] = True
ret["comment"] = "DNS zone {} is already present.".format(name)
return ret
if __opts__["test"]:
ret["result"] = None
ret["comment"] = "DNS zone {} would be updated.".format(name)
return ret
else:
ret["changes"] = {
"old": {},
"new": {
"name": name,
"resource_group": resource_group,
"etag": etag,
"registration_virtual_networks": registration_virtual_networks,
"resolution_virtual_networks": resolution_virtual_networks,
"tags": tags,
"zone_type": zone_type,
},
}
if __opts__["test"]:
ret["comment"] = "DNS zone {} would be created.".format(name)
ret["result"] = None
return ret
zone_kwargs = kwargs.copy()
zone_kwargs.update(connection_auth)
zone = __salt__["azurearm_dns.zone_create_or_update"](
name=name,
resource_group=resource_group,
etag=etag,
if_match=if_match,
if_none_match=if_none_match,
registration_virtual_networks=registration_virtual_networks,
resolution_virtual_networks=resolution_virtual_networks,
tags=tags,
zone_type=zone_type,
**zone_kwargs
)
if "error" not in zone:
ret["result"] = True
ret["comment"] = "DNS zone {} has been created.".format(name)
return ret
ret["comment"] = "Failed to create DNS zone {}! ({})".format(
name, zone.get("error")
)
return ret
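# Illustrative sketch (not part of the original state module): the
# private-zone handling above compares virtual network references by sorting
# the lowercased resource IDs from both sides. A condensed version of that
# check, with hypothetical inputs:
def _example_vnets_differ(desired_ids, remote_refs):
    """Return True when the desired VNET IDs differ from the remote references."""
    remote = sorted(ref["id"].lower() for ref in remote_refs if "id" in ref)
    desired = sorted(vnet.lower() for vnet in desired_ids or [])
    return desired != remote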
@_deprecation_message
def zone_absent(name, resource_group, connection_auth=None):
"""
.. versionadded:: 3000
Ensure a DNS zone does not exist in the resource group.
:param name:
Name of the DNS zone.
:param resource_group:
The resource group assigned to the DNS zone.
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
zone = __salt__["azurearm_dns.zone_get"](
name, resource_group, azurearm_log_level="info", **connection_auth
)
if "error" in zone:
ret["result"] = True
ret["comment"] = "DNS zone {} was not found.".format(name)
return ret
elif __opts__["test"]:
ret["comment"] = "DNS zone {} would be deleted.".format(name)
ret["result"] = None
ret["changes"] = {
"old": zone,
"new": {},
}
return ret
deleted = __salt__["azurearm_dns.zone_delete"](
name, resource_group, **connection_auth
)
if deleted:
ret["result"] = True
ret["comment"] = "DNS zone {} has been deleted.".format(name)
ret["changes"] = {"old": zone, "new": {}}
return ret
ret["comment"] = "Failed to delete DNS zone {}!".format(name)
return ret
@_deprecation_message
def record_set_present(
name,
zone_name,
resource_group,
record_type,
if_match=None,
if_none_match=None,
etag=None,
metadata=None,
ttl=None,
arecords=None,
aaaa_records=None,
mx_records=None,
ns_records=None,
ptr_records=None,
srv_records=None,
txt_records=None,
cname_record=None,
soa_record=None,
caa_records=None,
connection_auth=None,
**kwargs
):
"""
.. versionadded:: 3000
Ensure a record set exists in a DNS zone.
:param name:
The name of the record set, relative to the name of the zone.
:param zone_name:
Name of the DNS zone (without a terminating dot).
:param resource_group:
The resource group assigned to the DNS zone.
:param record_type:
The type of DNS record in this record set. Record sets of type SOA can be updated but not created
(they are created when the DNS zone is created). Possible values include: 'A', 'AAAA', 'CAA', 'CNAME',
'MX', 'NS', 'PTR', 'SOA', 'SRV', 'TXT'
:param if_match:
The etag of the record set. Omit this value to always overwrite the current record set. Specify the last-seen
etag value to prevent accidentally overwriting any concurrent changes.
:param if_none_match:
Set to '*' to allow a new record set to be created, but to prevent updating an existing record set. Other values
will be ignored.
:param etag:
The etag of the record set. `Etags <https://docs.microsoft.com/en-us/azure/dns/dns-zones-records#etags>`__ are
used to handle concurrent changes to the same resource safely.
:param metadata:
A dictionary of strings can be passed as tag metadata to the record set object.
:param ttl:
The TTL (time-to-live) of the records in the record set. Required when specifying record information.
:param arecords:
The list of A records in the record set. View the
`Azure SDK documentation <https://docs.microsoft.com/en-us/python/api/azure.mgmt.dns.models.arecord?view=azure-python>`__
to create a list of dictionaries representing the record objects.
:param aaaa_records:
The list of AAAA records in the record set. View the
`Azure SDK documentation <https://docs.microsoft.com/en-us/python/api/azure.mgmt.dns.models.aaaarecord?view=azure-python>`__
to create a list of dictionaries representing the record objects.
:param mx_records:
The list of MX records in the record set. View the
`Azure SDK documentation <https://docs.microsoft.com/en-us/python/api/azure.mgmt.dns.models.mxrecord?view=azure-python>`__
to create a list of dictionaries representing the record objects.
:param ns_records:
The list of NS records in the record set. View the
`Azure SDK documentation <https://docs.microsoft.com/en-us/python/api/azure.mgmt.dns.models.nsrecord?view=azure-python>`__
to create a list of dictionaries representing the record objects.
:param ptr_records:
The list of PTR records in the record set. View the
`Azure SDK documentation <https://docs.microsoft.com/en-us/python/api/azure.mgmt.dns.models.ptrrecord?view=azure-python>`__
to create a list of dictionaries representing the record objects.
:param srv_records:
The list of SRV records in the record set. View the
`Azure SDK documentation <https://docs.microsoft.com/en-us/python/api/azure.mgmt.dns.models.srvrecord?view=azure-python>`__
to create a list of dictionaries representing the record objects.
:param txt_records:
The list of TXT records in the record set. View the
`Azure SDK documentation <https://docs.microsoft.com/en-us/python/api/azure.mgmt.dns.models.txtrecord?view=azure-python>`__
to create a list of dictionaries representing the record objects.
:param cname_record:
The CNAME record in the record set. View the
`Azure SDK documentation <https://docs.microsoft.com/en-us/python/api/azure.mgmt.dns.models.cnamerecord?view=azure-python>`__
to create a dictionary representing the record object.
:param soa_record:
The SOA record in the record set. View the
`Azure SDK documentation <https://docs.microsoft.com/en-us/python/api/azure.mgmt.dns.models.soarecord?view=azure-python>`__
to create a dictionary representing the record object.
:param caa_records:
The list of CAA records in the record set. View the
`Azure SDK documentation <https://docs.microsoft.com/en-us/python/api/azure.mgmt.dns.models.caarecord?view=azure-python>`__
to create a list of dictionaries representing the record objects.
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
Example usage:
.. code-block:: yaml
Ensure record set exists:
azurearm_dns.record_set_present:
- name: web
- zone_name: contoso.com
- resource_group: my_rg
- record_type: A
- ttl: 300
- arecords:
- ipv4_address: 10.0.0.1
- metadata:
how_awesome: very
contact_name: Elmer Fudd Gantry
- connection_auth: {{ profile }}
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
record_vars = [
"arecords",
"aaaa_records",
"mx_records",
"ns_records",
"ptr_records",
"srv_records",
"txt_records",
"cname_record",
"soa_record",
"caa_records",
]
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
rec_set = __salt__["azurearm_dns.record_set_get"](
name,
zone_name,
resource_group,
record_type,
azurearm_log_level="info",
**connection_auth
)
if "error" not in rec_set:
metadata_changes = __utils__["dictdiffer.deep_diff"](
rec_set.get("metadata", {}), metadata or {}
)
if metadata_changes:
ret["changes"]["metadata"] = metadata_changes
for record_str in record_vars:
# pylint: disable=eval-used
record = eval(record_str)
if record:
if not ttl:
ret[
"comment"
] = "TTL is required when specifying record information!"
return ret
if not rec_set.get(record_str):
ret["changes"] = {"new": {record_str: record}}
continue
if record_str[-1] != "s":
if not isinstance(record, dict):
ret[
"comment"
] = "{} record information must be specified as a dictionary!".format(
record_str
)
return ret
for k, v in record.items():
if v != rec_set[record_str].get(k):
ret["changes"] = {"new": {record_str: record}}
elif record_str[-1] == "s":
if not isinstance(record, list):
ret["comment"] = (
"{} record information must be specified as a list of"
" dictionaries!".format(record_str)
)
return ret
                    remote = rec_set[record_str] or []
                    for val in record:
                        # Look for an existing record that matches every field
                        # the user specified, comparing strings case-insensitively.
                        matched = False
                        for existing in remote:
                            for key, local_val in val.items():
                                remote_val = existing.get(key)
                                if isinstance(local_val, str):
                                    local_val = local_val.lower()
                                if isinstance(remote_val, str):
                                    remote_val = remote_val.lower()
                                if local_val != remote_val:
                                    break
                            else:
                                matched = True
                                break
                        if not matched:
                            ret["changes"] = {"new": {record_str: record}}
if not ret["changes"]:
ret["result"] = True
ret["comment"] = "Record set {} is already present.".format(name)
return ret
if __opts__["test"]:
ret["result"] = None
ret["comment"] = "Record set {} would be updated.".format(name)
return ret
else:
ret["changes"] = {
"old": {},
"new": {
"name": name,
"zone_name": zone_name,
"resource_group": resource_group,
"record_type": record_type,
"etag": etag,
"metadata": metadata,
"ttl": ttl,
},
}
for record in record_vars:
# pylint: disable=eval-used
if eval(record):
# pylint: disable=eval-used
ret["changes"]["new"][record] = eval(record)
if __opts__["test"]:
ret["comment"] = "Record set {} would be created.".format(name)
ret["result"] = None
return ret
rec_set_kwargs = kwargs.copy()
rec_set_kwargs.update(connection_auth)
rec_set = __salt__["azurearm_dns.record_set_create_or_update"](
name=name,
zone_name=zone_name,
resource_group=resource_group,
record_type=record_type,
if_match=if_match,
if_none_match=if_none_match,
etag=etag,
ttl=ttl,
metadata=metadata,
arecords=arecords,
aaaa_records=aaaa_records,
mx_records=mx_records,
ns_records=ns_records,
ptr_records=ptr_records,
srv_records=srv_records,
txt_records=txt_records,
cname_record=cname_record,
soa_record=soa_record,
caa_records=caa_records,
**rec_set_kwargs
)
if "error" not in rec_set:
ret["result"] = True
ret["comment"] = "Record set {} has been created.".format(name)
return ret
ret["comment"] = "Failed to create record set {}! ({})".format(
name, rec_set.get("error")
)
return ret
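# Illustrative note (not part of the original state module): the record
# parameters are passed through to record_set_create_or_update as lists of
# plain dictionaries whose keys mirror the Azure SDK record models (ARecord,
# MxRecord, and so on). The values below are placeholders.
_EXAMPLE_ARECORDS = [{"ipv4_address": "10.0.0.1"}, {"ipv4_address": "10.0.0.2"}]
_EXAMPLE_MX_RECORDS = [{"preference": 10, "exchange": "mail.contoso.com"}]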
@_deprecation_message
def record_set_absent(name, zone_name, resource_group, connection_auth=None):
"""
.. versionadded:: 3000
Ensure a record set does not exist in the DNS zone.
:param name:
Name of the record set.
:param zone_name:
Name of the DNS zone.
:param resource_group:
The resource group assigned to the DNS zone.
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
rec_set = __salt__["azurearm_dns.record_set_get"](
name, zone_name, resource_group, azurearm_log_level="info", **connection_auth
)
if "error" in rec_set:
ret["result"] = True
ret["comment"] = "Record set {} was not found in zone {}.".format(
name, zone_name
)
return ret
elif __opts__["test"]:
ret["comment"] = "Record set {} would be deleted.".format(name)
ret["result"] = None
ret["changes"] = {
"old": rec_set,
"new": {},
}
return ret
deleted = __salt__["azurearm_dns.record_set_delete"](
name, zone_name, resource_group, **connection_auth
)
if deleted:
ret["result"] = True
ret["comment"] = "Record set {} has been deleted.".format(name)
ret["changes"] = {"old": rec_set, "new": {}}
return ret
ret["comment"] = "Failed to delete record set {}!".format(name)
return ret

File diff suppressed because it is too large Load diff

View file

@@ -1,880 +0,0 @@
"""
Azure (ARM) Resource State Module
.. versionadded:: 2019.2.0
.. warning::
This cloud provider will be removed from Salt in version 3007 in favor of
the `saltext.azurerm Salt Extension
<https://github.com/salt-extensions/saltext-azurerm>`_
:maintainer: <devops@eitr.tech>
:maturity: new
:depends:
* `azure <https://pypi.python.org/pypi/azure>`_ >= 2.0.0
* `azure-common <https://pypi.python.org/pypi/azure-common>`_ >= 1.1.8
* `azure-mgmt <https://pypi.python.org/pypi/azure-mgmt>`_ >= 1.0.0
* `azure-mgmt-compute <https://pypi.python.org/pypi/azure-mgmt-compute>`_ >= 1.0.0
* `azure-mgmt-network <https://pypi.python.org/pypi/azure-mgmt-network>`_ >= 1.7.1
* `azure-mgmt-resource <https://pypi.python.org/pypi/azure-mgmt-resource>`_ >= 1.1.0
* `azure-mgmt-storage <https://pypi.python.org/pypi/azure-mgmt-storage>`_ >= 1.0.0
* `azure-mgmt-web <https://pypi.python.org/pypi/azure-mgmt-web>`_ >= 0.32.0
* `azure-storage <https://pypi.python.org/pypi/azure-storage>`_ >= 0.34.3
* `msrestazure <https://pypi.python.org/pypi/msrestazure>`_ >= 0.4.21
:platform: linux
:configuration: This module requires Azure Resource Manager credentials to be passed as a dictionary of
keyword arguments to the ``connection_auth`` parameter in order to work properly. Since the authentication
parameters are sensitive, it's recommended to pass them to the states via pillar.
Required provider parameters:
if using username and password:
* ``subscription_id``
* ``username``
* ``password``
if using a service principal:
* ``subscription_id``
* ``tenant``
* ``client_id``
* ``secret``
Optional provider parameters:
**cloud_environment**: Used to point the cloud driver to different API endpoints, such as Azure GovCloud. Possible values:
* ``AZURE_PUBLIC_CLOUD`` (default)
* ``AZURE_CHINA_CLOUD``
* ``AZURE_US_GOV_CLOUD``
* ``AZURE_GERMAN_CLOUD``
Example Pillar for Azure Resource Manager authentication:
.. code-block:: yaml
azurearm:
user_pass_auth:
subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617
username: fletch
password: 123pass
mysubscription:
subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617
tenant: ABCDEFAB-1234-ABCD-1234-ABCDEFABCDEF
client_id: ABCDEFAB-1234-ABCD-1234-ABCDEFABCDEF
secret: XXXXXXXXXXXXXXXXXXXXXXXX
cloud_environment: AZURE_PUBLIC_CLOUD
Example states using Azure Resource Manager authentication:
.. code-block:: jinja
{% set profile = salt['pillar.get']('azurearm:mysubscription') %}
Ensure resource group exists:
azurearm_resource.resource_group_present:
- name: my_rg
- location: westus
- tags:
how_awesome: very
contact_name: Elmer Fudd Gantry
- connection_auth: {{ profile }}
Ensure resource group is absent:
azurearm_resource.resource_group_absent:
- name: other_rg
- connection_auth: {{ profile }}
"""
import json
import logging
from functools import wraps
import salt.utils.args
import salt.utils.azurearm
import salt.utils.files
import salt.utils.versions
__virtualname__ = "azurearm_resource"
log = logging.getLogger(__name__)
def __virtual__():
"""
Only make this state available if the azurearm_resource module is available.
"""
if "azurearm_resource.resource_group_check_existence" in __salt__:
return __virtualname__
return (False, "azurearm_resource module could not be loaded")
def _deprecation_message(function):
"""
Decorator wrapper to warn about azurearm deprecation
"""
@wraps(function)
def wrapped(*args, **kwargs):
salt.utils.versions.warn_until(
"Chlorine",
"The 'azurearm' functionality in Salt has been deprecated and its "
"functionality will be removed in version 3007 in favor of the "
"saltext.azurerm Salt Extension. "
"(https://github.com/salt-extensions/saltext-azurerm)",
category=FutureWarning,
)
ret = function(*args, **salt.utils.args.clean_kwargs(**kwargs))
return ret
return wrapped
@_deprecation_message
def resource_group_present(
name, location, managed_by=None, tags=None, connection_auth=None, **kwargs
):
"""
.. versionadded:: 2019.2.0
Ensure a resource group exists.
:param name:
Name of the resource group.
:param location:
The Azure location in which to create the resource group. This value cannot be updated once
the resource group is created.
:param managed_by:
The ID of the resource that manages this resource group. This value cannot be updated once
the resource group is created.
:param tags:
A dictionary of strings can be passed as tag metadata to the resource group object.
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
Example usage:
.. code-block:: yaml
Ensure resource group exists:
azurearm_resource.resource_group_present:
- name: group1
- location: eastus
- tags:
contact_name: Elmer Fudd Gantry
- connection_auth: {{ profile }}
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
group = {}
present = __salt__["azurearm_resource.resource_group_check_existence"](
name, **connection_auth
)
if present:
group = __salt__["azurearm_resource.resource_group_get"](
name, **connection_auth
)
ret["changes"] = __utils__["dictdiffer.deep_diff"](
group.get("tags", {}), tags or {}
)
if not ret["changes"]:
ret["result"] = True
ret["comment"] = "Resource group {} is already present.".format(name)
return ret
if __opts__["test"]:
ret["comment"] = "Resource group {} tags would be updated.".format(name)
ret["result"] = None
ret["changes"] = {"old": group.get("tags", {}), "new": tags}
return ret
elif __opts__["test"]:
ret["comment"] = "Resource group {} would be created.".format(name)
ret["result"] = None
ret["changes"] = {
"old": {},
"new": {
"name": name,
"location": location,
"managed_by": managed_by,
"tags": tags,
},
}
return ret
group_kwargs = kwargs.copy()
group_kwargs.update(connection_auth)
group = __salt__["azurearm_resource.resource_group_create_or_update"](
name, location, managed_by=managed_by, tags=tags, **group_kwargs
)
present = __salt__["azurearm_resource.resource_group_check_existence"](
name, **connection_auth
)
if present:
ret["result"] = True
ret["comment"] = "Resource group {} has been created.".format(name)
ret["changes"] = {"old": {}, "new": group}
return ret
ret["comment"] = "Failed to create resource group {}! ({})".format(
name, group.get("error")
)
return ret
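# Illustrative sketch (not part of the original state module): for an existing
# group only the tags can change, since location and managed_by are fixed at
# creation time, so the change detection above reduces to a tag comparison.
# A simplified stand-in for the dictdiffer.deep_diff call:
def _example_tags_changed(existing_tags, desired_tags):
    """Return True when the desired tags differ from the existing ones."""
    return (existing_tags or {}) != (desired_tags or {})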
@_deprecation_message
def resource_group_absent(name, connection_auth=None):
"""
.. versionadded:: 2019.2.0
Ensure a resource group does not exist in the current subscription.
:param name:
Name of the resource group.
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
group = {}
present = __salt__["azurearm_resource.resource_group_check_existence"](
name, **connection_auth
)
if not present:
ret["result"] = True
ret["comment"] = "Resource group {} is already absent.".format(name)
return ret
elif __opts__["test"]:
group = __salt__["azurearm_resource.resource_group_get"](
name, **connection_auth
)
ret["comment"] = "Resource group {} would be deleted.".format(name)
ret["result"] = None
ret["changes"] = {
"old": group,
"new": {},
}
return ret
group = __salt__["azurearm_resource.resource_group_get"](name, **connection_auth)
deleted = __salt__["azurearm_resource.resource_group_delete"](
name, **connection_auth
)
if deleted:
present = False
else:
present = __salt__["azurearm_resource.resource_group_check_existence"](
name, **connection_auth
)
if not present:
ret["result"] = True
ret["comment"] = "Resource group {} has been deleted.".format(name)
ret["changes"] = {"old": group, "new": {}}
return ret
ret["comment"] = "Failed to delete resource group {}!".format(name)
return ret
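# Illustrative sketch (not part of the original state module): after calling
# resource_group_delete, the state above re-checks existence whenever the
# delete call did not report success, so a slow or partial deletion is not
# misreported. A condensed version of that final verification:
def _example_group_gone(name, deleted, check_existence, **connection_auth):
    """Return True once the group was deleted or can no longer be found."""
    if deleted:
        return True
    return not check_existence(name, **connection_auth)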
@_deprecation_message
def policy_definition_present(
name,
policy_rule=None,
policy_type=None,
mode=None,
display_name=None,
description=None,
metadata=None,
parameters=None,
policy_rule_json=None,
policy_rule_file=None,
template="jinja",
source_hash=None,
source_hash_name=None,
skip_verify=False,
connection_auth=None,
**kwargs
):
"""
.. versionadded:: 2019.2.0
Ensure a security policy definition exists.
:param name:
Name of the policy definition.
:param policy_rule:
A YAML dictionary defining the policy rule. See `Azure Policy Definition documentation
<https://docs.microsoft.com/en-us/azure/azure-policy/policy-definition#policy-rule>`_ for details on the
structure. One of ``policy_rule``, ``policy_rule_json``, or ``policy_rule_file`` is required, and only one of
the three may be supplied.
:param policy_rule_json:
A text field defining the entirety of a policy definition in JSON. See `Azure Policy Definition documentation
<https://docs.microsoft.com/en-us/azure/azure-policy/policy-definition#policy-rule>`_ for details on the
structure. One of ``policy_rule``, ``policy_rule_json``, or ``policy_rule_file`` is required, and only one of
the three may be supplied. Note that the `name` field in the JSON will override the
``name`` parameter in the state.
:param policy_rule_file:
The source of a JSON file defining the entirety of a policy definition. See `Azure Policy Definition
documentation <https://docs.microsoft.com/en-us/azure/azure-policy/policy-definition#policy-rule>`_ for
details on the structure. One of ``policy_rule``, ``policy_rule_json``, or ``policy_rule_file`` is required,
and only one of the three may be supplied. Note that the `name` field in the JSON
will override the ``name`` parameter in the state.
:param skip_verify:
Used for the ``policy_rule_file`` parameter. If ``True``, hash verification of remote file sources
(``http://``, ``https://``, ``ftp://``) will be skipped, and the ``source_hash`` argument will be ignored.
:param source_hash:
This can be a source hash string or the URI of a file that contains source hash strings.
:param source_hash_name:
When ``source_hash`` refers to a hash file, Salt will try to find the correct hash by matching the
filename/URI associated with that hash.
:param policy_type:
The type of policy definition. Possible values are NotSpecified, BuiltIn, and Custom. Only used with the
``policy_rule`` parameter.
:param mode:
The policy definition mode. Possible values are NotSpecified, Indexed, and All. Only used with the
``policy_rule`` parameter.
:param display_name:
The display name of the policy definition. Only used with the ``policy_rule`` parameter.
:param description:
The policy definition description. Only used with the ``policy_rule`` parameter.
:param metadata:
The policy definition metadata defined as a dictionary. Only used with the ``policy_rule`` parameter.
:param parameters:
Required dictionary if a parameter is used in the policy rule. Only used with the ``policy_rule`` parameter.
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
Example usage:
.. code-block:: yaml
Ensure policy definition exists:
azurearm_resource.policy_definition_present:
- name: testpolicy
- display_name: Test Policy
- description: Test policy for testing policies.
- policy_rule:
if:
allOf:
- equals: Microsoft.Compute/virtualMachines/write
source: action
- field: location
in:
- eastus
- eastus2
- centralus
then:
effect: deny
- connection_auth: {{ profile }}
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
if not policy_rule and not policy_rule_json and not policy_rule_file:
ret["comment"] = (
'One of "policy_rule", "policy_rule_json", or "policy_rule_file" is'
" required!"
)
return ret
if (
sum(x is not None for x in [policy_rule, policy_rule_json, policy_rule_file])
> 1
):
ret["comment"] = (
'Only one of "policy_rule", "policy_rule_json", or "policy_rule_file" is'
" allowed!"
)
return ret
if (policy_rule_json or policy_rule_file) and (
policy_type or mode or display_name or description or metadata or parameters
):
ret["comment"] = (
'Policy definitions cannot be passed when "policy_rule_json" or'
' "policy_rule_file" is defined!'
)
return ret
temp_rule = {}
if policy_rule_json:
try:
temp_rule = json.loads(policy_rule_json)
except Exception as exc: # pylint: disable=broad-except
ret["comment"] = "Unable to load policy rule json! ({})".format(exc)
return ret
elif policy_rule_file:
try:
# pylint: disable=unused-variable
sfn, source_sum, comment_ = __salt__["file.get_managed"](
None,
template,
policy_rule_file,
source_hash,
source_hash_name,
None,
None,
None,
__env__,
None,
None,
skip_verify=skip_verify,
**kwargs
)
except Exception as exc: # pylint: disable=broad-except
ret["comment"] = 'Unable to locate policy rule file "{}"! ({})'.format(
policy_rule_file, exc
)
return ret
if not sfn:
ret["comment"] = 'Unable to locate policy rule file "{}"!)'.format(
policy_rule_file
)
return ret
try:
with salt.utils.files.fopen(sfn, "r") as prf:
temp_rule = json.load(prf)
except Exception as exc: # pylint: disable=broad-except
ret["comment"] = 'Unable to load policy rule file "{}"! ({})'.format(
policy_rule_file, exc
)
return ret
if sfn:
salt.utils.files.remove(sfn)
policy_name = name
if policy_rule_json or policy_rule_file:
if temp_rule.get("name"):
policy_name = temp_rule.get("name")
policy_rule = temp_rule.get("properties", {}).get("policyRule")
policy_type = temp_rule.get("properties", {}).get("policyType")
mode = temp_rule.get("properties", {}).get("mode")
display_name = temp_rule.get("properties", {}).get("displayName")
description = temp_rule.get("properties", {}).get("description")
metadata = temp_rule.get("properties", {}).get("metadata")
parameters = temp_rule.get("properties", {}).get("parameters")
policy = __salt__["azurearm_resource.policy_definition_get"](
name, azurearm_log_level="info", **connection_auth
)
if "error" not in policy:
if policy_type and policy_type.lower() != policy.get("policy_type", "").lower():
ret["changes"]["policy_type"] = {
"old": policy.get("policy_type"),
"new": policy_type,
}
if (mode or "").lower() != policy.get("mode", "").lower():
ret["changes"]["mode"] = {"old": policy.get("mode"), "new": mode}
if (display_name or "").lower() != policy.get("display_name", "").lower():
ret["changes"]["display_name"] = {
"old": policy.get("display_name"),
"new": display_name,
}
if (description or "").lower() != policy.get("description", "").lower():
ret["changes"]["description"] = {
"old": policy.get("description"),
"new": description,
}
rule_changes = __utils__["dictdiffer.deep_diff"](
policy.get("policy_rule", {}), policy_rule or {}
)
if rule_changes:
ret["changes"]["policy_rule"] = rule_changes
meta_changes = __utils__["dictdiffer.deep_diff"](
policy.get("metadata", {}), metadata or {}
)
if meta_changes:
ret["changes"]["metadata"] = meta_changes
param_changes = __utils__["dictdiffer.deep_diff"](
policy.get("parameters", {}), parameters or {}
)
if param_changes:
ret["changes"]["parameters"] = param_changes
if not ret["changes"]:
ret["result"] = True
ret["comment"] = "Policy definition {} is already present.".format(name)
return ret
if __opts__["test"]:
ret["comment"] = "Policy definition {} would be updated.".format(name)
ret["result"] = None
return ret
else:
ret["changes"] = {
"old": {},
"new": {
"name": policy_name,
"policy_type": policy_type,
"mode": mode,
"display_name": display_name,
"description": description,
"metadata": metadata,
"parameters": parameters,
"policy_rule": policy_rule,
},
}
if __opts__["test"]:
ret["comment"] = "Policy definition {} would be created.".format(name)
ret["result"] = None
return ret
# Convert OrderedDict to dict
if isinstance(metadata, dict):
metadata = json.loads(json.dumps(metadata))
if isinstance(parameters, dict):
parameters = json.loads(json.dumps(parameters))
policy_kwargs = kwargs.copy()
policy_kwargs.update(connection_auth)
policy = __salt__["azurearm_resource.policy_definition_create_or_update"](
name=policy_name,
policy_rule=policy_rule,
policy_type=policy_type,
mode=mode,
display_name=display_name,
description=description,
metadata=metadata,
parameters=parameters,
**policy_kwargs
)
if "error" not in policy:
ret["result"] = True
ret["comment"] = "Policy definition {} has been created.".format(name)
return ret
ret["comment"] = "Failed to create policy definition {}! ({})".format(
name, policy.get("error")
)
return ret
@_deprecation_message
def policy_definition_absent(name, connection_auth=None):
"""
.. versionadded:: 2019.2.0
Ensure a policy definition does not exist in the current subscription.
:param name:
Name of the policy definition.
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
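    Example usage (an illustrative sketch; the ``{{ profile }}`` value is assumed to be a
    connection_auth mapping rendered from pillar or jinja, as in the other examples in this
    module):
    .. code-block:: yaml
        Ensure policy definition is absent:
            azurearm_resource.policy_definition_absent:
                - name: testpolicy
                - connection_auth: {{ profile }}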
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
policy = __salt__["azurearm_resource.policy_definition_get"](
name, azurearm_log_level="info", **connection_auth
)
if "error" in policy:
ret["result"] = True
ret["comment"] = "Policy definition {} is already absent.".format(name)
return ret
elif __opts__["test"]:
ret["comment"] = "Policy definition {} would be deleted.".format(name)
ret["result"] = None
ret["changes"] = {
"old": policy,
"new": {},
}
return ret
deleted = __salt__["azurearm_resource.policy_definition_delete"](
name, **connection_auth
)
if deleted:
ret["result"] = True
ret["comment"] = "Policy definition {} has been deleted.".format(name)
ret["changes"] = {"old": policy, "new": {}}
return ret
ret["comment"] = "Failed to delete policy definition {}!".format(name)
return ret
@_deprecation_message
def policy_assignment_present(
name,
scope,
definition_name,
display_name=None,
description=None,
assignment_type=None,
parameters=None,
connection_auth=None,
**kwargs
):
"""
.. versionadded:: 2019.2.0
Ensure a security policy assignment exists.
:param name:
Name of the policy assignment.
:param scope:
The scope of the policy assignment.
:param definition_name:
The name of the policy definition to assign.
:param display_name:
The display name of the policy assignment.
:param description:
The policy assignment description.
:param assignment_type:
The type of policy assignment.
:param parameters:
Required dictionary if a parameter is used in the policy rule.
:param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
Example usage:
.. code-block:: yaml
Ensure policy assignment exists:
azurearm_resource.policy_assignment_present:
- name: testassign
- scope: /subscriptions/bc75htn-a0fhsi-349b-56gh-4fghti-f84852
- definition_name: testpolicy
- display_name: Test Assignment
- description: Test assignment for testing assignments.
- connection_auth: {{ profile }}
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
policy = __salt__["azurearm_resource.policy_assignment_get"](
name, scope, azurearm_log_level="info", **connection_auth
)
if "error" not in policy:
if (
assignment_type
and assignment_type.lower() != policy.get("type", "").lower()
):
ret["changes"]["type"] = {"old": policy.get("type"), "new": assignment_type}
if scope.lower() != policy["scope"].lower():
ret["changes"]["scope"] = {"old": policy["scope"], "new": scope}
pa_name = policy["policy_definition_id"].split("/")[-1]
if definition_name.lower() != pa_name.lower():
ret["changes"]["definition_name"] = {"old": pa_name, "new": definition_name}
if (display_name or "").lower() != policy.get("display_name", "").lower():
ret["changes"]["display_name"] = {
"old": policy.get("display_name"),
"new": display_name,
}
if (description or "").lower() != policy.get("description", "").lower():
ret["changes"]["description"] = {
"old": policy.get("description"),
"new": description,
}
param_changes = __utils__["dictdiffer.deep_diff"](
policy.get("parameters", {}), parameters or {}
)
if param_changes:
ret["changes"]["parameters"] = param_changes
if not ret["changes"]:
ret["result"] = True
ret["comment"] = "Policy assignment {} is already present.".format(name)
return ret
if __opts__["test"]:
ret["comment"] = "Policy assignment {} would be updated.".format(name)
ret["result"] = None
return ret
else:
ret["changes"] = {
"old": {},
"new": {
"name": name,
"scope": scope,
"definition_name": definition_name,
"type": assignment_type,
"display_name": display_name,
"description": description,
"parameters": parameters,
},
}
if __opts__["test"]:
ret["comment"] = "Policy assignment {} would be created.".format(name)
ret["result"] = None
return ret
if isinstance(parameters, dict):
parameters = json.loads(json.dumps(parameters))
policy_kwargs = kwargs.copy()
policy_kwargs.update(connection_auth)
policy = __salt__["azurearm_resource.policy_assignment_create"](
name=name,
scope=scope,
definition_name=definition_name,
type=assignment_type,
display_name=display_name,
description=description,
parameters=parameters,
**policy_kwargs
)
if "error" not in policy:
ret["result"] = True
ret["comment"] = "Policy assignment {} has been created.".format(name)
return ret
ret["comment"] = "Failed to create policy assignment {}! ({})".format(
name, policy.get("error")
)
return ret
@_deprecation_message
def policy_assignment_absent(name, scope, connection_auth=None):
"""
.. versionadded:: 2019.2.0
Ensure a policy assignment does not exist in the provided scope.
:param name:
Name of the policy assignment.
:param scope:
The scope of the policy assignment.
    :param connection_auth:
A dict with subscription and authentication parameters to be used in connecting to the
Azure Resource Manager API.
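    Example usage (an illustrative sketch; the scope and ``{{ profile }}`` values are
    placeholders mirroring the policy_assignment_present example above):
    .. code-block:: yaml
        Ensure policy assignment is absent:
            azurearm_resource.policy_assignment_absent:
                - name: testassign
                - scope: /subscriptions/bc75htn-a0fhsi-349b-56gh-4fghti-f84852
                - connection_auth: {{ profile }}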
"""
ret = {"name": name, "result": False, "comment": "", "changes": {}}
if not isinstance(connection_auth, dict):
ret[
"comment"
] = "Connection information must be specified via connection_auth dictionary!"
return ret
policy = __salt__["azurearm_resource.policy_assignment_get"](
name, scope, azurearm_log_level="info", **connection_auth
)
if "error" in policy:
ret["result"] = True
ret["comment"] = "Policy assignment {} is already absent.".format(name)
return ret
elif __opts__["test"]:
ret["comment"] = "Policy assignment {} would be deleted.".format(name)
ret["result"] = None
ret["changes"] = {
"old": policy,
"new": {},
}
return ret
deleted = __salt__["azurearm_resource.policy_assignment_delete"](
name, scope, **connection_auth
)
if deleted:
ret["result"] = True
ret["comment"] = "Policy assignment {} has been deleted.".format(name)
ret["changes"] = {"old": policy, "new": {}}
return ret
ret["comment"] = "Failed to delete policy assignment {}!".format(name)
return ret

View file

@@ -1,338 +0,0 @@
"""
Azure (ARM) Utilities
.. versionadded:: 2019.2.0
:maintainer: <devops@eitr.tech>
:maturity: new
:depends:
* `azure <https://pypi.python.org/pypi/azure>`_ >= 2.0.0rc6
* `azure-common <https://pypi.python.org/pypi/azure-common>`_ >= 1.1.4
* `azure-mgmt <https://pypi.python.org/pypi/azure-mgmt>`_ >= 0.30.0rc6
* `azure-mgmt-compute <https://pypi.python.org/pypi/azure-mgmt-compute>`_ >= 0.33.0
* `azure-mgmt-network <https://pypi.python.org/pypi/azure-mgmt-network>`_ >= 0.30.0rc6
* `azure-mgmt-resource <https://pypi.python.org/pypi/azure-mgmt-resource>`_ >= 0.30.0
* `azure-mgmt-storage <https://pypi.python.org/pypi/azure-mgmt-storage>`_ >= 0.30.0rc6
* `azure-mgmt-web <https://pypi.python.org/pypi/azure-mgmt-web>`_ >= 0.30.0rc6
* `azure-storage <https://pypi.python.org/pypi/azure-storage>`_ >= 0.32.0
* `msrestazure <https://pypi.python.org/pypi/msrestazure>`_ >= 0.4.21
:platform: linux
"""
import importlib
import logging
import sys
from operator import itemgetter
import salt.config
import salt.loader
import salt.utils.args
import salt.utils.stringutils
import salt.utils.versions
import salt.version
from salt.exceptions import SaltInvocationError, SaltSystemExit
try:
from azure.common.credentials import (
ServicePrincipalCredentials,
UserPassCredentials,
)
from msrestazure.azure_cloud import (
MetadataEndpointError,
get_cloud_from_metadata_endpoint,
)
HAS_AZURE = True
except ImportError:
HAS_AZURE = False
__opts__ = salt.config.minion_config("/etc/salt/minion")
__salt__ = salt.loader.minion_mods(__opts__)
log = logging.getLogger(__name__)
def __virtual__():
if not HAS_AZURE:
return False
else:
return True
def _determine_auth(**kwargs):
"""
Acquire Azure ARM Credentials
"""
if "profile" in kwargs:
azure_credentials = __salt__["config.option"](kwargs["profile"])
kwargs.update(azure_credentials)
service_principal_creds_kwargs = ["client_id", "secret", "tenant"]
user_pass_creds_kwargs = ["username", "password"]
try:
if kwargs.get("cloud_environment") and kwargs.get(
"cloud_environment"
).startswith("http"):
cloud_env = get_cloud_from_metadata_endpoint(kwargs["cloud_environment"])
else:
cloud_env_module = importlib.import_module("msrestazure.azure_cloud")
cloud_env = getattr(
cloud_env_module, kwargs.get("cloud_environment", "AZURE_PUBLIC_CLOUD")
)
except (AttributeError, ImportError, MetadataEndpointError):
raise sys.exit(
"The Azure cloud environment {} is not available.".format(
kwargs["cloud_environment"]
)
)
if set(service_principal_creds_kwargs).issubset(kwargs):
if not (kwargs["client_id"] and kwargs["secret"] and kwargs["tenant"]):
raise SaltInvocationError(
"The client_id, secret, and tenant parameters must all be "
"populated if using service principals."
)
else:
credentials = ServicePrincipalCredentials(
kwargs["client_id"],
kwargs["secret"],
tenant=kwargs["tenant"],
cloud_environment=cloud_env,
)
elif set(user_pass_creds_kwargs).issubset(kwargs):
if not (kwargs["username"] and kwargs["password"]):
raise SaltInvocationError(
"The username and password parameters must both be "
"populated if using username/password authentication."
)
else:
credentials = UserPassCredentials(
kwargs["username"], kwargs["password"], cloud_environment=cloud_env
)
elif "subscription_id" in kwargs:
try:
from msrestazure.azure_active_directory import MSIAuthentication
credentials = MSIAuthentication(cloud_environment=cloud_env)
except ImportError:
raise SaltSystemExit(
msg=(
"MSI authentication support not availabe (requires msrestazure >="
" 0.4.14)"
)
)
else:
raise SaltInvocationError(
"Unable to determine credentials. "
"A subscription_id with username and password, "
"or client_id, secret, and tenant or a profile with the "
"required parameters populated"
)
if "subscription_id" not in kwargs:
raise SaltInvocationError("A subscription_id must be specified")
subscription_id = salt.utils.stringutils.to_str(kwargs["subscription_id"])
return credentials, subscription_id, cloud_env
def get_client(client_type, **kwargs):
"""
Dynamically load the selected client and return a management client object
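    A minimal illustrative call (hypothetical credential values; the keyword arguments are
    the ones consumed by _determine_auth above):
    .. code-block:: python
        compute_client = get_client(
            "compute",
            client_id="CLIENT_ID",
            secret="SECRET",
            tenant="TENANT",
            subscription_id="SUBSCRIPTION_ID",
        )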
"""
client_map = {
"compute": "ComputeManagement",
"authorization": "AuthorizationManagement",
"dns": "DnsManagement",
"storage": "StorageManagement",
"managementlock": "ManagementLock",
"monitor": "MonitorManagement",
"network": "NetworkManagement",
"policy": "Policy",
"resource": "ResourceManagement",
"subscription": "Subscription",
"web": "WebSiteManagement",
}
if client_type not in client_map:
raise SaltSystemExit(
msg="The Azure ARM client_type {} specified can not be found.".format(
client_type
)
)
map_value = client_map[client_type]
if client_type in ["policy", "subscription"]:
module_name = "resource"
elif client_type in ["managementlock"]:
module_name = "resource.locks"
else:
module_name = client_type
try:
client_module = importlib.import_module("azure.mgmt." + module_name)
# pylint: disable=invalid-name
Client = getattr(client_module, "{}Client".format(map_value))
except ImportError:
raise sys.exit("The azure {} client is not available.".format(client_type))
credentials, subscription_id, cloud_env = _determine_auth(**kwargs)
if client_type == "subscription":
client = Client(
credentials=credentials,
base_url=cloud_env.endpoints.resource_manager,
)
else:
client = Client(
credentials=credentials,
subscription_id=subscription_id,
base_url=cloud_env.endpoints.resource_manager,
)
client.config.add_user_agent("Salt/{}".format(salt.version.__version__))
return client
def log_cloud_error(client, message, **kwargs):
"""
Log an azurearm cloud error exception
"""
try:
cloud_logger = getattr(log, kwargs.get("azurearm_log_level"))
except (AttributeError, TypeError):
cloud_logger = getattr(log, "error")
cloud_logger(
"An AzureARM %s CloudError has occurred: %s", client.capitalize(), message
)
return
def paged_object_to_list(paged_object):
"""
Extract all pages within a paged object as a list of dictionaries
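    A minimal illustrative call (hypothetical credentials; assumes a paged result such as
    the ones returned by the pinned azure-mgmt client list operations):
    .. code-block:: python
        netconn = get_client(
            "network",
            client_id="CLIENT_ID",
            secret="SECRET",
            tenant="TENANT",
            subscription_id="SUBSCRIPTION_ID",
        )
        nics = paged_object_to_list(netconn.network_interfaces.list_all())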
"""
paged_return = []
while True:
try:
page = next(paged_object)
paged_return.append(page.as_dict())
except StopIteration:
break
return paged_return
def create_object_model(module_name, object_name, **kwargs):
"""
Assemble an object from incoming parameters.
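    A minimal illustrative call (hypothetical values, similar to the VirtualNetwork case
    exercised in the unit tests; keyword arguments not present in the model's _attribute_map
    are silently ignored):
    .. code-block:: python
        vnet = {
            "address_space": {"address_prefixes": ["10.0.0.0/8"]},
            "enable_ddos_protection": False,
            "tags": {"contact_name": "Elmer Fudd Gantry"},
        }
        model = create_object_model("network", "VirtualNetwork", **vnet)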
"""
object_kwargs = {}
try:
model_module = importlib.import_module(
"azure.mgmt.{}.models".format(module_name)
)
# pylint: disable=invalid-name
Model = getattr(model_module, object_name)
except ImportError:
raise sys.exit(
"The {} model in the {} Azure module is not available.".format(
object_name, module_name
)
)
if "_attribute_map" in dir(Model):
for attr, items in Model._attribute_map.items():
param = kwargs.get(attr)
if param is not None:
if items["type"][0].isupper() and isinstance(param, dict):
object_kwargs[attr] = create_object_model(
module_name, items["type"], **param
)
elif items["type"][0] == "{" and isinstance(param, dict):
object_kwargs[attr] = param
elif items["type"][0] == "[" and isinstance(param, list):
obj_list = []
for list_item in param:
if items["type"][1].isupper() and isinstance(list_item, dict):
obj_list.append(
create_object_model(
module_name,
items["type"][
items["type"].index("[")
+ 1 : items["type"].rindex("]")
],
**list_item
)
)
elif items["type"][1] == "{" and isinstance(list_item, dict):
obj_list.append(list_item)
elif not items["type"][1].isupper() and items["type"][1] != "{":
obj_list.append(list_item)
object_kwargs[attr] = obj_list
else:
object_kwargs[attr] = param
# wrap calls to this function to catch TypeError exceptions
return Model(**object_kwargs)
def compare_list_of_dicts(old, new, convert_id_to_name=None):
"""
Compare lists of dictionaries representing Azure objects. Only keys found in the "new" dictionaries are compared to
the "old" dictionaries, since getting Azure objects from the API returns some read-only data which should not be
used in the comparison. A list of parameter names can be passed in order to compare a bare object name to a full
Azure ID path for brevity. If string types are found in values, comparison is case insensitive. Return comment
should be used to trigger exit from the calling function.
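    A minimal illustrative comparison (hypothetical values):
    .. code-block:: python
        old = [{"name": "cfg1", "id": "/some/long/azure/id/cfg1", "port": 80}]
        new = [{"name": "CFG1", "port": 80}]
        compare_list_of_dicts(old, new)
        # -> {} : no changes, because only keys present in "new" are compared
        #    and string comparison is case insensitive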
"""
ret = {}
if not convert_id_to_name:
convert_id_to_name = []
if not isinstance(new, list):
ret["comment"] = "must be provided as a list of dictionaries!"
return ret
if len(new) != len(old):
ret["changes"] = {"old": old, "new": new}
return ret
try:
local_configs, remote_configs = (
sorted(config, key=itemgetter("name")) for config in (new, old)
)
except TypeError:
ret["comment"] = "configurations must be provided as a list of dictionaries!"
return ret
except KeyError:
ret["comment"] = 'configuration dictionaries must contain the "name" key!'
return ret
for idx, val in enumerate(local_configs):
for key in val:
local_val = val[key]
if key in convert_id_to_name:
remote_val = (
remote_configs[idx].get(key, {}).get("id", "").split("/")[-1]
)
else:
remote_val = remote_configs[idx].get(key)
if isinstance(local_val, str):
local_val = local_val.lower()
if isinstance(remote_val, str):
remote_val = remote_val.lower()
if local_val != remote_val:
ret["changes"] = {"old": remote_configs, "new": local_configs}
return ret
return ret

View file

@@ -1,189 +0,0 @@
"""
.. versionadded:: 2015.8.0
Utilities for accessing storage container blobs on Azure
"""
import logging
from salt.exceptions import SaltSystemExit
HAS_LIBS = False
try:
import azure
HAS_LIBS = True
except ImportError:
pass
log = logging.getLogger(__name__)
def get_storage_conn(storage_account=None, storage_key=None, opts=None):
"""
.. versionadded:: 2015.8.0
Return a storage_conn object for the storage account
"""
if opts is None:
opts = {}
if not storage_account:
storage_account = opts.get("storage_account", None)
if not storage_key:
storage_key = opts.get("storage_key", None)
return azure.storage.BlobService(storage_account, storage_key)
def list_blobs(storage_conn=None, **kwargs):
"""
.. versionadded:: 2015.8.0
List blobs associated with the container
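    A minimal illustrative call (hypothetical account values; ``container`` is required and
    the credentials may alternatively come from opts):
    .. code-block:: python
        blobs = list_blobs(
            container="mycontainer",
            storage_account="mystorageaccount",
            storage_key="STORAGE_KEY",
        )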
"""
if not storage_conn:
storage_conn = get_storage_conn(opts=kwargs)
if "container" not in kwargs:
raise SaltSystemExit(
            code=42, msg='A storage container name must be specified as "container"'
)
data = storage_conn.list_blobs(
container_name=kwargs["container"],
prefix=kwargs.get("prefix", None),
marker=kwargs.get("marker", None),
maxresults=kwargs.get("maxresults", None),
include=kwargs.get("include", None),
delimiter=kwargs.get("delimiter", None),
)
ret = {}
for item in data.blobs:
ret[item.name] = object_to_dict(item)
return ret
def put_blob(storage_conn=None, **kwargs):
"""
.. versionadded:: 2015.8.0
Upload a blob
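    A minimal illustrative call (hypothetical values); one of ``blob_path`` or
    ``blob_content`` must be supplied:
    .. code-block:: python
        put_blob(
            container="mycontainer",
            name="test.sls",
            blob_path="/srv/pillar/test.sls",
            storage_account="mystorageaccount",
            storage_key="STORAGE_KEY",
        )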
"""
if not storage_conn:
storage_conn = get_storage_conn(opts=kwargs)
if "container" not in kwargs:
raise SaltSystemExit(
code=42, msg='The blob container name must be specified as "container"'
)
if "name" not in kwargs:
raise SaltSystemExit(code=42, msg='The blob name must be specified as "name"')
if "blob_path" not in kwargs and "blob_content" not in kwargs:
raise SaltSystemExit(
code=42,
msg=(
'Either a path to a file needs to be passed in as "blob_path" '
'or the contents of a blob as "blob_content."'
),
)
blob_kwargs = {
"container_name": kwargs["container"],
"blob_name": kwargs["name"],
"cache_control": kwargs.get("cache_control", None),
"content_language": kwargs.get("content_language", None),
"content_md5": kwargs.get("content_md5", None),
"x_ms_blob_content_type": kwargs.get("blob_content_type", None),
"x_ms_blob_content_encoding": kwargs.get("blob_content_encoding", None),
"x_ms_blob_content_language": kwargs.get("blob_content_language", None),
"x_ms_blob_content_md5": kwargs.get("blob_content_md5", None),
"x_ms_blob_cache_control": kwargs.get("blob_cache_control", None),
"x_ms_meta_name_values": kwargs.get("meta_name_values", None),
"x_ms_lease_id": kwargs.get("lease_id", None),
}
if "blob_path" in kwargs:
data = storage_conn.put_block_blob_from_path(
file_path=kwargs["blob_path"], **blob_kwargs
)
elif "blob_content" in kwargs:
data = storage_conn.put_block_blob_from_bytes(
blob=kwargs["blob_content"], **blob_kwargs
)
return data
def get_blob(storage_conn=None, **kwargs):
"""
.. versionadded:: 2015.8.0
Download a blob
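    A minimal illustrative call (hypothetical values); one of ``local_path`` or
    ``return_content`` must be supplied:
    .. code-block:: python
        data = get_blob(
            container="mycontainer",
            name="test.sls",
            local_path="/tmp/test.sls",
            storage_account="mystorageaccount",
            storage_key="STORAGE_KEY",
        )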
"""
if not storage_conn:
storage_conn = get_storage_conn(opts=kwargs)
if "container" not in kwargs:
raise SaltSystemExit(
code=42, msg='The blob container name must be specified as "container"'
)
if "name" not in kwargs:
raise SaltSystemExit(code=42, msg='The blob name must be specified as "name"')
if "local_path" not in kwargs and "return_content" not in kwargs:
raise SaltSystemExit(
code=42,
msg=(
'Either a local path needs to be passed in as "local_path", '
'or "return_content" to return the blob contents directly'
),
)
blob_kwargs = {
"container_name": kwargs["container"],
"blob_name": kwargs["name"],
"snapshot": kwargs.get("snapshot", None),
"x_ms_lease_id": kwargs.get("lease_id", None),
"progress_callback": kwargs.get("progress_callback", None),
"max_connections": kwargs.get("max_connections", 1),
"max_retries": kwargs.get("max_retries", 5),
"retry_wait": kwargs.get("retry_wait", 1),
}
if "local_path" in kwargs:
data = storage_conn.get_blob_to_path(
file_path=kwargs["local_path"],
open_mode=kwargs.get("open_mode", "wb"),
**blob_kwargs
)
elif "return_content" in kwargs:
data = storage_conn.get_blob_to_bytes(**blob_kwargs)
return data
def object_to_dict(obj):
"""
.. versionadded:: 2015.8.0
Convert an object to a dictionary
"""
if isinstance(obj, list) or isinstance(obj, tuple):
ret = []
for item in obj:
ret.append(object_to_dict(item))
elif hasattr(obj, "__dict__"):
ret = {}
for item in obj.__dict__:
if item.startswith("_"):
continue
ret[item] = object_to_dict(obj.__dict__[item])
else:
ret = obj
return ret

View file

@@ -1,66 +0,0 @@
"""
:codeauthor: Nicole Thomas <nicole@saltstack.com>
"""
import logging
import pytest
from salt.utils.versions import Version
from tests.integration.cloud.helpers.cloud_test_base import CloudTest
try:
import azure # pylint: disable=unused-import
HAS_AZURE = True
except ImportError:
HAS_AZURE = False
if HAS_AZURE and not hasattr(azure, "__version__"):
import azure.common
log = logging.getLogger(__name__)
TIMEOUT = 1000
REQUIRED_AZURE = "1.1.0"
def __has_required_azure():
"""
Returns True/False if the required version of the Azure SDK is installed.
"""
if HAS_AZURE:
if hasattr(azure, "__version__"):
version = Version(azure.__version__)
else:
version = Version(azure.common.__version__)
if Version(REQUIRED_AZURE) <= version:
return True
return False
@pytest.mark.skipif(
not HAS_AZURE, reason="These tests require the Azure Python SDK to be installed."
)
@pytest.mark.skipif(
not __has_required_azure(),
reason="The Azure Python SDK must be >= {}.".format(REQUIRED_AZURE),
)
class AzureTest(CloudTest):
"""
Integration tests for the Azure cloud provider in Salt-Cloud
"""
PROVIDER = "azurearm"
REQUIRED_PROVIDER_CONFIG_ITEMS = ("subscription_id",)
def test_instance(self):
"""
Test creating an instance on Azure
"""
# check if instance with salt installed returned
ret_val = self.run_cloud(
"-p azure-test {}".format(self.instance_name), timeout=TIMEOUT
)
self.assertInstanceExists(ret_val)
self.assertDestroyInstance(timeout=TIMEOUT)

View file

@@ -1,8 +0,0 @@
azure-test:
provider: azurearm-config
image: 'b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04-LTS-amd64-server-20140724-en-us-30GB'
size: Standard_D1
slot: production
ssh_username: ''
ssh_password: ''
script_args: '-P'

View file

@@ -1,16 +0,0 @@
azurearm-config:
driver: azurearm
subscription_id: ''
cleanup_disks: True
cleanup_interfaces: True
cleanup_vhds: True
cleanup_services: True
minion:
master_type: str
username: ''
password: ''
location: ''
network_resource_group: ''
network: ''
subnet: ''
resource_group: ''

View file

@@ -1,161 +0,0 @@
import types
import pytest
from salt.cloud.clouds import azurearm as azure
from tests.support.mock import MagicMock, create_autospec, patch
def copy_func(func, globals=None):
# I do not know that this is complete, but it's sufficient for now.
# The key to "moving" the function to another module (or stubbed module)
# is to update __globals__.
copied_func = types.FunctionType(
func.__code__, globals, func.__name__, func.__defaults__, func.__closure__
)
copied_func.__module__ = func.__module__
copied_func.__doc__ = func.__doc__
copied_func.__kwdefaults__ = func.__kwdefaults__
copied_func.__dict__.update(func.__dict__)
return copied_func
def mock_module(mod, sut=None):
if sut is None:
sut = [None]
mock = create_autospec(mod)
# we need to provide a '__globals__' so functions being tested behave correctly.
mock_globals = {}
# exclude the system under test
for name in sut:
attr = getattr(mod, name)
if isinstance(attr, types.FunctionType):
attr = copy_func(attr, mock_globals)
setattr(mock, name, attr)
# fully populate our mock_globals
for name in mod.__dict__:
if name in mock.__dict__:
mock_globals[name] = mock.__dict__[name]
elif type(getattr(mod, name)) is type(types): # is a module
mock_globals[name] = getattr(mock, name)
else:
mock_globals[name] = mod.__dict__[name]
return mock
@pytest.fixture
def configure_loader_modules():
return {azure: {"__opts__": {}, "__active_provider_name__": None}}
@pytest.mark.skipif(not azure.HAS_LIBS, reason="azure not available")
def test_function_signatures():
mock_azure = mock_module(azure, sut=["request_instance", "__opts__", "__utils__"])
mock_azure.create_network_interface.return_value = [
MagicMock(),
MagicMock(),
MagicMock(),
]
mock_azure.salt.utils.stringutils.to_str.return_value = "P4ssw0rd"
mock_azure.salt.utils.cloud.gen_keys.return_value = [MagicMock(), MagicMock()]
mock_azure.__opts__["pki_dir"] = None
mock_azure.request_instance.__globals__[
"__builtins__"
] = mock_azure.request_instance.__globals__["__builtins__"].copy()
mock_azure.request_instance.__globals__["__builtins__"]["getattr"] = MagicMock()
mock_azure.__utils__["cloud.fire_event"] = mock_azure.salt.utils.cloud.fire_event
mock_azure.__utils__[
"cloud.filter_event"
] = mock_azure.salt.utils.cloud.filter_event
mock_azure.__opts__["sock_dir"] = MagicMock()
mock_azure.__opts__["transport"] = MagicMock()
mock_azure.request_instance(
{"image": "http://img", "storage_account": "blah", "size": ""}
)
# we literally only check that a final creation call occurred.
mock_azure.get_conn.return_value.virtual_machines.create_or_update.assert_called_once()
def test_get_configured_provider():
mock_azure = mock_module(
azure, sut=["get_configured_provider", "__opts__", "__utils__"]
)
good_combos = [
{
"subscription_id": "3287abc8-f98a-c678-3bde-326766fd3617",
"tenant": "ABCDEFAB-1234-ABCD-1234-ABCDEFABCDEF",
"client_id": "ABCDEFAB-1234-ABCD-1234-ABCDEFABCDEF",
"secret": "XXXXXXXXXXXXXXXXXXXXXXXX",
},
{
"subscription_id": "3287abc8-f98a-c678-3bde-326766fd3617",
"username": "larry",
"password": "123pass",
},
{"subscription_id": "3287abc8-f98a-c678-3bde-326766fd3617"},
]
for combo in good_combos:
mock_azure.__opts__["providers"] = {"azure_test": {"azurearm": combo}}
assert azure.get_configured_provider() == combo
bad_combos = [
{"subscrption": "3287abc8-f98a-c678-3bde-326766fd3617"},
{},
]
for combo in bad_combos:
mock_azure.__opts__["providers"] = {"azure_test": {"azurearm": combo}}
assert not azure.get_configured_provider()
def test_get_conn():
mock_azure = mock_module(azure, sut=["get_conn", "__opts__", "__utils__"])
mock_azure.__opts__["providers"] = {
"azure_test": {
"azurearm": {
"subscription_id": "3287abc8-f98a-c678-3bde-326766fd3617",
"driver": "azurearm",
"password": "monkeydonkey",
}
}
}
# password is stripped if username not provided
expected = {"subscription_id": "3287abc8-f98a-c678-3bde-326766fd3617"}
with patch(
"salt.utils.azurearm.get_client", side_effect=lambda client_type, **kw: kw
):
assert azure.get_conn(client_type="compute") == expected
mock_azure.__opts__["providers"] = {
"azure_test": {
"azurearm": {
"subscription_id": "3287abc8-f98a-c678-3bde-326766fd3617",
"driver": "azurearm",
"username": "donkeymonkey",
"password": "monkeydonkey",
}
}
}
# username and password via provider config
expected = {
"subscription_id": "3287abc8-f98a-c678-3bde-326766fd3617",
"username": "donkeymonkey",
"password": "monkeydonkey",
}
with patch(
"salt.utils.azurearm.get_client", side_effect=lambda client_type, **kw: kw
):
assert azure.get_conn(client_type="compute") == expected

View file

@@ -1,96 +0,0 @@
"""
Unit test for salt.grains.metadata_azure
:codeauthor: :email" `Vishal Gupta <guvishal@vmware.com>
"""
import logging
import pytest
import salt.grains.metadata_azure as metadata
import salt.utils.http as http
from tests.support.mock import create_autospec, patch
log = logging.getLogger(__name__)
@pytest.fixture
def configure_loader_modules():
return {metadata: {"__opts__": {"metadata_server_grains": "True"}}}
def test_metadata_azure_search():
def mock_http(url="", headers=False, header_list=None):
metadata_vals = {
"http://169.254.169.254/metadata/instance?api-version=2020-09-01": {
"body": '{"compute": {"test": "fulltest"}}',
"headers": {"Content-Type": "application/json; charset=utf-8"},
},
}
return metadata_vals[url]
with patch(
"salt.utils.http.query",
create_autospec(http.query, autospec=True, side_effect=mock_http),
):
assert metadata.metadata() == {"compute": {"test": "fulltest"}}
def test_metadata_virtual():
print("running 1st")
with patch(
"salt.utils.http.query",
create_autospec(
http.query,
autospec=True,
return_value={
"error": "Bad request: . Required metadata header not specified"
},
),
):
assert metadata.__virtual__() is False
with patch(
"salt.utils.http.query",
create_autospec(
http.query,
autospec=True,
return_value={
"body": '{"compute": {"test": "fulltest"}}',
"headers": {"Content-Type": "application/json; charset=utf-8"},
"status": 200,
},
),
):
assert metadata.__virtual__() is True
with patch(
"salt.utils.http.query",
create_autospec(
http.query,
autospec=True,
return_value={
"body": "test",
"headers": {"Content-Type": "application/json; charset=utf-8"},
"status": 404,
},
),
):
assert metadata.__virtual__() is False
with patch(
"salt.utils.http.query",
create_autospec(
http.query,
autospec=True,
return_value={
"body": "test",
"headers": {"Content-Type": "application/json; charset=utf-8"},
"status": 400,
},
),
):
assert metadata.__virtual__() is False

View file

@@ -1,182 +0,0 @@
import logging
import pytest
import salt.config
import salt.loader
import salt.modules.azurearm_dns as azurearm_dns
from tests.support.mock import MagicMock
from tests.support.sminion import create_sminion
HAS_LIBS = False
try:
import azure.mgmt.dns.models # pylint: disable=import-error
HAS_LIBS = True
except ImportError:
HAS_LIBS = False
log = logging.getLogger(__name__)
pytestmark = [
pytest.mark.skipif(
HAS_LIBS is False, reason="The azure.mgmt.dns module must be installed."
),
]
class AzureObjMock:
"""
mock azure object for as_dict calls
"""
args = None
kwargs = None
def __init__(self, args, kwargs, return_value=None):
self.args = args
self.kwargs = kwargs
self.__return_value = return_value
def __getattr__(self, item):
return self
def __call__(self, *args, **kwargs):
return MagicMock(return_value=self.__return_value)()
def as_dict(self, *args, **kwargs):
return self.args, self.kwargs
class AzureFuncMock:
"""
mock azure client function calls
"""
def __init__(self, return_value=None):
self.__return_value = return_value
def __getattr__(self, item):
return self
def __call__(self, *args, **kwargs):
return MagicMock(return_value=self.__return_value)()
def create_or_update(self, *args, **kwargs):
azure_obj = AzureObjMock(args, kwargs)
return azure_obj
class AzureSubMock:
"""
mock azure client sub-modules
"""
record_sets = AzureFuncMock()
zones = AzureFuncMock()
def __init__(self, return_value=None):
self.__return_value = return_value
def __getattr__(self, item):
return self
def __call__(self, *args, **kwargs):
return MagicMock(return_value=self.__return_value)()
class AzureClientMock:
"""
mock azure client
"""
def __init__(self, return_value=AzureSubMock):
self.__return_value = return_value
def __getattr__(self, item):
return self
def __call__(self, *args, **kwargs):
return MagicMock(return_value=self.__return_value)()
@pytest.fixture
def credentials():
azurearm_dns.__virtual__()
return {
"client_id": "CLIENT_ID",
"secret": "SECRET",
"subscription_id": "SUBSCRIPTION_ID",
"tenant": "TENANT",
}
@pytest.fixture
def configure_loader_modules():
"""
setup loader modules and override the azurearm.get_client utility
"""
minion_config = create_sminion().opts.copy()
utils = salt.loader.utils(minion_config)
funcs = salt.loader.minion_mods(
minion_config, utils=utils, whitelist=["azurearm_dns", "config"]
)
utils["azurearm.get_client"] = AzureClientMock()
return {
azurearm_dns: {"__utils__": utils, "__salt__": funcs},
}
def test_record_set_create_or_update(credentials):
"""
tests record set object creation
"""
expected = {
"if_match": None,
"if_none_match": None,
"parameters": {"arecords": [{"ipv4_address": "10.0.0.1"}], "ttl": 300},
"record_type": "A",
"relative_record_set_name": "myhost",
"resource_group_name": "testgroup",
"zone_name": "myzone",
}
record_set_args, record_set_kwargs = azurearm_dns.record_set_create_or_update(
"myhost",
"myzone",
"testgroup",
"A",
arecords=[{"ipv4_address": "10.0.0.1"}],
ttl=300,
**credentials
)
for key, val in record_set_kwargs.items():
if isinstance(val, azure.mgmt.dns.models.RecordSet):
record_set_kwargs[key] = val.as_dict()
assert record_set_kwargs == expected
def test_zone_create_or_update(credentials):
"""
tests zone object creation
"""
expected = {
"if_match": None,
"if_none_match": None,
"parameters": {"location": "global", "zone_type": "Public"},
"resource_group_name": "testgroup",
"zone_name": "myzone",
}
zone_args, zone_kwargs = azurearm_dns.zone_create_or_update(
"myzone", "testgroup", **credentials
)
for key, val in zone_kwargs.items():
if isinstance(val, azure.mgmt.dns.models.Zone):
zone_kwargs[key] = val.as_dict()
assert zone_kwargs == expected

View file

@@ -1,333 +0,0 @@
"""
Tests for the Azure Blob External Pillar.
"""
import pickle
import time
import pytest
import salt.config
import salt.loader
import salt.pillar.azureblob as azureblob
import salt.utils.files
from tests.support.mock import MagicMock, patch
HAS_LIBS = False
try:
# pylint: disable=no-name-in-module
from azure.storage.blob import BlobServiceClient
# pylint: enable=no-name-in-module
HAS_LIBS = True
except ImportError:
pass
pytestmark = [
pytest.mark.skipif(
HAS_LIBS is False,
reason="The azure.storage.blob module must be installed.",
)
]
class MockBlob(dict):
"""
Creates a Mock Blob object.
"""
name = ""
def __init__(self):
super().__init__(
{
"container": None,
"name": "test.sls",
"prefix": None,
"delimiter": "/",
"results_per_page": None,
"location_mode": None,
}
)
class MockContainerClient:
"""
Creates a Mock ContainerClient.
"""
def __init__(self):
pass
def walk_blobs(self, *args, **kwargs):
yield MockBlob()
def get_blob_client(self, *args, **kwargs):
pass
class MockBlobServiceClient:
"""
Creates a Mock BlobServiceClient.
"""
def __init__(self):
pass
def get_container_client(self, *args, **kwargs):
container_client = MockContainerClient()
return container_client
@pytest.fixture
def cachedir(tmp_path):
dirname = tmp_path / "cachedir"
dirname.mkdir(parents=True, exist_ok=True)
return dirname
@pytest.fixture
def configure_loader_modules(cachedir, tmp_path):
base_pillar = tmp_path / "base"
prod_pillar = tmp_path / "prod"
base_pillar.mkdir(parents=True, exist_ok=True)
prod_pillar.mkdir(parents=True, exist_ok=True)
pillar_roots = {
"base": [str(base_pillar)],
"prod": [str(prod_pillar)],
}
opts = {
"cachedir": cachedir,
"pillar_roots": pillar_roots,
}
return {
azureblob: {"__opts__": opts},
}
def test__init_expired(tmp_path):
"""
Tests the result of _init when the cache is expired.
"""
container = "test"
multiple_env = False
environment = "base"
blob_cache_expire = 0 # The cache will be expired
blob_client = MockBlobServiceClient()
cache_file = tmp_path / "cache_file"
    # Patches the _get_containers_cache_filename function so that it returns the name of the new tempfile that
# represents the cache file
with patch.object(
azureblob,
"_get_containers_cache_filename",
MagicMock(return_value=str(cache_file)),
):
        # Patches the from_connection_string method of the BlobServiceClient class so that a connection string does
# not need to be given. Additionally it returns example blob data used by the ext_pillar.
with patch.object(
BlobServiceClient,
"from_connection_string",
MagicMock(return_value=blob_client),
):
ret = azureblob._init(
"", container, multiple_env, environment, blob_cache_expire
)
expected = {
"base": {
"test": [
{
"container": None,
"name": "test.sls",
"prefix": None,
"delimiter": "/",
"results_per_page": None,
"location_mode": None,
}
]
}
}
assert ret == expected
def test__init_not_expired(tmp_path):
"""
Tests the result of _init when the cache is not expired.
"""
container = "test"
multiple_env = False
environment = "base"
blob_cache_expire = (time.time()) * (time.time()) # The cache will not be expired
metadata = {
"base": {
"test": [
{"name": "base/secret.sls", "relevant": "include.sls"},
{"name": "blobtest.sls", "irrelevant": "ignore.sls"},
]
}
}
cache_file = tmp_path / "cache_file"
# Pickles the metadata and stores it in cache_file
with salt.utils.files.fopen(str(cache_file), "wb") as fp_:
pickle.dump(metadata, fp_)
    # Patches the _get_containers_cache_filename function so that it returns the name of the new tempfile that
# represents the cache file
with patch.object(
azureblob,
"_get_containers_cache_filename",
MagicMock(return_value=str(cache_file)),
):
        # Patches the _read_containers_cache_file function so that it returns what it normally would if the new
# tempfile representing the cache file was passed to it
plugged = azureblob._read_containers_cache_file(str(cache_file))
with patch.object(
azureblob,
"_read_containers_cache_file",
MagicMock(return_value=plugged),
):
ret = azureblob._init(
"", container, multiple_env, environment, blob_cache_expire
)
assert ret == metadata
def test__get_cache_dir(cachedir):
"""
Tests the result of _get_cache_dir.
"""
ret = azureblob._get_cache_dir()
assert ret == str(cachedir / "pillar_azureblob")
def test__get_cached_file_name(cachedir):
"""
Tests the result of _get_cached_file_name.
"""
container = "test"
saltenv = "base"
path = "base/secret.sls"
ret = azureblob._get_cached_file_name(container, saltenv, path)
assert ret == str(cachedir / "pillar_azureblob" / saltenv / container / path)
def test__get_containers_cache_filename(cachedir):
"""
Tests the result of _get_containers_cache_filename.
"""
container = "test"
ret = azureblob._get_containers_cache_filename(container)
assert ret == str(cachedir / "pillar_azureblob" / "test-files.cache")
def test__refresh_containers_cache_file(tmp_path):
"""
Tests the result of _refresh_containers_cache_file to ensure that it successfully copies blob data into a
cache file.
"""
blob_client = MockBlobServiceClient()
container = "test"
cache_file = tmp_path / "cache_file"
with patch.object(
BlobServiceClient,
"from_connection_string",
MagicMock(return_value=blob_client),
):
ret = azureblob._refresh_containers_cache_file("", container, str(cache_file))
expected = {
"base": {
"test": [
{
"container": None,
"name": "test.sls",
"prefix": None,
"delimiter": "/",
"results_per_page": None,
"location_mode": None,
}
]
}
}
assert ret == expected
def test__read_containers_cache_file(tmp_path):
"""
Tests the result of _read_containers_cache_file to make sure that it successfully loads in pickled metadata.
"""
metadata = {
"base": {
"test": [
{"name": "base/secret.sls", "relevant": "include.sls"},
{"name": "blobtest.sls", "irrelevant": "ignore.sls"},
]
}
}
cache_file = tmp_path / "cache_file"
# Pickles the metadata and stores it in cache_file
with salt.utils.files.fopen(str(cache_file), "wb") as fp_:
pickle.dump(metadata, fp_)
# Checks to see if _read_containers_cache_file can successfully read the pickled metadata from the cache file
ret = azureblob._read_containers_cache_file(str(cache_file))
assert ret == metadata
def test__find_files():
"""
    Tests the result of _find_files. Ensures it only finds files and not directories. Ensures it also ignores
irrelevant files.
"""
metadata = {
"test": [
{"name": "base/secret.sls"},
{"name": "blobtest.sls", "irrelevant": "ignore.sls"},
{"name": "base/"},
]
}
ret = azureblob._find_files(metadata)
assert ret == {"test": ["base/secret.sls", "blobtest.sls"]}
def test__find_file_meta1():
"""
Tests the result of _find_file_meta when the metadata contains a blob with the specified path and a blob
without the specified path.
"""
metadata = {
"base": {
"test": [
{"name": "base/secret.sls", "relevant": "include.sls"},
{"name": "blobtest.sls", "irrelevant": "ignore.sls"},
]
}
}
container = "test"
saltenv = "base"
path = "base/secret.sls"
ret = azureblob._find_file_meta(metadata, container, saltenv, path)
assert ret == {"name": "base/secret.sls", "relevant": "include.sls"}
def test__find_file_meta2():
"""
Tests the result of _find_file_meta when the saltenv in metadata does not match the specified saltenv.
"""
metadata = {"wrong": {"test": [{"name": "base/secret.sls"}]}}
container = "test"
saltenv = "base"
path = "base/secret.sls"
ret = azureblob._find_file_meta(metadata, container, saltenv, path)
assert ret is None
def test__find_file_meta3():
"""
    Tests the result of _find_file_meta when the container in metadata does not match the specified container.
"""
metadata = {"base": {"wrong": [{"name": "base/secret.sls"}]}}
container = "test"
saltenv = "base"
path = "base/secret.sls"
ret = azureblob._find_file_meta(metadata, container, saltenv, path)
assert ret is None

View file

@@ -1,55 +0,0 @@
import logging
import pytest
import salt.utils.azurearm as azurearm
from tests.support.unit import TestCase
# Azure libs
# pylint: disable=import-error
HAS_LIBS = False
try:
import azure.mgmt.compute.models # pylint: disable=unused-import
import azure.mgmt.network.models # pylint: disable=unused-import
HAS_LIBS = True
except ImportError:
pass
# pylint: enable=import-error
log = logging.getLogger(__name__)
MOCK_CREDENTIALS = {
"client_id": "CLIENT_ID",
"secret": "SECRET",
"subscription_id": "SUBSCRIPTION_ID",
"tenant": "TENANT",
}
@pytest.mark.skipif(
HAS_LIBS is False, reason="The azure.mgmt.network module must be installed."
)
class AzureRmUtilsTestCase(TestCase):
def test_create_object_model_vnet(self):
module_name = "network"
object_name = "VirtualNetwork"
vnet = {
"address_space": {"address_prefixes": ["10.0.0.0/8"]},
"enable_ddos_protection": False,
"enable_vm_protection": True,
"tags": {"contact_name": "Elmer Fudd Gantry"},
}
model = azurearm.create_object_model(module_name, object_name, **vnet)
self.assertEqual(vnet, model.as_dict())
def test_create_object_model_nic_ref(self):
module_name = "compute"
object_name = "NetworkInterfaceReference"
ref = {
"id": "/subscriptions/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/resourceGroups/rg/providers/Microsoft.Network/networkInterfaces/nic",
"primary": False,
}
model = azurearm.create_object_model(module_name, object_name, **ref)
self.assertEqual(ref, model.as_dict())