remove openstack driver

Daniel Wallace 2017-05-03 14:21:58 -06:00
parent 45573a1fdf
commit f7d182871e
GPG key ID: 5FA5E5544F010D48
5 changed files with 0 additions and 1120 deletions


@@ -1,10 +0,0 @@
#my-openstack-hp-config:
# driver: openstack
# identity_url: 'https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/'
# compute_name: Compute
# compute_region: 'az-1.region-a.geo-1'
# tenant: myuser-tenant1
# user: myuser
# ssh_key_name: mykey
# ssh_key_file: '/etc/salt/hpcloud/mykey.pem'
# password: mypass


@@ -1,10 +0,0 @@
#my-openstack-rackspace-config:
# driver: openstack
# identity_url: 'https://identity.api.rackspacecloud.com/v2.0/tokens'
# compute_name: cloudServersOpenStack
# protocol: ipv4
# compute_region: DFW
# protocol: ipv4
# user: myuser
# tenant: 5555555
# apikey: 901d3f579h23c8v73q9


@@ -1,6 +0,0 @@
===========================
salt.cloud.clouds.openstack
===========================

.. automodule:: salt.cloud.clouds.openstack
    :members:


@@ -1,185 +0,0 @@
==============================
Getting Started With OpenStack
==============================

OpenStack is one of the most popular cloud projects. It's an open source
project for building public and/or private clouds. You can use Salt Cloud to
launch OpenStack instances.

Dependencies
============

* Libcloud >= 0.13.2

Configuration
=============

* Using the new format, set up the cloud configuration at
  ``/etc/salt/cloud.providers`` or
  ``/etc/salt/cloud.providers.d/openstack.conf``:

  .. code-block:: yaml

      my-openstack-config:
          # Set the location of the salt-master
          #
          minion:
              master: saltmaster.example.com

          # Configure the OpenStack driver
          #
          identity_url: http://identity.youopenstack.com/v2.0/tokens
          compute_name: nova
          protocol: ipv4
          compute_region: RegionOne

          # Configure OpenStack authentication credentials
          #
          user: myname
          password: 123456
          # tenant is the project name
          tenant: myproject
          driver: openstack

          # skip SSL certificate validation (default false)
          insecure: false


.. note::

    .. versionchanged:: 2015.8.0
        The ``provider`` parameter in cloud provider definitions was renamed
        to ``driver``. This change was made to avoid confusion with the
        ``provider`` parameter that is used in cloud profile definitions.
        Cloud provider definitions now use ``driver`` to refer to the Salt
        cloud module that provides the underlying functionality to connect to
        a cloud host, while cloud profiles continue to use ``provider`` to
        refer to provider configurations that you define.

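
In practice the two keys point at different things: a cloud profile selects a
provider configuration by name via ``provider``, while the provider
configuration itself names the Salt cloud module via ``driver``. A minimal
sketch of the two files side by side (file paths and the profile name are
illustrative):

.. code-block:: yaml

    # /etc/salt/cloud.providers.d/openstack.conf
    my-openstack-config:
        driver: openstack

    # /etc/salt/cloud.profiles.d/openstack.conf
    my-profile:
        provider: my-openstack-config
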

Using nova client to get information from OpenStack
===================================================

One of the best ways to get information about OpenStack is using the novaclient
python package (available on PyPI as python-novaclient). The client
configuration is a set of environment variables that you can get from the
Dashboard. Log in and then go to Project -> Access & Security -> API Access and
download the "OpenStack RC file". Then:

.. code-block:: bash

    source /path/to/your/rcfile
    nova credentials
    nova endpoints

In the ``nova endpoints`` output you can see the information about
``compute_region`` and ``compute_name``.


Compute Region
==============

It depends on the OpenStack cluster that you are using. Please have a look at
the previous sections.

Authentication
==============

The ``user`` and ``password`` are the same credentials used to log into the
OpenStack Dashboard.


Profiles
========

Here is an example of a profile:

.. code-block:: yaml

    openstack_512:
        provider: my-openstack-config
        size: m1.tiny
        image: cirros-0.3.1-x86_64-uec
        ssh_key_file: /tmp/test.pem
        ssh_key_name: test
        ssh_interface: private_ips

The following list explains some of the important properties.

size
    Can be one of the options listed in the output of ``nova flavor-list``.

image
    Can be one of the options listed in the output of ``nova image-list``.

ssh_key_file
    The SSH private key that salt-cloud uses to SSH into the VM after it is
    first booted, in order to execute a command or script. This private key's
    *public key* must be the OpenStack public key inserted into the
    ``authorized_keys`` file of the VM's root user account.

ssh_key_name
    The name of the OpenStack SSH public key that is inserted into the
    ``authorized_keys`` file of the VM's root user account. Prior to using
    this public key, you must use OpenStack commands or the Horizon web UI to
    load that key into the tenant's account. Note that this OpenStack tenant
    must be the one you defined in the cloud provider.

ssh_interface
    This option allows you to create a VM without a public IP. If this option
    is omitted and the VM does not have a public IP, then salt-cloud waits for
    a certain period of time and then destroys the VM. With the nova driver,
    private cloud networks can be defined here.

For more information concerning cloud profiles, see :ref:`here
<salt-cloud-profiles>`.


change_password
~~~~~~~~~~~~~~~

If no ``ssh_key_file`` is provided, and the server already exists,
``change_password`` will use the API to change the root password of the server
so that it can be bootstrapped.

.. code-block:: yaml

    change_password: True

userdata_file
~~~~~~~~~~~~~

Use ``userdata_file`` to specify the userdata file to upload for use with
cloud-init, if available.

.. code-block:: yaml

    my-openstack-config:
        # Pass userdata to the instance to be created
        userdata_file: /etc/salt/cloud-init/packages.yml

.. note::
    As of the 2016.11.4 release, this file can be templated. To use
    templating, simply specify a ``userdata_template`` option in the cloud
    profile:

    .. code-block:: yaml

        my-openstack-config:
            # Pass userdata to the instance to be created
            userdata_file: /etc/salt/cloud-init/packages.yml
            userdata_template: jinja

    If no ``userdata_template`` is set in the cloud profile, then the master
    configuration will be checked for a :conf_master:`userdata_template`
    value. If this is not set, then no templating will be performed on the
    userdata_file.

    To disable templating in a cloud profile when a
    :conf_master:`userdata_template` has been set in the master configuration
    file, simply set ``userdata_template`` to ``False`` in the cloud profile:

    .. code-block:: yaml

        my-openstack-config:
            # Pass userdata to the instance to be created
            userdata_file: /etc/salt/cloud-init/packages.yml
            userdata_template: False


@@ -1,909 +0,0 @@
# -*- coding: utf-8 -*-
'''
OpenStack Cloud Module
======================

OpenStack is an open source project that is in use by a number of cloud
providers, each of which has its own way of using it.

:depends: libcloud >= 0.13.2

OpenStack provides a number of ways to authenticate. This module uses password-
based authentication, using auth v2.0. It is likely to start supporting other
methods of authentication provided by OpenStack in the future.

Note that there is currently a dependency upon netaddr. This can be installed
on Debian-based systems by means of the python-netaddr package.

This module has been tested to work with HP Cloud and Rackspace. See the
documentation for specific options for either of these providers. Some
examples, using the old cloud configuration syntax, are provided below:

Set up in the cloud configuration at ``/etc/salt/cloud.providers`` or
``/etc/salt/cloud.providers.d/openstack.conf``:

.. code-block:: yaml

    my-openstack-config:
        # The OpenStack identity service url
        identity_url: https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/tokens

        # The OpenStack Identity Version (default: 2)
        auth_version: 2

        # The OpenStack compute region
        compute_region: region-b.geo-1

        # The OpenStack compute service name
        compute_name: Compute

        # The OpenStack tenant name (not tenant ID)
        tenant: myuser-tenant1

        # The OpenStack user name
        user: myuser

        # The OpenStack keypair name
        ssh_key_name: mykey

        # Skip SSL certificate validation
        insecure: false

        # The ssh key file
        ssh_key_file: /path/to/keyfile/test.pem

        # The OpenStack network UUIDs
        networks:
            - fixed:
                - 4402cd51-37ee-435e-a966-8245956dc0e6
            - floating:
                - Ext-Net

        files:
            /path/to/dest.txt:
                /local/path/to/src.txt

        # Skips the service catalog API endpoint, and uses the following
        base_url: http://192.168.1.101:3000/v2/12345

        driver: openstack
        userdata_file: /tmp/userdata.txt
        # config_drive is required for userdata at rackspace
        config_drive: True

For in-house OpenStack Essex installations, libcloud needs the service_type:

.. code-block:: yaml

    my-openstack-config:
        identity_url: 'http://control.openstack.example.org:5000/v2.0/'
        compute_name: Compute Service
        service_type: compute

To use identity v3 for authentication, specify the ``domain`` and
``auth_version``:

.. code-block:: yaml

    my-openstack-config:
        identity_url: 'http://control.openstack.example.org:5000/v3/auth/tokens'
        auth_version: 3
        compute_name: Compute Service
        compute_region: East
        service_type: compute
        tenant: tenant
        domain: testing
        user: daniel
        password: securepassword
        driver: openstack

Either a password or an API key must also be specified:

.. code-block:: yaml

    my-openstack-password-or-api-config:
        # The OpenStack password
        password: letmein
        # The OpenStack API key
        apikey: 901d3f579h23c8v73q9

Optionally, if you don't want to save a plain-text password in your
configuration file, you can use keyring:

.. code-block:: yaml

    my-openstack-keyring-config:
        # The OpenStack password is stored in keyring
        # don't forget to set the password by running something like:
        # salt-cloud --set-password=myuser my-openstack-keyring-config
        password: USE_KEYRING

For local installations that only use private IP address ranges, the
following option may be useful. Using the old syntax:

.. code-block:: yaml

    my-openstack-config:
        # Ignore IP addresses on this network for bootstrap
        ignore_cidr: 192.168.50.0/24

It is possible to upload a small set of files (no more than 5, and nothing too
large) to the remote server. Generally this should not be needed, as salt
itself can upload to the server after it is spun up, with nowhere near the
same restrictions.

.. code-block:: yaml

    my-openstack-config:
        files:
            /path/to/dest.txt:
                /local/path/to/src.txt

Alternatively, one could use the private IP to connect by specifying:

.. code-block:: yaml

    my-openstack-config:
        ssh_interface: private_ips

.. note::

    When using floating ips from networks, if the OpenStack driver is unable
    to allocate a new ip address for the server, it will check for
    unassociated ip addresses in the floating ip pool. If SaltCloud is running
    in parallel mode, it is possible that more than one server will attempt to
    use the same ip address.
'''

# Import python libs
from __future__ import absolute_import
import os
import logging
import socket
import pprint

# This import needs to be here so the version check can be done below
import salt.utils.versions

# Import libcloud
try:
    import libcloud
    from libcloud.compute.base import NodeState
    HAS_LIBCLOUD = True
except ImportError:
    HAS_LIBCLOUD = False

# These functions require libcloud trunk or >= 0.14.0
HAS014 = False
try:
    from libcloud.compute.drivers.openstack import OpenStackNetwork
    from libcloud.compute.drivers.openstack import OpenStack_1_1_FloatingIpPool
    # This work-around for Issue #32743 is no longer needed for libcloud >= 1.4.0.
    # However, older versions of libcloud must still be supported with this work-around.
    # This work-around can be removed when the required minimum version of libcloud is
    # 2.0.0 (See PR #40837 - which is implemented in Salt Oxygen).
    if salt.utils.versions.LooseVersion(libcloud.__version__) < \
            salt.utils.versions.LooseVersion('1.4.0'):
        # See https://github.com/saltstack/salt/issues/32743
        import libcloud.security
        libcloud.security.CA_CERTS_PATH.append('/etc/ssl/certs/YaST-CA.pem')
    HAS014 = True
except Exception:
    pass

# Import generic libcloud functions
from salt.cloud.libcloudfuncs import *  # pylint: disable=W0614,W0401

# Import salt libs
import salt.utils.cloud
import salt.utils.files
import salt.utils.pycrypto
import salt.config as config
from salt.exceptions import (
    SaltCloudConfigError,
    SaltCloudNotFound,
    SaltCloudSystemExit,
    SaltCloudExecutionFailure,
    SaltCloudExecutionTimeout
)
from salt.utils.functools import namespaced_function

# Import netaddr IP matching
try:
    from netaddr import all_matching_cidrs
    HAS_NETADDR = True
except ImportError:
    HAS_NETADDR = False

# Get logging started
log = logging.getLogger(__name__)

__virtualname__ = 'openstack'

# Some of the libcloud functions need to be in the same namespace as the
# functions defined in the module, so we create new function objects inside
# this module namespace
get_size = namespaced_function(get_size, globals())
get_image = namespaced_function(get_image, globals())
avail_locations = namespaced_function(avail_locations, globals())
avail_images = namespaced_function(avail_images, globals())
avail_sizes = namespaced_function(avail_sizes, globals())
script = namespaced_function(script, globals())
destroy = namespaced_function(destroy, globals())
reboot = namespaced_function(reboot, globals())
list_nodes = namespaced_function(list_nodes, globals())
list_nodes_full = namespaced_function(list_nodes_full, globals())
list_nodes_select = namespaced_function(list_nodes_select, globals())
show_instance = namespaced_function(show_instance, globals())
get_node = namespaced_function(get_node, globals())


# Only load this module if the OPENSTACK configurations are in place
def __virtual__():
    '''
    Set up the libcloud functions and check for OPENSTACK configurations
    '''
    if get_configured_provider() is False:
        return False

    if get_dependencies() is False:
        return False

    salt.utils.versions.warn_until(
        'Oxygen',
        'This driver has been deprecated and will be removed in the '
        '{version} release of Salt. Please use the nova driver instead.'
    )

    return __virtualname__


def get_configured_provider():
    '''
    Return the first configured instance.
    '''
    return config.is_provider_configured(
        __opts__,
        __active_provider_name__ or __virtualname__,
        ('user',)
    )


def get_dependencies():
    '''
    Warn if dependencies aren't met.
    '''
    deps = {
        'libcloud': HAS_LIBCLOUD,
        'netaddr': HAS_NETADDR
    }
    return config.check_driver_dependencies(
        __virtualname__,
        deps
    )


def get_conn():
    '''
    Return a conn object for the passed VM data
    '''
    vm_ = get_configured_provider()
    driver = get_driver(Provider.OPENSTACK)
    authinfo = {
        'ex_force_auth_url': config.get_cloud_config_value(
            'identity_url', vm_, __opts__, search_global=False
        ),
        'ex_force_service_name': config.get_cloud_config_value(
            'compute_name', vm_, __opts__, search_global=False
        ),
        'ex_force_service_region': config.get_cloud_config_value(
            'compute_region', vm_, __opts__, search_global=False
        ),
        'ex_tenant_name': config.get_cloud_config_value(
            'tenant', vm_, __opts__, search_global=False
        ),
        'ex_domain_name': config.get_cloud_config_value(
            'domain', vm_, __opts__, default='Default', search_global=False
        ),
    }

    service_type = config.get_cloud_config_value('service_type',
                                                 vm_,
                                                 __opts__,
                                                 search_global=False)
    if service_type:
        authinfo['ex_force_service_type'] = service_type

    base_url = config.get_cloud_config_value('base_url',
                                             vm_,
                                             __opts__,
                                             search_global=False)
    if base_url:
        authinfo['ex_force_base_url'] = base_url

    insecure = config.get_cloud_config_value(
        'insecure', vm_, __opts__, search_global=False
    )
    if insecure:
        import libcloud.security
        libcloud.security.VERIFY_SSL_CERT = False

    user = config.get_cloud_config_value(
        'user', vm_, __opts__, search_global=False
    )
    password = config.get_cloud_config_value(
        'password', vm_, __opts__, search_global=False
    )

    if password is not None:
        if config.get_cloud_config_value('auth_version', vm_, __opts__, search_global=False) == 3:
            authinfo['ex_force_auth_version'] = '3.x_password'
        else:
            authinfo['ex_force_auth_version'] = '2.0_password'
        log.debug('OpenStack authenticating using password')
        if password == 'USE_KEYRING':
            # retrieve password from system keyring
            credential_id = "salt.cloud.provider.{0}".format(__active_provider_name__)
            logging.debug("Retrieving keyring password for {0} ({1})".format(
                credential_id,
                user)
            )
            # attempt to retrieve driver specific password first
            driver_password = salt.utils.cloud.retrieve_password_from_keyring(
                credential_id,
                user
            )
            if driver_password is None:
                provider_password = salt.utils.cloud.retrieve_password_from_keyring(
                    credential_id.split(':')[0],  # fallback to provider level
                    user)
                if provider_password is None:
                    raise SaltCloudSystemExit(
                        "Unable to retrieve password from keyring for provider {0}".format(
                            __active_provider_name__
                        )
                    )
                else:
                    actual_password = provider_password
            else:
                actual_password = driver_password
        else:
            actual_password = password
        return driver(
            user,
            actual_password,
            **authinfo
        )

    authinfo['ex_force_auth_version'] = '2.0_apikey'
    log.debug('OpenStack authenticating using apikey')
    return driver(
        user,
        config.get_cloud_config_value('apikey', vm_, __opts__,
                                      search_global=False), **authinfo)


def preferred_ip(vm_, ips):
    '''
    Return the preferred Internet protocol. Either 'ipv4' (default) or 'ipv6'.
    '''
    proto = config.get_cloud_config_value(
        'protocol', vm_, __opts__, default='ipv4', search_global=False
    )

    family = socket.AF_INET
    if proto == 'ipv6':
        family = socket.AF_INET6
    for ip in ips:
        try:
            socket.inet_pton(family, ip)
            return ip
        except Exception:
            continue
    return False


def ignore_cidr(vm_, ip):
    '''
    Return True if we are to ignore the specified IP. Compatible with IPv4.
    '''
    if HAS_NETADDR is False:
        log.error('Error: netaddr is not installed')
        # If we cannot check, assume all is ok
        return False

    cidr = config.get_cloud_config_value(
        'ignore_cidr', vm_, __opts__, default='', search_global=False
    )
    if cidr != '' and all_matching_cidrs(ip, [cidr]):
        log.warning(
            'IP \'{0}\' found within \'{1}\'; ignoring it.'.format(ip, cidr)
        )
        return True

    return False


def ssh_interface(vm_):
    '''
    Return the ssh_interface type to connect to. Either 'public_ips' (default)
    or 'private_ips'.
    '''
    return config.get_cloud_config_value(
        'ssh_interface', vm_, __opts__, default='public_ips',
        search_global=False
    )


def rackconnect(vm_):
    '''
    Determine if we should wait for rackconnect automation before running.
    Either 'False' (default) or 'True'.
    '''
    return config.get_cloud_config_value(
        'rackconnect', vm_, __opts__, default='False',
        search_global=False
    )


def managedcloud(vm_):
    '''
    Determine if we should wait for the managed cloud automation before
    running. Either 'False' (default) or 'True'.
    '''
    return config.get_cloud_config_value(
        'managedcloud', vm_, __opts__, default='False',
        search_global=False
    )


def networks(vm_, kwargs=None):
    conn = get_conn()
    if kwargs is None:
        kwargs = {}

    floating = _assign_floating_ips(vm_, conn, kwargs)
    vm_['floating'] = floating


def request_instance(vm_=None, call=None):
    '''
    Put together all of the information necessary to request an instance on
    OpenStack, and then fire off the request for the instance.

    Returns data about the instance
    '''
    if call == 'function':
        # Technically this function may be called other ways too, but it
        # definitely cannot be called with --function.
        raise SaltCloudSystemExit(
            'The request_instance action must be called with -a or --action.'
        )
    salt.utils.cloud.check_name(vm_['name'], 'a-zA-Z0-9._-')
    conn = get_conn()
    kwargs = {
        'name': vm_['name']
    }

    try:
        kwargs['image'] = get_image(conn, vm_)
    except Exception as exc:
        raise SaltCloudSystemExit(
            'Error creating {0} on OPENSTACK\n\n'
            'Could not find image {1}: {2}\n'.format(
                vm_['name'], vm_['image'], exc
            )
        )

    try:
        kwargs['size'] = get_size(conn, vm_)
    except Exception as exc:
        raise SaltCloudSystemExit(
            'Error creating {0} on OPENSTACK\n\n'
            'Could not find size {1}: {2}\n'.format(
                vm_['name'], vm_['size'], exc
            )
        )

    # Note: This currently requires libcloud trunk
    avz = config.get_cloud_config_value(
        'availability_zone', vm_, __opts__, default=None, search_global=False
    )
    if avz is not None:
        kwargs['ex_availability_zone'] = avz

    kwargs['ex_keyname'] = config.get_cloud_config_value(
        'ssh_key_name', vm_, __opts__, search_global=False
    )

    security_groups = config.get_cloud_config_value(
        'security_groups', vm_, __opts__, search_global=False
    )
    if security_groups is not None:
        vm_groups = security_groups.split(',')
        avail_groups = conn.ex_list_security_groups()
        group_list = []

        for vmg in vm_groups:
            if vmg in [ag.name for ag in avail_groups]:
                group_list.append(vmg)
            else:
                raise SaltCloudNotFound(
                    'No such security group: \'{0}\''.format(vmg)
                )

        kwargs['ex_security_groups'] = [
            g for g in avail_groups if g.name in group_list
        ]

    floating = _assign_floating_ips(vm_, conn, kwargs)
    vm_['floating'] = floating

    files = config.get_cloud_config_value(
        'files', vm_, __opts__, search_global=False
    )
    if files:
        kwargs['ex_files'] = {}
        for src_path in files:
            with salt.utils.files.fopen(files[src_path], 'r') as fp_:
                kwargs['ex_files'][src_path] = fp_.read()

    userdata_file = config.get_cloud_config_value(
        'userdata_file', vm_, __opts__, search_global=False, default=None
    )
    if userdata_file is not None:
        try:
            with salt.utils.files.fopen(userdata_file, 'r') as fp_:
                kwargs['ex_userdata'] = salt.utils.cloud.userdata_template(
                    __opts__, vm_, fp_.read()
                )
        except Exception as exc:
            log.exception(
                'Failed to read userdata from %s: %s', userdata_file, exc)

    config_drive = config.get_cloud_config_value(
        'config_drive', vm_, __opts__, default=None, search_global=False
    )
    if config_drive is not None:
        kwargs['ex_config_drive'] = config_drive

    event_kwargs = {
        'name': kwargs['name'],
        'image': kwargs['image'].name,
        'size': kwargs['size'].name,
        'profile': vm_['profile'],
    }

    __utils__['cloud.fire_event'](
        'event',
        'requesting instance',
        'salt/cloud/{0}/requesting'.format(vm_['name']),
        args={
            'kwargs': __utils__['cloud.filter_event']('requesting', event_kwargs, list(event_kwargs)),
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    default_profile = {}
    if 'profile' in vm_ and vm_['profile'] is not None:
        default_profile = {'profile': vm_['profile']}

    kwargs['ex_metadata'] = config.get_cloud_config_value(
        'metadata', vm_, __opts__, default=default_profile, search_global=False
    )
    if not isinstance(kwargs['ex_metadata'], dict):
        raise SaltCloudConfigError('\'metadata\' should be a dict.')

    try:
        data = conn.create_node(**kwargs)
    except Exception as exc:
        raise SaltCloudSystemExit(
            'Error creating {0} on OpenStack\n\n'
            'The following exception was thrown by libcloud when trying to '
            'run the initial deployment: {1}\n'.format(
                vm_['name'], exc
            )
        )

    vm_['password'] = data.extra.get('password', None)

    return data, vm_


def _query_node_data(vm_, data, floating, conn):
    try:
        node = show_instance(vm_['name'], 'action')
        log.debug(
            'Loaded node data for {0}:\n{1}'.format(
                vm_['name'],
                pprint.pformat(node)
            )
        )
    except Exception as err:
        log.error(
            'Failed to get nodes list: {0}'.format(
                err
            ),
            # Show the traceback if the debug logging level is enabled
            exc_info_on_loglevel=logging.DEBUG
        )
        # Trigger a failure in the wait for IP function
        return False

    running = node['state'] == NodeState.RUNNING
    if not running:
        # Still not running, trigger another iteration
        return

    if rackconnect(vm_) is True:
        check_libcloud_version((0, 14, 0), why='rackconnect: True')
        extra = node.get('extra')
        rc_status = extra.get('metadata', {}).get(
            'rackconnect_automation_status', '')
        access_ip = extra.get('access_ip', '')

        if rc_status != 'DEPLOYED':
            log.debug('Waiting for Rackconnect automation to complete')
            return

    if managedcloud(vm_) is True:
        extra = node.get('extra')
        mc_status = extra.get('metadata', {}).get(
            'rax_service_level_automation', '')

        if mc_status != 'Complete':
            log.debug('Waiting for managed cloud automation to complete')
            return

    public = node['public_ips']
    if floating:
        try:
            name = data.name
            ip = floating[0].ip_address
            conn.ex_attach_floating_ip_to_node(data, ip)
            log.info(
                'Attaching floating IP \'{0}\' to node \'{1}\''.format(
                    ip, name
                )
            )
            data.public_ips.append(ip)
            public = data.public_ips
        except Exception:
            # Note(pabelanger): Because we loop, we only want to attach the
            # floating IP address once. So, expect failures if the IP is
            # already attached.
            pass

    result = []
    private = node['private_ips']
    if private and not public:
        log.warning(
            'Private IPs returned, but not public... Checking for '
            'misidentified IPs'
        )
        for private_ip in private:
            private_ip = preferred_ip(vm_, [private_ip])
            if private_ip is False:
                continue
            if salt.utils.cloud.is_public_ip(private_ip):
                log.warning('{0} is a public IP'.format(private_ip))
                data.public_ips.append(private_ip)
                log.warning(
                    'Public IP address was not ready when we last checked.'
                    ' Appending public IP address now.'
                )
                public = data.public_ips
            else:
                log.warning('{0} is a private IP'.format(private_ip))
                ignore_ip = ignore_cidr(vm_, private_ip)
                if private_ip not in data.private_ips and not ignore_ip:
                    result.append(private_ip)

    if rackconnect(vm_) is True and ssh_interface(vm_) != 'private_ips':
        data.public_ips = access_ip
        return data

    # populate return data with private_ips
    # when ssh_interface is set to private_ips and public_ips exist
    if not result and ssh_interface(vm_) == 'private_ips':
        for private_ip in private:
            ignore_ip = ignore_cidr(vm_, private_ip)
            if private_ip not in data.private_ips and not ignore_ip:
                result.append(private_ip)

    if result:
        log.debug('result = {0}'.format(result))
        data.private_ips = result
        if ssh_interface(vm_) == 'private_ips':
            return data

    if public:
        data.public_ips = public
        if ssh_interface(vm_) != 'private_ips':
            return data


def create(vm_):
    '''
    Create a single VM from a data dict
    '''
    try:
        # Check for required profile parameters before sending any API calls.
        if vm_['profile'] and config.is_profile_configured(__opts__,
                                                           __active_provider_name__ or 'openstack',
                                                           vm_['profile'],
                                                           vm_=vm_) is False:
            return False
    except AttributeError:
        pass

    deploy = config.get_cloud_config_value('deploy', vm_, __opts__)
    key_filename = config.get_cloud_config_value(
        'ssh_key_file', vm_, __opts__, search_global=False, default=None
    )
    if key_filename is not None:
        key_filename = os.path.expanduser(key_filename)
        if not os.path.isfile(key_filename):
            raise SaltCloudConfigError(
                'The defined ssh_key_file \'{0}\' does not exist'.format(
                    key_filename
                )
            )
    vm_['key_filename'] = key_filename

    __utils__['cloud.fire_event'](
        'event',
        'starting create',
        'salt/cloud/{0}/creating'.format(vm_['name']),
        args=__utils__['cloud.filter_event']('creating', vm_, ['name', 'profile', 'provider', 'driver']),
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )
    conn = get_conn()

    if 'instance_id' in vm_:
        # This was probably created via another process, and doesn't have
        # things like salt keys created yet, so let's create them now.
        if 'pub_key' not in vm_ and 'priv_key' not in vm_:
            log.debug('Generating minion keys for \'{0[name]}\''.format(vm_))
            vm_['priv_key'], vm_['pub_key'] = salt.utils.cloud.gen_keys(
                salt.config.get_cloud_config_value(
                    'keysize',
                    vm_,
                    __opts__
                )
            )
        data = conn.ex_get_node_details(vm_['instance_id'])
        if vm_['key_filename'] is None and 'change_password' in __opts__ and __opts__['change_password'] is True:
            vm_['password'] = salt.utils.pycrypto.secure_password()
            conn.ex_set_password(data, vm_['password'])
        networks(vm_)
    else:
        # Put together all of the information required to request the instance,
        # and then fire off the request for it
        data, vm_ = request_instance(vm_)

        # Pull the instance ID, valid for both spot and normal instances
        vm_['instance_id'] = data.id

    try:
        data = salt.utils.cloud.wait_for_ip(
            _query_node_data,
            update_args=(vm_, data, vm_['floating'], conn),
            timeout=config.get_cloud_config_value(
                'wait_for_ip_timeout', vm_, __opts__, default=10 * 60),
            interval=config.get_cloud_config_value(
                'wait_for_ip_interval', vm_, __opts__, default=10),
        )
    except (SaltCloudExecutionTimeout, SaltCloudExecutionFailure) as exc:
        try:
            # It might be already up, let's destroy it!
            destroy(vm_['name'])
        except SaltCloudSystemExit:
            pass
        finally:
            raise SaltCloudSystemExit(str(exc))

    log.debug('VM is now running')

    if ssh_interface(vm_) == 'private_ips':
        ip_address = preferred_ip(vm_, data.private_ips)
    elif rackconnect(vm_) is True and ssh_interface(vm_) != 'private_ips':
        ip_address = data.public_ips
    else:
        ip_address = preferred_ip(vm_, data.public_ips)
    log.debug('Using IP address {0}'.format(ip_address))

    if salt.utils.cloud.get_salt_interface(vm_, __opts__) == 'private_ips':
        salt_ip_address = preferred_ip(vm_, data.private_ips)
        log.info('Salt interface set to: {0}'.format(salt_ip_address))
    else:
        salt_ip_address = preferred_ip(vm_, data.public_ips)
        log.debug('Salt interface set to: {0}'.format(salt_ip_address))

    if not ip_address:
        raise SaltCloudSystemExit('A valid IP address was not found')

    vm_['salt_host'] = salt_ip_address
    vm_['ssh_host'] = ip_address

    ret = __utils__['cloud.bootstrap'](vm_, __opts__)

    ret.update(data.__dict__)

    if hasattr(data, 'extra') and 'password' in data.extra:
        del data.extra['password']

    log.info('Created Cloud VM \'{0[name]}\''.format(vm_))
    log.debug(
        '\'{0[name]}\' VM creation details:\n{1}'.format(
            vm_, pprint.pformat(data.__dict__)
        )
    )

    __utils__['cloud.fire_event'](
        'event',
        'created instance',
        'salt/cloud/{0}/created'.format(vm_['name']),
        args=__utils__['cloud.filter_event']('created', vm_, ['name', 'profile', 'provider', 'driver']),
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    return ret


def _assign_floating_ips(vm_, conn, kwargs):
    floating = []
    nets = config.get_cloud_config_value(
        'networks', vm_, __opts__, search_global=False
    )
    if HAS014:
        if nets is not None:
            for net in nets:
                if 'fixed' in net:
                    kwargs['networks'] = [
                        OpenStackNetwork(n, None, None, None)
                        for n in net['fixed']
                    ]
                elif 'floating' in net:
                    pool = OpenStack_1_1_FloatingIpPool(
                        net['floating'], conn.connection
                    )
                    try:
                        floating.append(pool.create_floating_ip())
                    except Exception as e:
                        log.debug('Cannot allocate IP from floating pool \'%s\'. Checking for unassociated ips.',
                                  net['floating'])
                        for idx in pool.list_floating_ips():
                            if idx.node_id is None:
                                floating.append(idx)
                                break
                    if not floating:
                        raise SaltCloudSystemExit(
                            'There are no more floating IP addresses '
                            'available, please create some more'
                        )
        # otherwise, attempt to obtain list without specifying pool
        # this is the same as 'nova floating-ip-list'
        elif ssh_interface(vm_) != 'private_ips':
            try:
                # This try/except is here because it appears some
                # OpenStack providers return a 404 Not Found for the
                # floating ip pool URL if there are no pools setup
                pool = OpenStack_1_1_FloatingIpPool(
                    '', conn.connection
                )
                try:
                    floating.append(pool.create_floating_ip())
                except Exception as e:
                    log.debug('Cannot allocate IP from the default floating pool. Checking for unassociated ips.')
                    for idx in pool.list_floating_ips():
                        if idx.node_id is None:
                            floating.append(idx)
                            break
                if not floating:
                    log.warning(
                        'There are no more floating IP addresses '
                        'available, please create some more if necessary'
                    )
            except Exception as e:
                if str(e).startswith('404'):
                    pass
                else:
                    raise
    return floating