
Miscellaneous Salt Cloud Options
This page describes various miscellaneous options available in Salt Cloud.
Deploy Script Arguments
Custom deploy scripts are unlikely to need custom arguments passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file to pass arguments to the deploy script:
ec2-amazon:
  provider: my-ec2-config
  image: ami-1624987f
  size: t1.micro
  ssh_username: ec2-user
  script: bootstrap-salt
  script_args: -c /tmp/
This has also been tested to work with pipes, if needed:
script_args: '| head'
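Because script_args can also be set in a map file, a map entry can override the profile's value per host. A minimal sketch building on the ec2-amazon profile above (the host name web1 and the extra -D debug flag are illustrative, not part of the original example):
ec2-amazon:
  - web1:
      script_args: -c /tmp/ -D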
Selecting the File Transport
By default, Salt Cloud uses SFTP to transfer files to Linux hosts. However, if SFTP is not available, or specific SCP functionality is needed, Salt Cloud can be configured to use SCP instead. The transport is selected with the file_transport option:
file_transport: sftp
file_transport: scp
Sync After Install
Salt allows users to create custom modules, grains, and states which can be synchronized to minions to extend Salt with further functionality.
This option will inform Salt Cloud to synchronize your custom modules, grains, states, or all of these to the minion just after it has been created. For this to happen, the following line needs to be added to the main cloud configuration file:
sync_after_install: all
The available options for this setting are:
- modules
- grains
- states
- all
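For reference, this post-install sync is equivalent to running the corresponding saltutil function against the new minion once it is up; with sync_after_install: all, that is roughly (the minion ID mynewminion is hypothetical):
salt 'mynewminion' saltutil.sync_all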
Setting Up New Salt Masters
It has become increasingly common for users to set up multi-hierarchical infrastructures using Salt Cloud. This sometimes involves setting up an instance to be a master in addition to a minion. With that in mind, you can now lay down master configuration on a machine by specifying master options in the profile or map file.
make_master: True
This will cause Salt Cloud to generate master keys for the instance, and tell salt-bootstrap to install the salt-master package, in addition to the salt-minion package.
The default master configuration is usually appropriate for most users, and will not be changed unless specific master configuration has been added to the profile or map:
master:
  user: root
  interface: 0.0.0.0
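Putting this together, a profile that installs a master and overrides parts of its configuration might look like the following sketch (the profile name ec2-master is hypothetical; the provider, image, and size values are reused from the earlier example):
ec2-master:
  provider: my-ec2-config
  image: ami-1624987f
  size: t1.micro
  make_master: True
  master:
    user: root
    interface: 0.0.0.0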
Setting Up a Salt Syndic with Salt Cloud
In addition to setting up new Salt Masters, syndics can also be provisioned using Salt Cloud. In order to set up a Salt Syndic via Salt Cloud, a Salt Master needs to be installed on the new machine and a master configuration file needs to be set up using the make_master setting. This setting can be defined either in a profile config file or in a map file:
make_master: True
To install the Salt Syndic, the only other specification that needs to be configured is the syndic_master key, which specifies the location of the master that the syndic will be reporting to. This modification needs to be placed in the master setting, which can be configured either in the profile, provider, or /etc/salt/cloud config file:
master:
  syndic_master: 192.0.2.10  # may be either an IP address or a hostname
Many other Salt Syndic configuration settings and specifications can be passed through to the new syndic machine via the master configuration setting. See the syndic documentation for more information.
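Combined, a profile that provisions a syndic might look like this sketch (the profile name ec2-syndic is hypothetical, and 192.0.2.10 stands in for the address of the top-level master):
ec2-syndic:
  provider: my-ec2-config
  image: ami-1624987f
  size: t1.micro
  make_master: True
  master:
    syndic_master: 192.0.2.10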
SSH Port
By default, the SSH port is set to port 22. If you want to use a custom port in provider, profile, or map blocks, use the ssh_port option.
New in version 2015.5.0.
ssh_port: 2222
Delete SSH Keys
When Salt Cloud deploys an instance, the SSH host key for the instance is added to the known_hosts file for the user that ran the salt-cloud command. When an instance is deployed, a cloud host generally recycles the IP address for the instance. When Salt Cloud attempts to deploy an instance using a recycled IP address that has previously been accessed from the same machine, the old key in the known_hosts file will cause a conflict.
In order to mitigate this issue, Salt Cloud can be configured to remove old keys from the known_hosts file when destroying the node. In order to do this, the following line needs to be added to the main cloud configuration file:
delete_sshkeys: True
Keeping /tmp/ Files
When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added:
salt-cloud -p myprofile mymachine --keep-tmp
For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).
Hide Output From Minion Install
By default Salt Cloud will stream the output from the minion deploy script directly to STDOUT. Although this can be very useful, in certain cases you may wish to switch this off. The following config option enables or disables this output:
display_ssh_output: False
Connection Timeout
There are several stages when deploying Salt where Salt Cloud needs to wait for something to happen: the VM getting its IP address, the VM's SSH port becoming available, etc.
If you find that the Salt Cloud defaults are not enough and your deployment fails because Salt Cloud did not wait long enough, there are some settings you can tweak.
Note
All settings should be provided in lowercase. All values should be provided in seconds.
You can tweak these settings globally, per cloud provider, or even per profile definition, as illustrated in the sketch after the following list.
wait_for_ip_timeout
The amount of time Salt Cloud should wait for a VM to start and get an IP back from the cloud host. Default: varies by cloud provider (between 5 and 25 minutes).
wait_for_ip_interval
The amount of time Salt Cloud should sleep while querying for the VM's IP. Default: varies by cloud provider (between .5 and 10 seconds).
ssh_connect_timeout
The amount of time Salt Cloud should wait for a successful SSH connection to the VM. Default: varies by cloud provider (between 5 and 15 minutes).
wait_for_passwd_timeout
The amount of time until an SSH connection can be established via password or SSH key. Default: varies by cloud provider (mostly 15 seconds).
wait_for_passwd_maxtries
The number of attempts to connect to the VM before giving up. Default: 15 attempts.
wait_for_fun_timeout
Some cloud drivers, namely SoftLayer and SoftLayer-HW, check for an available IP or a successful SSH connection using a function. This is the amount of time Salt Cloud should retry such functions before failing. Default: 15 minutes.
wait_for_spot_timeout
The amount of time Salt Cloud should wait for an EC2 Spot instance to become available. This setting is only available for the EC2 cloud driver. Default: 10 minutes.
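The same keys can be set at any of those three levels. A minimal sketch of a provider configuration that raises the deploy timeouts, assuming an EC2 provider named my-ec2-config (the values shown are illustrative, not recommendations):
my-ec2-config:
  driver: ec2
  # ... credentials and other provider settings ...
  wait_for_ip_timeout: 1800
  wait_for_ip_interval: 10
  ssh_connect_timeout: 900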
Salt Cloud Cache
Salt Cloud can maintain a cache of node data for supported providers. The following options manage this functionality.
update_cachedir
On supported cloud providers, whether or not to maintain a cache of nodes returned from a --full-query. The data will be stored in msgpack format under <SALT_CACHEDIR>/cloud/active/<DRIVER>/<PROVIDER>/<NODE_NAME>.p. This setting can be True or False.
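For example, to enable the cache in the main cloud configuration file (normally /etc/salt/cloud):
update_cachedir: True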
diff_cache_events
When the cloud cachedir is being managed, if differences are encountered between the data that is returned live from the cloud host and the data in the cache, fire events which describe the changes. This setting can be True or False.
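Like update_cachedir, this is a single boolean in the main cloud configuration file:
diff_cache_events: True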
Some of these events will contain data which describe a node. Because some of the fields returned may contain sensitive data, the cache_event_strip_fields configuration option exists to strip those fields from the event return.
cache_event_strip_fields:
  - password
  - priv_key
The following are events that can be fired based on this data.
salt/cloud/minionid/cache_node_new
A new node was found on the cloud host which was not listed in the cloud cachedir. A dict describing the new node will be contained in the event.
salt/cloud/minionid/cache_node_missing
A node that was previously listed in the cloud cachedir is no longer available on the cloud host.
salt/cloud/minionid/cache_node_diff
One or more pieces of data in the cloud cachedir have changed on the cloud host. A dict containing both the old and the new data will be contained in the event.
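Because these are ordinary events on the master's event bus, they can be wired into reactors like any other event. A minimal sketch of a master configuration that reacts to newly discovered nodes (the reactor SLS path is hypothetical):
reactor:
  - 'salt/cloud/*/cache_node_new':
    - /srv/reactor/new_node.sls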
SSH Known Hosts
Normally when bootstrapping a VM, salt-cloud will ignore the SSH host key. This is because it does not know what the host key is before starting (because it doesn't exist yet). If strict host key checking is turned on without the key in the known_hosts file, then the host will never be available, and cannot be bootstrapped.
If a provider is able to determine the host key before trying to bootstrap it, that provider's driver can add it to the known_hosts file, and then turn on strict host key checking. This can be set up in the main cloud configuration file (normally /etc/salt/cloud) or in the provider-specific configuration file:
known_hosts_file: /path/to/.ssh/known_hosts
If this is not set, it will default to /dev/null, and strict host key checking will be turned off.
It is highly recommended that this option is not set, unless the user has verified that the provider supports this functionality, and that the image being used is capable of providing the necessary information. At this time, only the EC2 driver supports this functionality.
SSH Agent
New in version 2015.5.0.
If the SSH key is not stored on the server salt-cloud is being run on, set ssh_agent, and salt-cloud will use the forwarded ssh-agent to authenticate.
ssh_agent: True
File Map Upload
New in version 2014.7.0.
The file_map option allows an arbitrary group of files to be uploaded to the target system before running the deploy script. This functionality requires that the provider use salt.utils.cloud.bootstrap(), which is currently limited to the ec2, gce, openstack and nova drivers.
The file_map can be configured globally in /etc/salt/cloud, or in any cloud provider or profile file. For example, to upload an extra package or a custom deploy script, a cloud profile using file_map might look like:
ubuntu14:
  provider: ec2-config
  image: ami-98aa1cf0
  size: t1.micro
  ssh_username: root
  securitygroup: default
  file_map:
    /local/path/to/custom/script: /remote/path/to/use/custom/script
    /local/path/to/package: /remote/path/to/store/package
Running Pre-Flight Commands
New in version 2018.3.0.
To execute specified preflight shell commands on a VM before the deploy script is run, use the preflight_cmds option. These must be defined as a list in a cloud configuration file. For example:
my-cloud-profile:
  provider: linode-config
  image: Ubuntu 16.04 LTS
  size: Linode 2048
  preflight_cmds:
    - whoami
    - echo 'hello world!'
These commands will run in sequence before the bootstrap script is executed.
Force Minion Config
New in version 2018.3.0.
The force_minion_config option requests the bootstrap process to overwrite an existing minion configuration file and public/private key files. Default: False
This might be important for drivers (such as saltify) which are expected to take over a connection from a former salt master.
my_saltify_provider:
  driver: saltify
  force_minion_config: true