lxc: Global doc refresh

Signed-off-by: Mathieu Le Marec - Pasquet <kiorky@cryptelium.net>
Mathieu Le Marec - Pasquet 2015-05-22 18:06:04 +02:00
parent 61ed2f5e76
commit ce11d8352e
3 changed files with 142 additions and 79 deletions


@@ -10,32 +10,43 @@ and possibly remote minion.
In other words, Salt will connect to a minion, then from that minion:
- Provision and configure a container for networking access
- Use the following modules to deploy salt and re-attach to the master:

  - :mod:`lxc runner <salt.runners.lxc>`
  - :mod:`lxc module <salt.modules.lxc>`
  - :mod:`seed <salt.modules.config>`
Limitations
------------
- You can only act on one minion and one provider at a time.
- Listing images must be targeted to a particular LXC provider (nothing will be
  output with ``all``)
.. warning::
    On versions prior to **2015.5.2**, you need to specify the network bridge
    explicitly.
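For instance, a cloud profile can pin the container's interface to an explicit
bridge through an inline network profile. This is only a minimal sketch; the
profile name and the ``lxcbr0`` bridge are illustrative, use whatever bridge
actually exists on the host:

.. code-block:: yaml

    # hypothetical profile snippet pinning the interface to an explicit bridge
    devhost10-lxc:
      provider: devhost10-lxc
      network_profile:
        eth0:
          link: lxcbr0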
Operation
---------
Salt's LXC support uses :mod:`lxc.init <salt.modules.lxc.init>`
via the :mod:`lxc.cloud_init_interface <salt.modules.lxc.cloud_init_interface>`
and seeds the minion via :mod:`seed.mkconfig <salt.modules.seed.mkconfig>`.

You can provide a profile and a network profile to those LXC VMs, just as if
you were using the minion module directly.
Order of operation:
- Create the LXC container on the desired minion (clone or template), using
  :mod:`the LXC execution module <salt.modules.lxc>`
- Change LXC config options (if any need to be changed)
- Start the container
- Change base passwords if any
- Change base DNS configuration if necessary
- Wait for the LXC container to be up and ready for SSH
- Test the SSH connection and bail out on error
- Via SSH (with the help of saltify), upload the deploy script and seeds,
  then re-attach the minion.
Provider configuration
@@ -54,18 +65,24 @@ Here is a simple provider configuration:
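For orientation, a provider definition along these lines (a minimal sketch;
the ``devhost10`` minion id is illustrative and the exact keys may vary with
your Salt version) would live in ``/etc/salt/cloud.providers`` or a file under
``/etc/salt/cloud.providers.d/``:

.. code-block:: yaml

    # hypothetical provider entry: "target" is the minion hosting the containers
    devhost10-lxc:
      target: devhost10
      provider: lxc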
Profile configuration
---------------------
Please read :ref:`tutorial-lxc` before anything else.
And especially :ref:`tutorial-lxc-profiles`.
Here are the options to configure your containers:
.. code-block:: text
target
Host minion ID to install the LXC container into
lxc_profile
Name of the profile or inline options for the LXC VM creation/cloning,
please see :ref:`tutorial-lxc-profiles-container`.
network_profile
Name of the profile or inline options for the LXC VM network settings,
please see :ref:`tutorial-lxc-profiles-network`.
nic_opts
Totally optional.
Per-interface new-style configuration options mappings which will
override any profile default option::
eth0: {'mac': '00:16:3e:01:29:40',
'gateway': None, (default)
@@ -74,78 +91,84 @@ Here are the options to configure your containers:
'netmask': '', (default)
'ip': '22.1.4.25'}}
Container creation/clone options:
Create a container by cloning:
from_container
Name of the original container when using clone
snapshot
Do we use snapshots on cloned filesystems
Create a container from scratch using an LXC template:
image
Template to use
backing
Backing store type (None, lvm, btrfs)
lvname
LVM logical volume name, if any
fstype
Type of filesystem
size
Size of the container (for btrfs or lvm)
vgname
LVM Volume Group name, if any
users
Names of the users to be pre-created inside the container
ssh_username
Username of the SSH systems administrator inside the container
sudo
Do we use sudo
ssh_gateway
If the minion is not in your 'topmaster' network, use
that gateway to connect to the LXC container.
This may be the public IP of the hosting minion
ssh_gateway_key
When using a gateway, the SSH key of the gateway user (passed to saltify)
ssh_gateway_port
When using a gateway, the SSH port of the gateway (passed to saltify)
ssh_gateway_user
When using a gateway, the user to log in as via SSH (passed to saltify)
password
Password for root and sysadmin users (see the "users" parameter above)
mac
MAC address to assign to the container's network interface
ip
IP address to assign to the container's network interface
netmask
Netmask for the network interface's IP
bridge
Bridge under which the container's network interface will be enslaved
dnsservers
List of DNS servers to use. This is optional. If present, DNS servers
will be restricted to that list.
lxc_conf_unset
Configuration variables to unset in this container's LXC configuration
lxc_conf
LXC configuration variables to add to this container's LXC configuration
minion
minion configuration (see :doc:`Minion Configuration in Salt Cloud </topics/cloud/config>`)
Using profiles:
.. code-block:: yaml

    # Note: This example would go in /etc/salt/cloud.profiles or any file in the
    # /etc/salt/cloud.profiles.d/ directory.
    devhost10-lxc:
      provider: devhost10-lxc
      lxc_profile: foo
      network_profile: bar
      minion:
        master: 10.5.0.1
        master_port: 4506
Using inline profiles (e.g. to override the network bridge):
.. code-block:: yaml

    devhost11-lxc:
      provider: devhost10-lxc
      lxc_profile:
        clone_from: foo
      network_profile:
        eth0:
          link: lxcbr0
      minion:
        master: 10.5.0.1
        master_port: 4506
Template instead of a clone:
.. code-block:: yaml

    devhost11-lxc:
      provider: devhost10-lxc
      lxc_profile:
        template: ubuntu
      network_profile:
        eth0:
          link: lxcbr0
      minion:
        master: 10.5.0.1
        master_port: 4506
Static IP:
.. code-block:: yaml

    # Note: This example would go in /etc/salt/cloud.profiles or any file in the
    # /etc/salt/cloud.profiles.d/ directory.
    devhost10-lxc:
      provider: devhost10-lxc
      nic_opts:
        eth0:
          ipv4: 10.0.3.9
      minion:
        master: 10.5.0.1
        master_port: 4506
DHCP:
.. code-block:: yaml

    # Note: This example would go in /etc/salt/cloud.profiles or any file in the
    # /etc/salt/cloud.profiles.d/ directory.
    devhost10-lxc:
      provider: devhost10-lxc
      from_container: ubuntu
      backing: lvm
      sudo: True
      size: 3g
      ip: 10.0.3.9
      minion:
        master: 10.5.0.1
        master_port: 4506
      lxc_conf:
        - lxc.utsname: superlxc
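Using an SSH gateway (a minimal sketch based on the gateway options listed
above; the gateway address, user, and key path are illustrative assumptions):

.. code-block:: yaml

    # hypothetical: the hosting minion is only reachable through a bastion host
    devhost10-lxc:
      provider: devhost10-lxc
      ssh_gateway: 10.0.0.254
      ssh_gateway_user: ubuntu
      ssh_gateway_key: /etc/salt/keys/gateway_id_rsa
      ssh_gateway_port: 22
      minion:
        master: 10.5.0.1
        master_port: 4506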
Driver Support
--------------


@@ -63,9 +63,16 @@ Halite
halite
LXC
===
.. toctree::
:maxdepth: 2
lxc
Using Salt at scale
===================
.. toctree::
:maxdepth: 2
intro_scale


@@ -14,7 +14,9 @@ LXC Management with Salt
Some features are only currently available in the ``develop`` branch, and
are new in the upcoming 2015.5.0 release. These new features will be
clearly labeled.
Even in the 2015.5 release, you will need the latest changeset of this
stable branch for the salt-cloud support to work correctly.
Dependencies
============
@@ -48,8 +50,10 @@ order. This allows for profiles to be defined centrally in the master config
file, with several options for overriding them (if necessary) on groups of
minions or individual minions.
There are two types of profiles:

- One for defining the parameters used in container creation/cloning.
- One for defining the container's network interface(s) settings.
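As a concrete illustration, both kinds of profiles can be defined centrally in
the master configuration file (a minimal sketch; profile names and values are
illustrative):

.. code-block:: yaml

    # hypothetical container profile: how the container itself is built
    lxc.container_profile:
      centos:
        template: centos
        backing: lvm
        vgname: vg1
        size: 8G

    # hypothetical network profile: how its interfaces are wired
    lxc.network_profile:
      centos:
        eth0:
          link: br0
          type: veth
          flags: up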
.. _tutorial-lxc-profiles-container:
@@ -143,9 +147,15 @@ Parameter 2015.5.0 and Newer 2014.7.x and Earlier
Network Profiles
----------------
LXC network profiles are defined underneath the ``lxc.network_profile``
config option.

By default, the module uses a DHCP-based configuration and tries to guess a
bridge to get connectivity.

.. warning::
    On versions prior to **2015.5.2**, you need to specify the network bridge
    explicitly.
.. code-block:: yaml
@@ -227,6 +237,22 @@ container-by-container basis, for instance using the ``nic_opts`` argument to
conflict with static IP addresses set at the container level.
Old lxc support (<1.0.7)
---------------------------
With Salt **2015.5.2** and above, this setting is normally selected
automatically, but on earlier versions you'll need to teach your network
profile to set **lxc.network.ipv4.gateway** to **auto** when using a classic
IPv4 configuration.
Thus, you'll need something like:
.. code-block:: yaml

    lxc.network_profile.foo:
      eth0:
        link: lxcbr0
        ipv4.gateway: auto
Creating a Container on the CLI
===============================
@@ -464,6 +490,13 @@ To run a command and return all information:
salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=True stderr=True
Container Management Using salt-cloud
========================================
Under the hood, Salt Cloud uses the Salt runner and execution module to manage
containers. Please see :ref:`this chapter <config_lxc>`.
Container Management Using States
=================================