From aaca103604b159c406dbf1d4074abca1cc1035e3 Mon Sep 17 00:00:00 2001
From: "Daniel A. Wozniak"
Date: Wed, 6 Sep 2023 16:56:41 -0700
Subject: [PATCH] wip docs for master cluster

---
 doc/topics/highavailability/index.rst   | 15 ++++
 doc/topics/tutorials/master-cluster.rst | 98 +++++++++++++++++++++++++
 2 files changed, 113 insertions(+)
 create mode 100644 doc/topics/tutorials/master-cluster.rst

diff --git a/doc/topics/highavailability/index.rst b/doc/topics/highavailability/index.rst
index 2b7bd48a3ea..4f39dd598a6 100644
--- a/doc/topics/highavailability/index.rst
+++ b/doc/topics/highavailability/index.rst
@@ -8,6 +8,21 @@
 Salt supports several features for high availability and fault tolerance.
 Brief documentation for these features is listed alongside their configuration
 parameters in :ref:`Configuration file examples `.
+
+Master Cluster
+==============
+
+.. versionadded:: 3007
+
+Salt masters can be configured to act as a cluster. All masters in a cluster
+are peers. Job workloads are shared across the cluster. Master clusters
+provide a way to scale masters horizontally. They do not require changes to
+the minions' configuration to add more resources. Cluster implementations are
+expected to use a load balancer, a shared filesystem, and a reliable network.
+
+:ref:`Master Cluster Tutorial <tutorial-master-cluster>`
+
+
 Multimaster
 ===========
 
diff --git a/doc/topics/tutorials/master-cluster.rst b/doc/topics/tutorials/master-cluster.rst
new file mode 100644
index 00000000000..a4419d3e3f7
--- /dev/null
+++ b/doc/topics/tutorials/master-cluster.rst
@@ -0,0 +1,98 @@
+.. _tutorial-master-cluster:
+
+
+==============
+Master Cluster
+==============
+
+A clustered Salt master has several advantages over Salt's traditional high
+availability options. First, a master cluster is meant to be served behind a
+load balancer. Minions only need to know about the load balancer's IP address.
+Therefore, masters can be added to and removed from a cluster without the need
+to re-configure minions. Another major benefit of master clusters over Salt's
+older HA implementations is that masters in a cluster share the load of all
+jobs. This allows Salt administrators to more easily scale their environments
+to handle larger numbers of minions and larger jobs.
+
+Minimum Requirements
+====================
+
+Running a master cluster requires all nodes in the cluster to have a shared
+filesystem. The `cluster_pki_dir`, `cache_dir`, `file_roots`, and
+`pillar_roots` must all be on a shared filesystem. Most implementations will
+also serve the masters' publish and request server ports via a TCP load
+balancer. All of the masters in a cluster are assumed to be running on a
+reliable local area network.
+
+Each master in a cluster maintains its own public and private key, and an
+in-memory AES key. Each cluster peer also has access to the `cluster_pki_dir`,
+where a cluster-wide public and private key are stored. The cluster-wide AES
+key is also generated and stored in the `cluster_pki_dir`. In addition, when
+operating as a cluster, minion keys are stored in the `cluster_pki_dir`
+instead of the master's `pki_dir`.
+
+
+Reference Implementation
+========================
+
+Gluster: https://docs.gluster.org/en/main/Quick-Start-Guide/Quickstart/
+
+HAProxy:
+
+.. code-block:: text
+
+    frontend salt-master-pub
+        mode tcp
+        bind 10.27.5.116:4505
+        option tcplog
+        timeout client 1m
+        default_backend salt-master-pub-backend
+
+    backend salt-master-pub-backend
+        mode tcp
+        option tcplog
+        #option log-health-checks
+        log global
+        #balance source
+        balance roundrobin
+        timeout connect 10s
+        timeout server 1m
+        server rserve1 10.27.12.13:4505 check
+        server rserve2 10.27.7.126:4505 check
+        server rserve3 10.27.3.73:4505 check
+
+    frontend salt-master-req
+        mode tcp
+        bind 10.27.5.116:4506
+        option tcplog
+        timeout client 1m
+        default_backend salt-master-req-backend
+
+    backend salt-master-req-backend
+        mode tcp
+        option tcplog
+        #option log-health-checks
+        log global
+        balance roundrobin
+        #balance source
+        timeout connect 10s
+        timeout server 1m
+        server rserve1 10.27.12.13:4506 check
+        server rserve2 10.27.7.126:4506 check
+        server rserve3 10.27.3.73:4506 check
+
+Master Config:
+
+.. code-block:: yaml
+
+    id: 10.27.12.13
+    cluster_id: master_cluster
+    cluster_peers:
+      - 10.27.7.126
+      - 10.27.3.73
+    cluster_pki_dir: /my/gluster/share/pki
+    cache_dir: /my/gluster/share/cache
+    file_roots:
+      - /my/gluster/share/srv/salt
+    pillar_roots:
+      - /my/gluster/share/srv/pillar
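+
+Minion Config:
+
+Minions connect only to the load balancer, never to individual masters. As a
+minimal sketch (assuming the HAProxy frontend address `10.27.5.116` from the
+example above; substitute your own load balancer address), the matching minion
+configuration would be:
+
+.. code-block:: yaml
+
+    master: 10.27.5.116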