mirror of https://github.com/saltstack/salt.git, synced 2025-04-17 10:10:20 +00:00
Add performance expectations and recommendations to rest_cherrypy docs
This commit is contained in:
parent
e61b83c09e
commit
f1dc63f16c
3 changed files with 106 additions and 2 deletions
@@ -1,3 +1,5 @@
.. _returners:

=========
Returners
=========

@@ -439,6 +439,107 @@ Here is an example of sending urlencoded data:
    --data-urlencode password='1234+' \\
    -d eauth='pam'

Performance Expectations and Recommended Usage
==============================================

This module provides a thin wrapper around :ref:`Salt's Python API
<python-api>`. Executing a Salt command via rest_cherrypy is directly analogous
to executing a Salt command via Salt's CLI (which also uses the Python API) --
they share the same semantics, performance characteristics, and 98% of the same
code. As a rule of thumb: if you wouldn't do it at the CLI, don't do it via
this API.

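For illustration, here is the same ``test.ping`` run both ways. The hostname,
port, and cookie file below are assumptions, not defaults this document
guarantees; adjust them for your deployment:

.. code-block:: bash

    # At the CLI:
    salt '*' test.ping

    # Via rest_cherrypy, reusing a previously saved session cookie:
    curl -b /tmp/cookies.txt -sSi localhost:8000 \
        -H 'Content-type: application/json' \
        -d '[{"client": "local", "tgt": "*", "fun": "test.ping"}]'

Both invocations go through the same Python API and so carry the same cost on
the Salt master.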
Long-Running HTTP Connections
-----------------------------

The CherryPy server is a production-ready, threading HTTP server written in
Python. Because it makes use of a thread pool to process HTTP requests, it is
not ideally suited to maintaining large numbers of concurrent, synchronous
connections. On moderate hardware with default settings it should top out at
around 30 to 50 concurrent connections.

That number of long-running, synchronous Salt processes is also not ideal. As
at the CLI, each Salt command run will start a process that instantiates its
own `LocalClient`, which instantiates its own listener to the Salt event bus,
and sends out its own periodic `saltutil.find_job` queries to determine if a
Minion is still running the command. This is not exactly a lightweight
operation.

Timeouts
--------

In addition to the above resource overhead for long-running connections, there
are the usual HTTP timeout semantics for the CherryPy server, for any HTTP
client being used, and for any hardware in between, such as proxies, gateways,
or load balancers. rest_cherrypy can be configured not to time out long
responses via the ``expire_responses`` setting, and both :py:class:`LocalClient
<salt.client.LocalClient>` and :py:class:`RunnerClient
<salt.runner.RunnerClient>` have their own timeout parameters that may be
passed as top-level keywords:

.. code-block:: bash

    curl -b /tmp/cookies.txt -sSi localhost:8000 \
        -H 'Content-type: application/json' \
        -d '
    [
        {
            "client": "local",
            "tgt": "*",
            "fun": "test.sleep",
            "kwarg": {"length": 30},
            "timeout": 60
        },
        {
            "client": "runner",
            "fun": "test.sleep",
            "kwarg": {"s_time": 30},
            "timeout": 60
        }
    ]
    '

Best Practices
--------------

Given the performance overhead and HTTP timeouts for long-running operations
described above, the most effective and most scalable way to use both Salt and
salt-api is to run commands asynchronously using the ``local_async``,
``runner_async``, and ``wheel_async`` clients.

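As a sketch, dispatching a job asynchronously returns immediately instead of
holding the connection open while the command runs (the hostname, port, and
cookie file are assumptions; adjust for your deployment):

.. code-block:: bash

    curl -b /tmp/cookies.txt -sSi localhost:8000 \
        -H 'Content-type: application/json' \
        -d '[{"client": "local_async", "tgt": "*", "fun": "test.sleep",
              "kwarg": {"length": 30}}]'

The response contains the job ID of the dispatched job rather than the job's
return data, freeing the HTTP thread for the next request.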
Running async jobs results in being able to process 3x more commands per second
for ``LocalClient`` and 17x more commands per second for ``RunnerClient``, in
addition to much lower network traffic and memory requirements. Job returns can
be fetched from Salt's job cache via the ``/jobs/<jid>`` endpoint, or they can
be collected into a data store using Salt's :ref:`Returner system <returners>`.

The ``/events`` endpoint is specifically designed to handle long-running HTTP
connections, and it exposes Salt's event bus, which includes job returns.
Watching this endpoint first, then executing asynchronous Salt commands second,
is the most lightweight and scalable way to use ``rest_cherrypy`` while still
receiving job returns in real time. But this requires clients that can properly
handle the inherent asynchronicity of that workflow.

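A minimal sketch of watching the event bus over HTTP, again assuming the
server address and cookie file from the earlier examples:

.. code-block:: bash

    # Hold this connection open; job returns arrive as server-sent events.
    curl -b /tmp/cookies.txt -sS localhost:8000/events

Start this stream before dispatching async commands so that no returns are
missed between dispatch and subscription.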
Performance Tuning
------------------

The ``thread_pool`` and ``socket_queue_size`` settings can be used to increase
the capacity of rest_cherrypy to handle incoming requests. Keep an eye on RAM
usage as well as available file handles while testing changes to these
settings. Because salt-api is a thin wrapper around Salt's Python API, also
keep an eye on the performance of Salt itself while testing.

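For example, a master config sketch raising both settings; the values shown
are illustrative assumptions for load testing, not recommendations:

.. code-block:: yaml

    rest_cherrypy:
      port: 8000
      # More worker threads allow more concurrent synchronous requests.
      thread_pool: 150
      # A deeper accept queue absorbs short bursts of new connections.
      socket_queue_size: 60

Raise the values incrementally and re-measure, since each additional thread
consumes memory and file handles on the master.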
Future Plans
------------

Now that Salt uses the Tornado concurrency library internally, we plan to
improve performance in the API by taking advantage of existing processes and
event listeners, and to use lightweight coroutines to facilitate more
simultaneous HTTP connections and better support for synchronous operations.
That effort can be tracked in `issue 26505`__, but until that issue is closed
rest_cherrypy will remain the officially recommended REST API.

.. __: https://github.com/saltstack/salt/issues/26505

.. |req_token| replace:: a session token from :py:class:`~Login`.
.. |req_accept| replace:: the desired response format.
.. |req_ct| replace:: the format of the request body.

@@ -2052,8 +2153,8 @@ class Events(object):
     very fast. If a job is created using a regular POST request, it is
     possible that the job return will be available on the SSE stream before
     the response for the POST request arrives. It is important to take that
-    asynchronity into account when designing an application. Below are some
-    general guidelines.
+    asynchronicity into account when designing an application. Below are
+    some general guidelines.

     * Subscribe to the SSE stream _before_ creating any events.
     * Process SSE events directly as they arrive and don't wait for any

@@ -13,6 +13,7 @@ deployed one of two ways.

The default configuration is to run this module using :program:`salt-api` to
start the Python-based CherryPy server. This server is lightweight,
multi-threaded, encrypted with SSL, and should be considered production-ready.
See the section above for performance expectations.

Using a WSGI-compliant web server
---------------------------------
