This documentation is for StackStorm version 3.5.0.

StackStorm HA Cluster in Kubernetes - BETA

This document provides an installation blueprint for a Highly Available StackStorm cluster based on Kubernetes, a container orchestration platform at planet scale.

The cluster deploys a minimum of 2 replicas for each StackStorm microservice for redundancy and reliability. It also configures the backends st2 relies on: a MongoDB HA ReplicaSet for the database, a RabbitMQ HA cluster for the communication bus, and a Redis Sentinel cluster for distributed coordination. This results in a fleet of more than 30 pods in total.

The source code for K8s resource templates is available as a GitHub repo: StackStorm/stackstorm-ha.


Beta quality! As this deployment method is in beta, the documentation and code may be substantially modified and refactored.



This document assumes some basic knowledge of Kubernetes and Helm. Please refer to K8s and Helm documentation if you find any difficulties using these tools.

However, here are some minimal instructions to get started.


The StackStorm HA cluster is available as a Helm chart, a bundled K8s package which makes installing the complex StackStorm infrastructure as easy as:

# Add Helm StackStorm repository
helm repo add stackstorm https://helm.stackstorm.com/

# Install the StackStorm HA cluster
helm install <release-name> stackstorm/stackstorm-ha

Once the deployment is finished, it will show you the first steps to get started working with the new cluster via the WebUI or the st2 CLI client.


The installation uses some unsafe defaults which we recommend you change for production use via Helm values.yaml.

Helm Values

The Helm package stackstorm-ha comes with default settings (see values.yaml). Fine-tune them to achieve the desired configuration for the StackStorm HA K8s cluster.


Keep custom values you want to override in a separate YAML file so they won’t get lost. Example: helm install -f custom_values.yaml or helm upgrade -f custom_values.yaml

You can configure:

  • number of replicas for each component

  • st2 auth secrets

  • st2.conf settings

  • RBAC roles, assignments and mappings (enterprise only for StackStorm v3.2 and before, open source for StackStorm v3.4 and later)

  • custom st2 packs and their configs

  • SSH private key

  • K8s resources and settings to control pod/deployment placement

  • MongoDB and RabbitMQ clusters
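As an illustration, a minimal custom_values.yaml overriding replica counts and access credentials might look like this (the key names below are assumptions based on the chart layout; verify them against your chart version's values.yaml):

```yaml
# custom_values.yaml -- illustrative overrides only; check the chart's
# values.yaml for the authoritative key names in your chart version.
st2web:
  replicas: 2
st2api:
  replicas: 3
secrets:
  st2:
    # Replace the unsafe default StackStorm admin credentials
    username: st2admin
    password: CHANGE-ME
```

Pass the file via helm install -f custom_values.yaml or helm upgrade -f custom_values.yaml as described above.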


It’s highly recommended to set your own secrets, as the defaults file contains unsafe values like SSH keys, StackStorm access credentials and MongoDB/RabbitMQ passwords!


After making changes to Helm values, upgrade the cluster:

helm repo update
helm upgrade <release-name> stackstorm/stackstorm-ha

It will redeploy the components affected by the change, maintaining the desired number of replicas so that every service stays available during the rolling upgrade.

Enterprise (EWC)

For StackStorm versions earlier than 3.3, Extreme Networks provided a commercial version of the StackStorm automation platform (EWC). EWC added priority support, advanced features such as fine-tuned access control (RBAC), LDAP, and Workflow Designer.

As these enterprise features were donated to the Linux Foundation, RBAC, LDAP and Workflow Designer components are now available in StackStorm Open Source since 3.4. The Workflow Designer is integrated into the StackStorm Web UI, and RBAC and LDAP components are installed (but not configured) with the default installation.

Tips & Tricks

Save custom Helm values you want to override in a separate file, then upgrade the cluster:

helm upgrade -f custom_values.yaml <release-name> stackstorm/stackstorm-ha

Get all logs for entire StackStorm cluster with dependent services for Helm release:

kubectl logs -l release=<release-name>

Grab all logs only for stackstorm backend services, excluding st2web and DB/MQ/redis:

kubectl logs -l release=<release-name>,tier=backend

Custom st2 packs

To follow the stateless model, shipping custom st2 packs is now part of the deployment process. This means that st2 pack install won’t work in a distributed environment: instead, you have to bundle all the required packs into a Docker image that you can codify, version, package and distribute in a repeatable way. This Docker image holds the pack content and their virtualenvs, so the custom st2 pack image you build is essentially a couple of read-only directories shared with the corresponding st2 services in the cluster.

For your convenience, we created a new st2-pack-install <pack1> <pack2> <pack3> utility, included in the stackstorm/st2packs container, that helps install custom packs during the Docker build process without relying on a live DB and MQ connection.

For more detailed instructions see StackStorm/st2packs-dockerfiles on how to build your custom st2packs image.

Please refer to StackStorm/stackstorm-ha#install-custom-st2-packs-in-the-cluster in the Helm chart repository for more information about how to reference a custom st2packs Docker image in Helm values, provide pack configs, use a private Docker registry and more.
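As a sketch, referencing a custom packs image in Helm values might look like the following (the registry, image name and tag are placeholders, and the key path is an assumption to verify against the chart's values.yaml):

```yaml
st2:
  packs:
    images:
      # Hypothetical image built with StackStorm/st2packs-dockerfiles
      - repository: registry.example.com/myorg
        name: st2packs
        tag: "1.0.0"
        pullPolicy: IfNotPresent
```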


There is an alternative approach: sharing pack content via a read-write-many NFS (Network File System) volume, as recommended in the High Availability Deployment guide. While the beta is in progress and both methods have their pros and cons, we’d like to hear your feedback on which way works better for you.


Ingress is worth considering if you want to expose multiple services under the same IP address, and these services all use the same L7 protocol (typically HTTP). You only pay for one load balancer if you are using native cloud integration, and because Ingress is “smart”, you can get a lot of features out of the box (like SSL, Auth, Routing, etc.). See the ingress section in values.yaml for configuration details.

You will first need to deploy an ingress controller of your preference. See Additional Controllers for more information.
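A minimal sketch of enabling ingress via Helm values, assuming an nginx ingress controller is already deployed and using a placeholder hostname (see the ingress section in values.yaml for the exact schema):

```yaml
ingress:
  enabled: true
  annotations:
    # Assumes an nginx ingress controller in the cluster
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: st2.example.com  # placeholder hostname
```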


For HA reasons, by default and at a minimum StackStorm K8s cluster deploys more than 30 pods in total. This section describes their role and deployment specifics.

The Community FOSS Dockerfiles used to generate the docker images for each st2 component are available at StackStorm/st2-dockerfiles.


A helper container to switch into and run st2 CLI commands against the deployed StackStorm cluster. All resources like credentials, configs, RBAC, packs, keys and secrets are shared with this container.

# obtain st2client pod name
ST2CLIENT=$(kubectl get pod -l app=st2client -o jsonpath="{.items[0].metadata.name}")

# run a single st2 client command
kubectl exec -it ${ST2CLIENT} -- st2 --version

# switch into a container shell and use st2 CLI
kubectl exec -it ${ST2CLIENT} -- /bin/bash


st2web is the StackStorm Web UI admin dashboard. By default, the st2web K8s config includes a Pod Deployment and a Service. 2 replicas (configurable) of st2web serve the web app and proxy requests to st2auth, st2api and st2stream. By default, st2web uses HTTP instead of HTTPS. We recommend relying on a LoadBalancer or Ingress to add an HTTPS layer on top of it.


By default, st2web is a NodePort Service and is not exposed to the public net. If your Kubernetes cluster setup supports the LoadBalancer service type, you can edit the corresponding helm values to configure st2web as a LoadBalancer service in order to expose it and the services it proxies to the public net.
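For example, switching st2web to a LoadBalancer service could be sketched as follows (the key path is assumed from the chart's values.yaml; verify it against your chart version):

```yaml
st2web:
  service:
    # Requires a K8s provider that supports the LoadBalancer service type
    type: LoadBalancer
```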


All authentication is managed by the st2auth service. K8s configuration includes a Pod Deployment backed by 2 replicas by default and Service of type ClusterIP listening on port 9100. Multiple st2auth processes can be behind a load balancer in an active-active configuration. You can increase the number of replicas if required.


This service hosts the REST API endpoints that serve requests from WebUI, CLI, ChatOps and other st2 components. K8s configuration consists of Pod Deployment with 2 default replicas for HA and ClusterIP Service accepting HTTP requests on port 9101. This is one of the most important StackStorm services. We recommend increasing the number of replicas to distribute load if you are planning a high-volume environment.


st2stream exposes a server-sent event stream, used by clients like the WebUI and ChatOps to receive updates from the st2stream server. Similar to st2auth and st2api, the st2stream K8s configuration includes a Pod Deployment with 2 replicas for HA (can be increased in values.yaml) and a ClusterIP Service listening on port 9102.


st2rulesengine evaluates rules when it sees new triggers and decides if new action execution should be requested. K8s config includes Pod Deployment with 2 (configurable) replicas by default for HA.


st2timersengine is responsible for scheduling all user-specified timers (aka st2 cron). Only a single replica is created via a K8s Deployment, as the timersengine can’t work in active-active mode at the moment (multiple timers would produce duplicated events); it relies on K8s failover/reschedule capabilities to handle process failure.


st2workflowengine drives the execution of Orquesta workflows and schedules actions to run via another component, st2actionrunner. Multiple st2workflowengine processes can run in active-active mode, so a minimum of 2 K8s Deployment replicas are created by default. All the workflow engine processes share the load and pick up more work if one or more of the processes become unavailable.


Multiple st2notifier processes can run in active-active mode, using connections to RabbitMQ and MongoDB and generating triggers based on action execution completion as well as doing action rescheduling. In an HA deployment there must be a minimum of 2 replicas of st2notifier running, requiring a coordination backend, which in our case is redis.


st2sensorcontainer manages StackStorm sensors: It starts, stops and restarts them as subprocesses. By default, deployment is configured with 1 replica containing all the sensors.

st2sensorcontainer also supports a more Docker-friendly single-sensor-per-container mode as a way of Partitioning Sensors. This distributes the computing load between many pods and relies on K8s failover/reschedule mechanisms, instead of running everything on a single instance of st2sensorcontainer. The sensor(s) must be deployed as part of the custom packs image.

As an example, override the default Helm values as follows (verify the key path against your chart version's values.yaml):

      st2:
        packs:
          sensors:
            - name: github
              ref: githubwebhook.GitHubWebhookSensor
            - name: circleci
              ref: circle_ci.CircleCIWebhookSensor


StackStorm workers that actually execute actions. 5 replicas for the K8s Deployment are configured by default to increase StackStorm's ability to execute actions without excessive queuing. Relies on redis for coordination. This is likely the first setting to raise if you have a lot of actions to execute per time period in your StackStorm cluster.
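Scaling the action runners is a single Helm value; for example (the replica count here is an arbitrary illustration):

```yaml
st2actionrunner:
  # Raise this if action executions are queuing up
  replicas: 10
```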


st2scheduler is responsible for handling ingress action execution requests.

2 replicas for K8s Deployment are configured by default to increase StackStorm scheduling throughput. Relies on database versioning for coordination.


Service that cleans up old executions and other operational data based on configured settings. A single st2garbagecollector replica for the K8s Deployment is enough, considering its periodic execution nature. By default this process does nothing and must be configured via st2.conf settings (in values.yaml). Purging stale data can significantly improve cluster performance, so configuring st2garbagecollector is recommended in production.
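A hedged st2.conf sketch of enabling garbage collection (the TTL values in days are examples; consult the st2.conf reference for the authoritative option names):

```ini
[garbagecollector]
# Purge action executions older than 30 days (example value)
action_executions_ttl = 30
# Purge trigger instances older than 30 days (example value)
trigger_instances_ttl = 30
```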


StackStorm ChatOps service, based on the hubot engine, a custom stackstorm integration module and a preinstalled list of chat adapters. Due to a Hubot limitation, st2chatops doesn’t provide mechanisms to guarantee high availability, so only a single node of st2chatops is deployed. This service is disabled by default. Please refer to the Helm values.yaml for how to enable and configure st2chatops with ENV vars for your preferred chat service.
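Enabling st2chatops might be sketched as follows (the adapter and token are placeholders; the exact env var names depend on the chat adapter you use):

```yaml
st2chatops:
  enabled: true
  env:
    HUBOT_ADAPTER: slack               # placeholder adapter
    HUBOT_SLACK_TOKEN: xoxb-CHANGE-ME  # placeholder token
```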

MongoDB HA ReplicaSet

StackStorm works with MongoDB as a database engine. An external Helm chart is used to configure the MongoDB HA ReplicaSet. By default 3 nodes (1 primary and 2 secondaries) of MongoDB are deployed via a K8s StatefulSet. For more advanced MongoDB configuration, refer to the official mongodb-replicaset Helm chart settings, which can be fine-tuned via values.yaml.

RabbitMQ HA Cluster

RabbitMQ is the message bus StackStorm relies on for inter-process communication and load distribution. An external Helm chart is used to deploy the RabbitMQ cluster in Highly Available mode. By default 3 nodes of RabbitMQ are deployed via a K8s StatefulSet. For more advanced RabbitMQ configuration, please refer to the official rabbitmq-ha Helm chart repository; all settings can be overridden via values.yaml.


StackStorm employs redis as a distributed coordination backend, required for st2 cluster components to work properly in an HA scenario. A 3-node cluster with Sentinel is deployed via the external official Helm chart dependency bitnami/redis. As with any other Helm dependency, it’s possible to further configure it for specific scaling needs via values.yaml.

Feedback Needed!

As this deployment method is new and the beta is in progress, we ask you to try it and provide your feedback via bug reports, ideas, feature requests or pull requests in StackStorm/stackstorm-ha. We also encourage discussions in the Slack #docker channel, or write us an email.