System Requirements for an Embedded Cluster Install
While the Swimlane platform can be installed on a single node for testing, a 3+ node cluster is recommended for production environments to provide redundancy and high availability (HA). Any multiple-node cluster must have an odd number of total nodes.

This table details the recommended sizing and data per node.

| Components | Single Node | 3 Node Cluster (Small) | 3 Node Cluster (Medium) | 3 Node Cluster (Large) |
| --- | --- | --- | --- | --- |
| CPU | 8 CPU cores | 8 CPU cores | 16 CPU cores | 32 CPU cores |
| Memory | 32 GB RAM | 32 GB RAM | 64 GB RAM | 128 GB RAM |
| Storage | 600 GB SSD / 3000 IOPS per node | 600 GB SSD / 3000 IOPS per node | 1 TB SSD / 3000 IOPS per node | 1 TB SSD / 3000 IOPS per node |
| Record creation boundaries + active users | Records created in a day: 250,000. Total records: 5 million. Active users: 10 | Records created in a day: 500,000. Total records: 20 million. Active users: 30 | Records created in a day: 1 million. Total records: 20 million. Active users: 50 | Records created in a day: 1 million. Total records: 20 million. Active users: 200 |
| Integration calculations | Integrations in use: < 20. Average integration actions/day: < 250,000 | Integrations in use: < 20. Average integration actions/day: < 500,000 | Integrations in use: < 20. Average integration actions/day: < 1 million | Integrations in use: > 20. Average integration actions/day: < 1 million |
| Pods | API: 1, Tasks: 1, Web: 1, MongoDB: 1, Reports: 1 | API: 3, Tasks: 3, Web: 3, MongoDB: 3, Reports: 3 | API: 3, Tasks: 3, Web: 3, MongoDB: 3, Reports: 3 | API: 6, Tasks: 9, Web: 3, MongoDB: 3, Reports: 3 |

Note: Swimlane does not support spinning disks.

External MongoDB Resource Recommendations

This table illustrates the resource recommendations (per node) for a standalone Mongo deployment. All of these values can be subtracted from the system requirements above when allocating resources for the remainder of the Swimlane pods. For more information about deploying on an external MongoDB cluster, see Deploy with an External MongoDB Cluster.

| Components | Single Node | 3 Node Cluster (Small) | 3 Node Cluster (Medium) | 3 Node Cluster (Large) |
| --- | --- | --- | --- | --- |
| CPU | 4 CPU cores | 4 CPU cores | 8 CPU cores | 8 CPU cores |
| RAM | 16 GB RAM | 16 GB RAM | 16 GB RAM | 32 GB RAM |
| Storage | 300 GB SSD / 3000 IOPS per node | 300 GB SSD / 3000 IOPS per node | 700 GB SSD / 3000 IOPS per node | 700 GB SSD / 3000 IOPS per node |

Remaining Swimlane Cluster Resources

This table illustrates the resources necessary for the remainder of Swimlane if you are using external MongoDB resources.

| Components | Single Node | 3 Node Cluster (Small) | 3 Node Cluster (Medium) | 3 Node Cluster (Large) |
| --- | --- | --- | --- | --- |
| CPU | 4 CPU cores | 4 CPU cores | 8 CPU cores | 24 CPU cores |
| RAM | 16 GB RAM | 16 GB RAM | 48 GB RAM | 96 GB RAM |
| Storage | 300 GB SSD / 3000 IOPS per node | 300 GB SSD / 3000 IOPS per node | 300 GB SSD / 3000 IOPS per node | 300 GB SSD / 3000 IOPS per node |

Resource Utilization Thresholds

All nodes need to stay below certain resource utilization thresholds to ensure that pods always have available resources to operate in. If any of the following thresholds are exceeded on a node, all pods on that node will be removed until the resource utilization is addressed.

| Resource | Threshold |
| --- | --- |
| Memory | Less than 100 MiB available |
| Disk space (/var/lib/containerd partition) | Less than 15% available |
| Disk space (/var/lib/kubelet partition) | Less than 10% available |
| Disk inodes (/var/lib/kubelet partition) | Less than 5% available |

Backup Requirements

Taking a snapshot requires enough free disk space for a compressed archive of the Swimlane database to be saved in ephemeral storage before it is uploaded to the snapshot destination. Free disk space on the cluster at /var/lib/kubelet should be greater than or equal to the size of the uncompressed database to ensure there is no disk pressure during the snapshot process.
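One way to spot-check a node against the thresholds and snapshot guidance above is to inspect memory, disk, and inode headroom directly and to approximate random IOPS with fio. This is only an illustrative sketch, assuming fio is available on the node; the test file path under /var/lib/kubelet and the fio parameters are examples and should be tuned for your environment.

```bash
free -m                                       # available memory should stay well above 100 MiB
df -h /var/lib/containerd /var/lib/kubelet    # free space vs. the 15% / 10% thresholds above
df -i /var/lib/kubelet                        # inode headroom vs. the 5% threshold above

# Approximate random read/write IOPS (writes a 1 GiB test file; remove it afterwards)
sudo fio --name=iops-check --filename=/var/lib/kubelet/fio-test --size=1G \
  --rw=randrw --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
  --runtime=60 --time_based --group_reporting
sudo rm -f /var/lib/kubelet/fio-test
```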
Cloud Service Sizing Tools

| Sizing Calculators | Sizing Guides |
| --- | --- |
| AWS instance sizing calculators | AWS instance sizing |
| Azure instance sizing calculators | Azure instance sizing |
| GCP instance sizing calculators | GCP instance sizing |

Most cloud providers limit IOPS for disks and for instances/virtual machines. Consult your provider's documentation to ensure the effective IOPS for the cluster nodes meet the requirements in the sizing table above.

| Provider | Documentation |
| --- | --- |
| AWS | AWS disk performance limits; AWS instance performance limits |
| Azure | Azure disk performance limits; Azure VM performance limits |
| GCP | GCP disk performance limits; GCP VM performance limits |

Critical Prerequisites

The following operating systems have been tested by Swimlane for your node setup.

| Operating System | Tested Versions |
| --- | --- |
| RHEL 8 | 8.6, 8.9, 8.10 |
| RHEL 9 | 9.2, 9.4 |
| Rocky Linux 9 | 9.2, 9.4 |
| Ubuntu | 20.04.6 LTS, 22.04.2 LTS, 24.04 LTS |
| Amazon Linux | Amazon Linux 2 |

Prerequisites for Airgap Installation on Rocky Linux or RHEL 9.x

Before an airgap installation, make sure the following packages are installed: nfs-utils, conntrack-tools, socat, git, and fio. Use the following command to install them (along with container-selinux, tar, zip, and unzip):

```bash
sudo dnf -y install nfs-utils conntrack-tools socat git fio container-selinux tar zip unzip
```

For Rocky Linux 9.2, run the following command to change the file permissions:

```bash
sudo chmod 755 /etc/rc.d/rc.local
```

For each node, whether you are setting up a single-node or HA installation, ensure you have:

- sudo/root access
- NUMA (non-uniform memory access) disabled
- Accurate system time. To maintain accurate system time you must have Network Time Protocol (NTP) or a similar time-sync service.
- Static IPs (dynamic IPs are not supported)
- Static node hostnames (node hostnames cannot change)
- No previous installs of Kubernetes, Docker, or containerd
- Automatic updates disabled for Kubernetes and Docker related packages. Updates to these packages will be handled through the Swimlane Platform Installer.
- No IPv6 or dual-stack networking (IPv6 and dual-stack networks are not supported)
- IP forwarding enabled. The installer can handle enabling IPv4 forwarding, but you must ensure it remains enabled on all cluster nodes going forward, even after reboot.

Critical CPU Architecture Prerequisites

The CPU architecture must be compatible with MongoDB, unless an external Mongo solution is used. See the MongoDB platform support matrix (https://www.mongodb.com/docs/v5.0/administration/production-notes/#platform-support-matrix) for more information.

Critical Network Prerequisites

The IP ranges 10.32.0.0/20 and 10.96.0.0/22 are the default IP ranges used internally in the cluster for Kubernetes service and pod networking. If these are in use in your network and routable by the cluster nodes, the internal cluster IP ranges will need to be overridden. See Define Custom Pod and Service Subnets for instructions on overriding the internal cluster IP ranges.

Ensure all nodes are in the same cloud provider region or physical data center network. Nodes behind different WAN links in the same cluster are not supported.
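Several of the prerequisites above can be spot-checked from a shell on each node before installation. This is an illustrative sketch, not an official preflight tool: it assumes RHEL/Rocky-style utilities (timedatectl, sysctl, lscpu), the sysctl drop-in file name is arbitrary, and the final grep is only a crude string match against the default cluster CIDRs rather than a true overlap calculation.

```bash
# Time sync (NTP or similar) -- expect "System clock synchronized: yes"
timedatectl status | grep -i 'system clock synchronized'

# IPv4 forwarding -- expect "net.ipv4.ip_forward = 1"; persist it so it survives reboots
sysctl net.ipv4.ip_forward
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system

# NUMA -- a single NUMA node means NUMA is effectively not in play
lscpu | grep -i 'numa node(s)'

# MongoDB CPU compatibility -- recent MongoDB releases on x86_64 expect AVX support
grep -m1 -o 'avx' /proc/cpuinfo || echo 'no AVX support detected'

# Crude check for routes overlapping the default cluster ranges (10.32.0.0/20, 10.96.0.0/22)
ip route | grep -E '10\.32\.|10\.96\.' || echo 'no obviously conflicting routes found'
```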
Critical Load Balancer Prerequisites

A load balancer that supports hairpinning is required for HA installations. Here are some suggested load balancers and configurations that you can use.

Layer 7 load balancers:

- AWS Application Load Balancer
- Azure Standard Application Gateway
- Azure Standard v2 Application Gateway
- GCP HTTPS Load Balancer
- HAProxy Load Balancer

TCP forwarding (layer 4) load balancers:

- AWS Network Load Balancer
- Azure Load Balancer
- GCP TCP Load Balancer
- HAProxy Load Balancer

Extra Considerations for Load Balancer Setup on Nodes

Each node in the cluster needs to be reachable by the load balancer on these ports:

- TCP 443 (Swimlane Platform UI)
- TCP 8800 (Swimlane Platform Installer UI)
- TCP 6443 (Kubernetes API)
- TCP 80 (optional HTTP-to-HTTPS redirect for the Swimlane Platform UI)

A load balancer is required for the Kubernetes API and the Swimlane Platform Installer. If you are using a layer 4 load balancer for the Swimlane Platform Installer and the Swimlane Platform, it can be combined with the Kubernetes API load balancer so that only one is required.

The Kubernetes API load balancer must be a layer 4 load balancer.

- The front-end port on the load balancer should be 6443 (Kubernetes control plane).
- The back-end port on the load balancer to the nodes should be 6443.

The Swimlane Platform Installer and Swimlane Platform load balancer can be either a layer 4 or layer 7 load balancer.

- Front-end ports on the load balancer should be 443 (Swimlane Platform) and 8800 (Swimlane Platform Installer).
- Back-end ports on the load balancer map differently based on load balancer type:
  - For layer 7 load balancers: 443 should map to 4443, and 8800 should map to 8800.
  - For layer 4 load balancers: 443 should map to 443, and 8800 should map to 8800.
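To illustrate the layer 4 mappings above, a minimal HAProxy configuration fragment might look like the following. The node addresses (10.0.0.11-10.0.0.13) are placeholders and the global and tuning sections are omitted; treat this as a sketch and refer to the HAProxy Load Balancer guide referenced above for Swimlane's documented configuration.

```
# /etc/haproxy/haproxy.cfg (fragment) -- layer 4 (TCP) forwarding for an HA cluster
defaults
    mode    tcp
    timeout connect 5s
    timeout client  300s
    timeout server  300s

frontend swimlane_platform
    bind *:443
    default_backend swimlane_platform_nodes
backend swimlane_platform_nodes
    balance roundrobin
    server node1 10.0.0.11:443 check
    server node2 10.0.0.12:443 check
    server node3 10.0.0.13:443 check

frontend swimlane_installer
    bind *:8800
    default_backend swimlane_installer_nodes
backend swimlane_installer_nodes
    server node1 10.0.0.11:8800 check
    server node2 10.0.0.12:8800 check
    server node3 10.0.0.13:8800 check

frontend kubernetes_api
    bind *:6443
    default_backend kubernetes_api_nodes
backend kubernetes_api_nodes
    server node1 10.0.0.11:6443 check
    server node2 10.0.0.12:6443 check
    server node3 10.0.0.13:6443 check
```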
Other Critical Prerequisites

For production environments, use an Amazon S3 bucket or S3-compatible storage solution to store snapshots externally so that a total cluster outage does not result in a loss of snapshots. If Amazon S3 is unavailable for use, then an S3-compatible solution like MinIO (https://min.io/) can be used, but it should be installed external to the cluster.

In order to maximize disk space, Swimlane recommends that you set up partitions. Here are the minimum recommended partition sizes.

| Partition | Size | Description of Highest Storage Consumers by Function | Storage Concerns and What to Look Out For |
| --- | --- | --- | --- |
| / | 50 GB | Base OS, installation files, logs, and other smaller cluster dependencies | Local logs storage and log rotation policies |
| /var/lib/kurl | 100 GB | Dependencies for the kURL Kubernetes distribution | Written and read from during kURL installs and upgrades |
| /var/lib/containerd | 100 GB | Container images, logs, and runtime volumes | Docker image growth (unused images get pruned when the default imagefs threshold of 15% is reached) and container log growth |
| /var/lib/kubelet | 100 GB | Kubernetes runtime, ephemeral storage, and in-flight snapshot temporary storage | Snapshot size and pod temporary storage increasing |
| /var/openebs | 300 GB | Swimlane Platform Installer database and completed local snapshots | Persistent storage subsystem used for database volumes; amount of Swimlane data and integrations; local storage of snapshots (see note below) |

Note: If you plan to use a separate disk or volume for /var/openebs, ensure it meets the minimum IOPS defined in the table with recommended data and sizing at the beginning of this topic.

Domain Name System (DNS)

Use the IP or DNS record for the Kubernetes API load balancer to specify the load balancer address during the initial installation.

A DNS record should be created that points to the Swimlane Platform Installer and Swimlane Platform load balancer. This will be the DNS address specified when you configure Swimlane after the installation. Append that address with :8800 in order to access the Swimlane Platform Installer UI. If only one layer 4 load balancer is used, then a single DNS record can be created and used for both purposes.

You must use DNS-compliant records. A DNS record can be up to 63 characters long and can only contain letters, numbers, and hyphens. The record cannot start or end with a hyphen, nor have consecutive hyphens.

Exceptions to Network Access Control Lists (ACL)

Swimlane installations on servers with tight network access control (NAC) will need several exceptions made to properly install, license, and stage a Swimlane deployment with the Swimlane Platform Installer. See the tables below for the outbound and inbound exceptions required.

Outbound URL ACL Exceptions

| Exception | Purpose |
| --- | --- |
| get.swimlane.io | Swimlane Platform installation script |
| k8s.kurl.sh | Swimlane Platform installation script |
| kurl.sh | Swimlane Platform installation script |
| kurl-sh.s3.amazonaws.com | Swimlane Platform installation script dependencies |
| registry.replicated.com | Swimlane Platform container images |
| proxy.replicated.com | Swimlane Platform container images |
| k8s.gcr.io | Swimlane Platform container dependency images |
| ghcr.io | Swimlane Platform container dependency images |
| registry.k8s.io | Swimlane Platform container dependency images |
| storage.googleapis.com | Swimlane Platform container dependency images |
| quay.io | Swimlane Platform container dependency images |
| replicated.app | Swimlane Platform Installer license verification |
| auth.docker.io | Docker authentication |
| registry-1.docker.io | Docker registry |
| production.cloudflare.docker.com | Docker infrastructure files |
| pythonhosted.org | Python packages for Swimlane integrations |
| pypi.org | Python packages for Swimlane integrations |
| <LoadBalancerIP>:6443 | Kubernetes API |

Port Requirements

External Access

The following ports are required externally to allow access to the Swimlane Platform components.

| Ports | Protocol | Purpose | Access From |
| --- | --- | --- | --- |
| 443 | TCP | Swimlane Platform UI | Clients that need to access the Swimlane Platform UI |
| 80 | TCP | Swimlane Platform UI | Optional HTTP-to-HTTPS redirect for the Swimlane Platform UI |
| 8800 | TCP | Swimlane Platform Installer UI | Clients that need to access the Swimlane Platform Installer UI |
| 22 | TCP | Shell access | Management workstations to manage the cluster nodes and install Swimlane |

Between Cluster Nodes

The following ports are required between the cluster nodes to allow cluster operation.

| Ports | Protocol | Purpose |
| --- | --- | --- |
| 2379-2380 | TCP | Kubernetes etcd |
| 6443 | TCP | Kubernetes API |
| 8472 | UDP | Kubernetes CNI |
| 10250-10252 | TCP | Kubernetes components (kubelet, kube-scheduler, kube-controller) |

From Load Balancers

The following ports are required between the cluster nodes and the load balancer(s).

| Ports | Protocol | Purpose |
| --- | --- | --- |
| 443 | TCP | Swimlane Platform UI |
| 80 | TCP | Optional HTTP-to-HTTPS redirect for the Swimlane Platform UI |
| 8800 | TCP | Swimlane Platform Installer UI |
| 6443 | TCP | Kubernetes API |

Available Ports

In addition to all ports listed above in the Between Cluster Nodes table, the following ports are required to be available and unused by other processes on each cluster node in order to install.

| Ports | Purpose |
| --- | --- |
| 2381 | Kubernetes etcd |
| 8472 | Kubernetes CNI |
| 10248 | Kubernetes kubelet health server |
| 10249 | Kubernetes kube-proxy metrics server |
| 10257 | Kubernetes kube-controller-manager health server |
| 10259 | Kubernetes kube-scheduler health server |
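If the cluster nodes run firewalld (the default on RHEL and Rocky Linux), the node-to-node and load balancer ports listed above could be opened with rules along these lines; this is a sketch only, and your security team may prefer to restrict the rules to specific source addresses or zones. The final ss command is a quick way to confirm that the ports in the Available Ports table are not already bound by other processes.

```bash
sudo firewall-cmd --permanent --add-port=2379-2380/tcp    # Kubernetes etcd
sudo firewall-cmd --permanent --add-port=6443/tcp         # Kubernetes API
sudo firewall-cmd --permanent --add-port=8472/udp         # Kubernetes CNI
sudo firewall-cmd --permanent --add-port=10250-10252/tcp  # kubelet, kube-scheduler, kube-controller
sudo firewall-cmd --permanent --add-port=443/tcp          # Swimlane Platform UI
sudo firewall-cmd --permanent --add-port=80/tcp           # optional HTTP-to-HTTPS redirect
sudo firewall-cmd --permanent --add-port=8800/tcp         # Swimlane Platform Installer UI
sudo firewall-cmd --reload

# Confirm the ports that must remain unused are not already bound
sudo ss -tulnp | grep -E ':(2381|8472|10248|10249|10257|10259)\b' || echo 'required ports are free'
```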
External Monitoring Considerations

Swimlane recommends that any SPI installation has some amount of external monitoring set up, configured to alert when user-defined thresholds are met, so that you are warned of possible outages of your production instance. Different installation scenarios may call for different metrics to monitor, so your implementation may vary. As a baseline, the following checks are recommended (a scripted example of some of them appears at the end of this topic):

- Are all nodes healthy?
- Does each node have sufficient free space in all partitions?
- Are CPU and memory usage levels within acceptable levels?
- Is disk latency within acceptable ranges?
- Are any pods in a not-ready state?
- Are any Deployments or StatefulSets not reporting the correct number of ready replicas?
- Do any load balancers have health checks in place? Are they healthy?

Third-Party Monitoring Solutions

There are several third-party monitoring solutions that you can use to monitor resource usage for a node. Any tool that you put into use should be installed externally to the cluster, so as not to interfere with cluster operations and to be able to alert if a metric enters a failing scenario. These products may require that their own agents or exporters be installed on the nodes in order to facilitate monitoring. Any agent or exporter should be tested against your cluster to validate that it does not interfere with Swimlane operations or port requirements.
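As a simple illustration of the baseline checks above, the following sketch could run from a monitoring host outside the cluster. It assumes a kubeconfig for the cluster and SSH access to the nodes, and the node names (node1, node2, node3) are placeholders; in practice these signals are usually collected by a dedicated monitoring product rather than an ad hoc script.

```bash
#!/usr/bin/env bash
# Illustrative baseline health checks, run from outside the cluster.

# Are all nodes healthy?
kubectl get nodes --no-headers | awk '$2 != "Ready" {print "Node not Ready: "$1}'

# Are any pods in a not-ready state?
kubectl get pods -A --no-headers | \
  awk '$4 != "Running" && $4 != "Completed" {print "Pod not healthy: "$1"/"$2" ("$4")"}'

# Are Deployments and StatefulSets reporting the expected number of ready replicas?
kubectl get deployments,statefulsets -A

# Does each node have sufficient free space in the partitions that matter?
for node in node1 node2 node3; do
  ssh "$node" 'hostname; df -h / /var/lib/containerd /var/lib/kubelet /var/openebs'
done
```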