# System Requirements for an Existing Cluster Install
## System Requirements

Here are the system requirements for installing Turbine on an existing Kubernetes cluster:

- A Kubernetes 1.29.14, 1.30.10, 1.31.6, or 1.32.3 compliant cluster.
- A persistent and reliable StorageClass to use for the MongoDB and RabbitMQ pods' persistent volumes. The required filesystem for the MongoDB StorageClass is XFS.
- A 3+ node cluster for production environments, to provide redundancy and high availability (HA). Ensure all nodes are in the same cloud provider region or physical data center network; nodes behind different WAN links in the same cluster are not supported.
- A CPU architecture compatible with MongoDB, unless an external MongoDB solution is used. See the [MongoDB platform support matrix](https://www.mongodb.com/docs/v5.0/administration/production-notes/#platform-support-matrix) for more information.
- For production environments, snapshots set up and stored externally from the cluster to ensure that they can be retrieved in the event of a total cluster loss. See *Backup and Restore on an Existing Cluster with Snapshots* for more information on how to set up snapshots and compatible providers.
- A load balancer or ingress controller to load balance to the Turbine Platform UI. It must:
  - Be configured for TLS communication to the backend. If your load balancer or ingress controller requires explicit trust of the certificate presented by the backend service, you must upload a trusted certificate for the swimlane-web backend.
  - Serve the Turbine Platform UI externally on port 443. Accessing Turbine over any other port is not supported.
  - Support WebSockets.
- Velero 1.9.0 installed in the cluster to handle backups and restores with snapshots. See *Backup and Restore on an Existing Cluster with Snapshots* for instructions on installing Velero into your cluster.

## Limitations

- Changing annotations, labels, resources, node selector, tolerations, or affinity settings for the Turbine Platform Installer pods is not currently supported. These settings can, however, be set for the Turbine Platform pods on the Config page.
- The StorageClass for the Turbine Platform Installer pods is required to be the default and cannot currently be changed.
- Multiple TPI installs into the same cluster are not currently supported.

## Backup Requirements

Taking a snapshot requires enough free disk space for a compressed archive of the Swimlane database to be saved in ephemeral storage before it is uploaded to the snapshot destination. Free disk space on the cluster at /var/lib/kubelet should be greater than or equal to the size of the uncompressed database to ensure there is no disk pressure during the snapshot process.
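Before installing, you can spot-check several of the requirements above from a workstation with cluster access. This is a minimal sketch, assuming the `kubectl` and `velero` CLIs are installed and that you have shell access to the nodes; it only surfaces information, so interpret the output against the requirements and backup guidance above.

```bash
# Cluster version: compare against the supported Kubernetes releases
kubectl version

# StorageClasses available for the MongoDB and RabbitMQ persistent volumes
kubectl get storageclass

# Node count and status: production clusters need 3+ Ready nodes
kubectl get nodes -o wide

# Velero presence for snapshot-based backup and restore
velero version
kubectl get pods -n velero

# Run on each node: free space at /var/lib/kubelet for the snapshot archive
df -h /var/lib/kubelet
```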
## Additional Offline Install Requirements

In addition to the requirements above, installs into an offline (airgapped) Kubernetes cluster have the following additional requirements:

- A private Docker image registry accessible by the Kubernetes cluster and jumpbox is required for installation and ongoing operation. The registry needs to be trusted by your Kubernetes cluster in order to be able to pull images. See the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) for more information.
- A jumpbox that has access to the internet, access to the airgapped Kubernetes cluster, and a browser from which you can access the Turbine Platform Installer.

**Critical: Prerequisites for airgapped network installations.** For access to these optional services, ensure you have the following within the airgapped network (a connectivity check sketch follows this list):

- **LDAP login** requires an LDAP server inside the airgapped subnet, or access to outside subnets or to the IP or domain where the LDAP server resides so the required ports can be opened. Non-secure LDAP uses port 389; LDAPS (secure LDAP) uses port 636 and is preferred.
- **SSO login** requires the SSO service to be able to reach inside the airgapped network (where the Turbine instance resides).
- **Email** requires the Turbine instance to use a functioning mail proxy that resides within the airgapped subnet, access to outside subnets where an email server resides, or access to your chosen email server on the internet so the required ports can be opened. Non-secure SMTP uses port 25; secure SMTP uses port 587 and is preferred.
- **Threat intelligence enrichment** for the SOC solution requires access to the following threat intelligence URLs, each over TCP ports 443 and 80:
  - VirusTotal: https://virustotal.com/ and subdomains
  - URLhaus: https://urlhaus.abuse.ch/ and subdomains
  - Recorded Future: http://recordedfuture.com/ and subdomains
  - IPQualityScore: http://ipqualityscore.com/ and subdomains
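To verify these paths before an airgapped install, you can probe each port and endpoint from a host inside the airgapped network. This is a minimal sketch: the LDAP and SMTP hostnames below are hypothetical placeholders for your own servers, and `nc` (netcat) and `curl` are assumed to be available.

```bash
# Probe LDAP ports (389 = plain LDAP, 636 = LDAPS) against your own server
nc -vz ldap.example.internal 389
nc -vz ldap.example.internal 636

# Probe SMTP ports (25 = plain SMTP, 587 = secure submission) against your mail proxy
nc -vz smtp.example.internal 25
nc -vz smtp.example.internal 587

# Check that the threat intelligence endpoints answer over HTTPS (TCP 443) and HTTP (TCP 80)
for url in https://virustotal.com https://urlhaus.abuse.ch \
           http://recordedfuture.com http://ipqualityscore.com; do
  curl -sS -o /dev/null -w "%{http_code} $url\n" --connect-timeout 10 "$url"
done
```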
## Cluster Resources

Your cluster needs the following resources available to meet the usage thresholds for the given deployment sizes:

| Components | Single Node (Dev/Test) | Small | Medium | Large |
| --- | --- | --- | --- | --- |
| CPU | 8 CPU cores | 24 CPU cores | 48 CPU cores | 96 CPU cores |
| Memory | 32 GB RAM | 96 GB RAM | 192 GB RAM | 384 GB RAM |
| Storage | 600 GB SSD / 3000 IOPS per MongoDB pod persistent volume | 600 GB SSD / 3000 IOPS per MongoDB pod persistent volume | 1 TB SSD / 3000 IOPS per MongoDB pod persistent volume | 1 TB SSD / 3000 IOPS per MongoDB pod persistent volume |
| Record creation boundaries + active users | Records created in a day: 250,000<br>Total records: 5 million<br>Active users: 10 | Records created in a day: 500,000<br>Total records: 20 million<br>Active users: 30 | Records created in a day: 1 million<br>Total records: 20 million<br>Active users: 50 | Records created in a day: 1 million<br>Total records: 20 million<br>Active users: 200 |
| Integration calculations | Integrations in use: < 20<br>Average integration actions/day: < 250,000 | Integrations in use: < 20<br>Average integration actions/day: < 500,000 | Integrations in use: < 20<br>Average integration actions/day: < 1 million | Integrations in use: > 20<br>Average integration actions/day: < 1 million |
| Pods | API: 1<br>Tasks: 1<br>Web: 1<br>Reports: 1<br>MongoDB: 1 | API: 3<br>Tasks: 3<br>Web: 3<br>Reports: 3<br>MongoDB: 3 | API: 3<br>Tasks: 3<br>Web: 3<br>Reports: 3<br>MongoDB: 3 | API: 6<br>Tasks: 9<br>Web: 3<br>Reports: 3<br>MongoDB: 3 |

## External MongoDB Resource Recommendations

The following table illustrates the resource recommendations (per node) for a standalone MongoDB deployment. All of these values can be subtracted from the cluster resources above when allocating resources for the remainder of the Turbine pods. For more information about deploying on an external MongoDB cluster, see *Deploy with an External MongoDB Cluster*.

| Components | Single Node | Small | Medium | Large |
| --- | --- | --- | --- | --- |
| CPU | 4 CPU cores | 4 CPU cores | 8 CPU cores | 8 CPU cores |
| RAM | 16 GB RAM | 16 GB RAM | 16 GB RAM | 32 GB RAM |
| Storage | 300 GB SSD / 3000 IOPS per MongoDB pod persistent volume | 300 GB SSD / 3000 IOPS per MongoDB pod persistent volume | 700 GB SSD / 3000 IOPS per MongoDB pod persistent volume | 700 GB SSD / 3000 IOPS per MongoDB pod persistent volume |

## Remaining Turbine Cluster Resources

This table illustrates the resources necessary for the remainder of Turbine if you are using external MongoDB resources:

| Components | Single Node | Small | Medium | Large |
| --- | --- | --- | --- | --- |
| CPU | 4 CPU cores | 12 CPU cores | 24 CPU cores | 72 CPU cores |
| RAM | 16 GB RAM | 48 GB RAM | 144 GB RAM | 288 GB RAM |
| Storage | 300 GB SSD / 3000 IOPS per node | 300 GB SSD / 3000 IOPS per node | 300 GB SSD / 3000 IOPS per node | 300 GB SSD / 3000 IOPS per node |

## Optional Pods

If you require the use of these optional pods, you will need the following additional resources available:

| Pod | Small | Medium | Large |
| --- | --- | --- | --- |
| Turbine Logger | 1 CPU core<br>1 GB RAM | 4 CPU cores<br>1 GB RAM | 5 CPU cores<br>2 GB RAM |

See *Pod Requests and Limits* for a breakdown of resources for each pod type, including optional pods not included in the table above.

**Critical: CPU architecture prerequisites.** CPU architecture must be compatible with MongoDB, unless an external MongoDB solution is used. See the [MongoDB platform support matrix](https://www.mongodb.com/docs/v7.0/administration/production-notes/#platform-support-matrix) for more information. To outline some important requirements noted in the MongoDB documentation:

- The [AVX](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX) CPU instruction set is required to run MongoDB.
- NUMA (non-uniform memory access) needs to be disabled.
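You can confirm both of these on each node before installing. This is a minimal sketch for a Linux node; how NUMA is disabled (for example, in BIOS/UEFI settings) varies by hardware, so these commands only inspect the current state.

```bash
# Verify the CPU advertises the AVX instruction set required by MongoDB
grep -m1 -o 'avx' /proc/cpuinfo || echo "AVX not found"

# Inspect the NUMA layout; more than one NUMA node means NUMA is in effect
lscpu | grep -i 'numa'
```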
## Exceptions to Network Access Control Lists (ACL)

Swimlane installations on servers with tight network access control (NAC) will need several exceptions made to properly install, license, and stage a Swimlane deployment with the Swimlane Platform Installer. See the tables below for the outbound and inbound exceptions required.

### Outbound URL ACL Exceptions

| Exception | Purpose |
| --- | --- |
| get.swimlane.io | Swimlane Platform installation script |
| k8s.kurl.sh | Swimlane Platform installation script |
| kurl.sh | Swimlane Platform installation script |
| kurl-sh.s3.amazonaws.com | Swimlane Platform installation script dependencies |
| registry.replicated.com | Swimlane Platform container images |
| proxy.replicated.com | Swimlane Platform container images |
| k8s.gcr.io | Swimlane Platform container dependency images |
| ghcr.io | Swimlane Platform container dependency images |
| registry.k8s.io | Swimlane Platform container dependency images |
| storage.googleapis.com | Swimlane Platform container dependency images |
| quay.io | Swimlane Platform container dependency images |
| replicated.app | Swimlane Platform Installer license verification |
| auth.docker.io | Docker authentication |
| registry-1.docker.io | Docker registry |
| production.cloudflare.docker.com | Docker infrastructure |
| files.pythonhosted.org | Python packages for Swimlane integrations |
| pypi.org | Python packages for Swimlane integrations |
| `<LoadBalancerIP>:6443` | Kubernetes API |

### Port Requirements

| Ports | Protocol | Purpose | Access From |
| --- | --- | --- | --- |
| 443 | TCP | Swimlane Platform UI | Clients that need to access the Swimlane Platform UI |
| 80 | TCP | Swimlane Platform UI | Optional HTTP-to-HTTPS redirect for the Swimlane Platform UI |
| 8800 | TCP | Swimlane Platform Installer UI | Clients that need to access the Swimlane Platform Installer UI |
| 22 | TCP | Shell access | Management workstations to manage the cluster nodes and install Swimlane |

## External Monitoring Considerations

Swimlane recommends that any TPI installation has some amount of external monitoring set up, configured to alert when user-defined thresholds are met, to catch possible cases of your production instances going down. Different installation scenarios may call for different metrics to monitor, so your implementation may vary. As a baseline, the following metrics are recommended for nodes that are running Turbine pods:

- Are all nodes healthy?
- Does each node have sufficient free space in all partitions?
- Are CPU and memory usage levels within acceptable levels?
- Is disk latency within acceptable ranges?
- Are any pods in a not-ready state?
- Are any Deployments or StatefulSets not reporting the correct number of ready replicas?
- Do any load balancers have health checks in place? Are they healthy?

## Third-Party Monitoring Solutions

There are several third-party monitoring solutions that you can use to monitor resource usage for a node. Any tool that you put into use should be installed externally to the cluster, so that it does not interfere with cluster operations and can still alert if a metric enters a failing state. These products may require that their own agents or exporters be installed on the nodes in order to facilitate monitoring. Any agent or exporter should be tested against your cluster to validate that it does not interfere with Turbine operations.
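While you evaluate monitoring tools, many of these baseline questions can be spot-checked manually with kubectl. This is a minimal sketch, assuming the Kubernetes metrics server is installed for `kubectl top`; it is not a substitute for an external monitoring system, which should live outside the cluster as described above.

```bash
# Node health and resource pressure
kubectl get nodes
kubectl top nodes   # requires the Kubernetes metrics server

# Pods that are not in a Running or Succeeded phase
kubectl get pods --all-namespaces \
  --field-selector=status.phase!=Running,status.phase!=Succeeded

# Deployments and StatefulSets with their ready replica counts
kubectl get deployments,statefulsets --all-namespaces

# Free space in each node's partitions (run on each node)
df -h
```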