Infrastructure Examples
GCP TCP Load Balancer
This topic explains how to use a GCP TCP load balancer (Layer 4) for your Turbine deployment.

Architecture Diagram

Create an Unmanaged Instance Group

To create an unmanaged instance group (https://cloud.google.com/compute/docs/instance-groups?hl=en):

- Set a name for the instance group.
- Set the region and the zone to where the Turbine VM instances live.
- Set the network to where the Turbine VM instances live.
- Add your first Swimlane primary in the VM instances drop-down.

Create a TCP Load Balancer

To create a TCP load balancer (https://cloud.google.com/load-balancing/docs/network):

- For "Internet facing or internal only", select "From Internet to my VMs".
- For "Multiple regions or single region", select "Single region only".
- For "Backend type", select "Backend service".

Create a backend configuration:

- Set the region to where the Turbine instances live.
- Select your instance group that contains the first Turbine TPI VM.
- Under the Health check drop-down, select an existing health check or create a new one:
  - Set a name for the health check.
  - Set the protocol to TCP.
  - Set the port to 6443.
  - Unless your organization's requirements dictate otherwise, leave the defaults for the remaining settings.

Create a frontend configuration:

- Set the network service tier per your requirements.
- Create a new reserved IP address or use an existing one.
- Set Ports to Multiple.
- Set the ports to 6443,8800,443,80.

Create the load balancer. After Turbine has been installed on the additional nodes, they need to be added to this instance group.

Notes:

- After the initial install is done, join any additional primaries to the Turbine cluster before adding them to the instance group in use by your load balancer, to ensure the join script runs successfully.
- If you want a multi-zonal deployment, you can create additional unmanaged instance groups and put your other Turbine primaries in them.

Adding Backend Configuration for NodePort Services

Due to GCP's workaround for hairpinning, traffic may blackhole when attempting to access NodePorts through the
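The console steps above can also be sketched with the gcloud CLI. This is a non-authoritative provisioning outline, not part of the official instructions: the resource names (`turbine-ig`, `turbine-hc`, `turbine-backend`, `turbine-lb-ip`, `turbine-fr`), the zone/region (`us-central1-a`/`us-central1`), and the instance name `turbine-primary-1` are all assumptions you would replace with your own values.

```shell
# Assumed names, zone, and region -- adjust to your environment.
# Create the unmanaged instance group and add the first Turbine primary.
gcloud compute instance-groups unmanaged create turbine-ig --zone us-central1-a
gcloud compute instance-groups unmanaged add-instances turbine-ig \
    --zone us-central1-a --instances turbine-primary-1

# TCP health check on port 6443 (the Kubernetes API port), regional to
# match the backend service.
gcloud compute health-checks create tcp turbine-hc \
    --port 6443 --region us-central1

# Regional external TCP backend service using that health check.
gcloud compute backend-services create turbine-backend \
    --protocol TCP --region us-central1 \
    --load-balancing-scheme EXTERNAL \
    --health-checks turbine-hc --health-checks-region us-central1
gcloud compute backend-services add-backend turbine-backend \
    --region us-central1 \
    --instance-group turbine-ig --instance-group-zone us-central1-a

# Reserve a static IP and create the frontend forwarding rule on the
# required ports.
gcloud compute addresses create turbine-lb-ip --region us-central1
gcloud compute forwarding-rules create turbine-fr \
    --region us-central1 --address turbine-lb-ip \
    --ip-protocol TCP --ports 6443,8800,443,80 \
    --load-balancing-scheme EXTERNAL \
    --backend-service turbine-backend
```

These commands require an authenticated gcloud session with a default project set, so treat them as a template rather than a runnable script.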
load balancer. This happens because GCP automatically routes traffic destined for the load balancer to the loopback address of the VM the request was forwarded to, and kube-proxy does not listen on localhost. To work around this and successfully access NodePorts through the load balancer, create an alias for the primary network interface that resolves to the load balancer's IP address, for example `ifconfig eth0:0 <lb ip> netmask 255.255.255.255 up`, on each node in the Turbine cluster. To persist this change, add it to your network interfaces configuration file.

Firewall Rules

For GCP load balancers, ingress port access is defined in the Firewall section of your GCP project's VPC network. For more information about the port requirements, see System Requirements for an Embedded Cluster Install (docid 9lxricxlm1t14ydlkt4zr).

Turbine Configuration

Be sure to enable the "Enable the Ingress Controller" option on the Turbine Platform Installer UI Config tab.
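As one possible way to persist the interface alias, on a Debian-style node using ifupdown the alias could be declared in `/etc/network/interfaces`. This is an illustrative fragment only; distributions using netplan, NetworkManager, or ifcfg files have their own equivalents, and `<lb ip>` stands for your load balancer's reserved IP address.

```
# /etc/network/interfaces fragment (Debian/ifupdown example, assumed layout)
# Alias the primary NIC to the load balancer IP so NodePort traffic
# forwarded back to this node is accepted locally.
auto eth0:0
iface eth0:0 inet static
    address <lb ip>
    netmask 255.255.255.255
```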