Google Kubernetes Engine Cluster

To determine the Kubernetes version to use when deploying the ArcSight Platform to Google Cloud, check the Hybrid Cloud Support page of the Technical Requirements for ArcSight Platform 23.3.

Upon successful completion of this procedure, you will have a properly configured GKE cluster on which the container images can be deployed to obtain the desired ArcSight capabilities (such as Transformation Hub).

Note: Provisioning your GKE cluster for ArcSight can be a challenging task given all the options and configurations that need to be considered. Check the Google Cloud documentation for:

gcloud container clusters create

as this command supports many different options.

The dataplane-v2 option must remain DISABLED (the default setting). If it is enabled, the cluster's NodePort services will not work; see Limitations.
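
After the cluster has been created, you can confirm that Dataplane V2 is disabled by checking the datapath provider. This is a minimal check, assuming your gcloud release exposes the networkConfig.datapathProvider field; an empty value or LEGACY_DATAPATH means Dataplane V2 is disabled:

gcloud container clusters describe "<CLUSTER_NAME>" --region "<REGION>" \
--format="value(networkConfig.datapathProvider)"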

Provision a GKE Cluster with Google Cloud commands

If your deployment requires enabling the pod logs, check Configuring Cloud Operations for GKE for information on how to enable the available logs.
If you are deploying the Intelligence capability, ensure that all the nodes are deployed in the same zone by updating the --node-locations parameter, as shown in the sketch below.
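
For example, to place all worker nodes in a single zone, pass only one zone and adjust --num-nodes (which sets the node count per zone) as needed for your deployment. A minimal sketch using the placeholders from the command below:

--node-locations "<ZONE_1>"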

The command below is provided as guidance: mandatory settings are listed with their required values. Replace the remaining variables (indicated between angle brackets) with the values corresponding to your deployment before executing the command.

gcloud container clusters create "<CLUSTER_NAME>" \
--project "<PROJECT_ID>"  \
--zone "<REGION>" \
--no-enable-basic-auth \
--cluster-version "<GKE_VERSION>" \
--release-channel "None" \
--machine-type "<VM_TYPE>" \
--image-type "UBUNTU_CONTAINERD" \
--disk-type "pd-balanced" \
--disk-size "100" \
--node-labels Worker=label,role=loadbalancer,node.type=worker,<NODE_LABELS> \
--metadata disable-legacy-endpoints=true \
--service-account "<SERVICE_ACCOUNT>" \
--num-nodes "1" \
--logging=NONE \
--monitoring=NONE \
--enable-private-nodes \
--enable-private-endpoint \
--master-ipv4-cidr "<MASTER_CIDR_RANGE>" \
--enable-master-global-access \
--enable-ip-alias \
--network "<VPC_NAME>" \
--subnetwork "<PRIVATE_SUBNET>" \
--no-enable-intra-node-visibility \
--default-max-pods-per-node "110" \
--enable-master-authorized-networks \
--master-authorized-networks <MANAGEMENT_SUBNET_CIDR> \
--addons HorizontalPodAutoscaling,HttpLoadBalancing,NodeLocalDNS,GcePersistentDiskCsiDriver \
--no-enable-autoupgrade \
--no-enable-autorepair \
--max-surge-upgrade 1 \
--max-unavailable-upgrade 0 \
--no-enable-managed-prometheus \
--enable-shielded-nodes \
--enable-l4-ilb-subsetting \
--node-locations "<ZONE_1>","<ZONE_2>","<ZONE_3>"

Where:

<CLUSTER_NAME> is the value decided upon during the deployment planning meeting (check the Google Cloud worksheet)

<PROJECT_ID> is the Google Cloud project ID to use for this invocation (check the Google Cloud worksheet)

<REGION> is the cluster compute region (check the Google Cloud worksheet)

<GKE_VERSION> is the Google Kubernetes Engine (GKE) version to deploy. To obtain a list of available versions, run the following command and select the latest supported version:

gcloud container get-server-config --flatten="channels" --filter="channels.channel=STABLE" \
--format="yaml(channels.channel,channels.validVersions)"

<VM_TYPE> is the type of machine to use for nodes. The default is e2-medium. The list of predefined machine types is available using the following command:

gcloud compute machine-types list
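
Because the full list is long, you can narrow it down with a filter; the zone placeholder and the n2-standard machine family below are illustrative only:

gcloud compute machine-types list --zones "<ZONE_1>" --filter="name~n2-standard"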

<NODE_LABELS> are the worker node labels; see Understanding Labels and Pods

<SERVICE_ACCOUNT> is the Google Cloud service account to be used by the node VMs (check the Google Cloud worksheet)

<MASTER_CIDR_RANGE> is the IPv4 CIDR range to use for the master network. This should have a /28 netmask size. Add this value to the Google Cloud worksheet.

<VPC_NAME> is the VPC created for this deployment (check the Google Cloud worksheet)

<PRIVATE_SUBNET> is the subnet created for this deployment (check the Google Cloud worksheet)

<MANAGEMENT_SUBNET_CIDR> is the Management subnet CIDR created for this deployment (check the Google Cloud worksheet)

<ZONE_X> is the cluster compute zone (for example, us-central1-a). The value set here overrides the default compute zone property for this command invocation.

--no-enable-autoupgrade: disables the node auto-upgrade feature, because a specific upgrade procedure must be followed to prevent data loss when the worker nodes are replaced.

--enable-l4-ilb-subsetting: enables L4 internal load balancer (ILB) subsetting. If this flag is not set, the internal load balancer will not be created properly.
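
After the cluster has been created, you can verify that node auto-upgrade and auto-repair are disabled and that L4 ILB subsetting is enabled. This is a sketch; the field names follow the current GKE API and may differ slightly between gcloud releases:

gcloud container clusters describe "<CLUSTER_NAME>" --region "<REGION>" \
--format="yaml(nodePools[].management, networkConfig.enableL4ilbSubsetting)"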

Example command:

gcloud beta container clusters create "gcp-arcsight-test-gke" \
--project "security-arcsight-nonprod" \
--region "us-central1" \
--no-enable-basic-auth \
--cluster-version "1.26.5-gke.2700" \
--release-channel "None" \
--machine-type "n2-standard-8" \
--image-type "UBUNTU_CONTAINERD" \
--disk-type "pd-balanced" \
--disk-size "100" \
--node-labels fusion=yes,zk=yes,role=loadbalancer,intelligence-datanode=yes,node.type=worker,kafka=yes,th-platform=yes,Worker=label,th-processing=yes,intelligence-spark=yes \
--metadata disable-legacy-endpoints=true \
--service-account "gcp-arcsight-test-sa@security-arcsight-nonprod.iam.gserviceaccount.com" \
--max-pods-per-node "110" \
--num-nodes "1" \
--logging=SYSTEM,WORKLOAD \
--monitoring=SYSTEM \
--enable-private-nodes \
--enable-private-endpoint \
--master-ipv4-cidr "192.168.16.0/28" \
--enable-ip-alias \
--network "projects/security-arcsight-nonprod/global/networks/gcp-arcsight-test-vpc" \
--subnetwork "projects/security-arcsight-nonprod/regions/us-central1/subnetworks/private-subnet" \
--cluster-ipv4-cidr "192.168.0.0/21" \
--services-ipv4-cidr "192.168.8.0/21" \
--no-enable-intra-node-visibility \
--default-max-pods-per-node "110" \
--security-posture=standard \
--workload-vulnerability-scanning=disabled \
--enable-master-authorized-networks \
--master-authorized-networks 10.49.0.0/24 \
--addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
--no-enable-autoupgrade \
--no-enable-autorepair \
--max-surge-upgrade 1 \
--max-unavailable-upgrade 0 \
--no-enable-managed-prometheus \
--enable-shielded-nodes \
--enable-l4-ilb-subsetting \
--node-locations "us-central1-a","us-central1-b","us-central1-c"

The previous command creates a cluster with the following characteristics:

A private regional cluster in us-central1, with one worker node in each of the zones us-central1-a, us-central1-b, and us-central1-c
n2-standard-8 nodes using the UBUNTU_CONTAINERD image with 100 GB pd-balanced boot disks
A private control plane endpoint with master CIDR 192.168.16.0/28, pod CIDR 192.168.0.0/21, and services CIDR 192.168.8.0/21
Control plane access restricted to the authorized network 10.49.0.0/24
Node auto-upgrade and auto-repair disabled; shielded nodes and L4 ILB subsetting enabled

Remember to note down all relevant configuration values in your Google Cloud worksheet.
If the bastion gets corrupted or goes down, you can still connect to the GKE nodes by using the SSH keys from the cloud shell. Use this information if you need to access the worker nodes via SSH.
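
Once the cluster is available, a quick way to confirm that the worker nodes registered correctly is to fetch the cluster credentials and list the nodes from the bastion or the cloud shell. This sketch assumes kubectl is installed there:

gcloud container clusters get-credentials "<CLUSTER_NAME>" --region "<REGION>" --project "<PROJECT_ID>"
kubectl get nodes -o wide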

GKE firewall rule

The newly created GKE cluster requires a firewall rule (see Establishing Firewall Rules) to allow communication on all ports between the private network and the GKE cluster's internal CIDRs.

To obtain the GKE cluster's internal CIDRs, run the following command:

gcloud container clusters describe <CLUSTER_NAME> --region <REGION> | grep -e clusterIpv4CidrBlock -e servicesIpv4CidrBlock -e masterIpv4CidrBlock

Where:

<CLUSTER_NAME> is the value decided upon during the deployment planning meeting (check the Google Cloud worksheet)

<REGION> is the cluster compute region (check the Google Cloud worksheet)

Example command and output:

gcloud container clusters describe th-infra-gke --zone us-central1-a | grep -e clusterIpv4CidrBlock -e servicesIpv4CidrBlock -e masterIpv4CidrBlock
clusterIpv4CidrBlock: 172.16.0.0/20
servicesIpv4CidrBlock: 172.16.16.0/22
masterIpv4CidrBlock: 172.16.20.0/28
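
With these three CIDRs, the rule can be created along the following lines. This is a minimal sketch: the rule name is illustrative, the CIDR values are taken from the example output above, and your deployment may need additional rules (for example, for the private subnet), so follow Establishing Firewall Rules for the authoritative procedure:

gcloud compute firewall-rules create allow-gke-internal \
--network "<VPC_NAME>" \
--direction INGRESS \
--action ALLOW \
--rules all \
--source-ranges "172.16.0.0/20,172.16.16.0/22,172.16.20.0/28"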