Google Kubernetes Engine Cluster
Upon successful completion of this procedure, you will have a properly configured GKE cluster on which the container images can be deployed to provide the desired ArcSight capabilities (such as Transformation Hub).
Note: Provisioning your GKE cluster for ArcSight can be a challenging task, given all the options and configurations that must be considered. The gcloud container clusters create command has many options; check the Google Cloud documentation for details. The dataplane-v2 option must remain DISABLED (the default setting). If it is enabled, the cluster's NodePort services will not work; see Limitations.
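If you need to confirm that Dataplane V2 is not enabled on a cluster that has already been created, one way (shown here as a hedged example; the placeholders follow the same convention as the rest of this procedure) is to read the datapath provider from the cluster description:
gcloud container clusters describe "<CLUSTER_NAME>" --region "<REGION>" \
    --format="value(networkConfig.datapathProvider)"
An empty value or LEGACY_DATAPATH indicates that Dataplane V2 is disabled; ADVANCED_DATAPATH indicates that it is enabled.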
Provision a GKE Cluster with Google Cloud commands
The compute zones where the cluster's worker nodes are created are specified with the node-locations parameter. The command below is provided as guidance, with the mandatory settings listed and/or given values. Replace the remaining variables (values indicated between angle brackets) with the values corresponding to your deployment before executing the command.
gcloud container clusters create "<CLUSTER_NAME>" \
    --project "<PROJECT_ID>" \
    --region "<REGION>" \
    --no-enable-basic-auth \
    --cluster-version "<GKE_VERSION>" \
    --release-channel "None" \
    --machine-type "<VM_TYPE>" \
    --image-type "UBUNTU_CONTAINERD" \
    --disk-type "pd-balanced" \
    --disk-size "100" \
    --node-labels Worker=label,role=loadbalancer,node.type=worker,<NODE_LABELS> \
    --metadata disable-legacy-endpoints=true \
    --service-account "<SERVICE_ACCOUNT>" \
    --num-nodes "1" \
    --logging=NONE \
    --monitoring=NONE \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr "<MASTER_CIDR_RANGE>" \
    --enable-master-global-access \
    --enable-ip-alias \
    --network "<VPC_NAME>" \
    --subnetwork "<PRIVATE_SUBNET>" \
    --no-enable-intra-node-visibility \
    --default-max-pods-per-node "110" \
    --enable-master-authorized-networks \
    --master-authorized-networks <MANAGEMENT_SUBNET_CIDR> \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing,NodeLocalDNS,GcePersistentDiskCsiDriver \
    --no-enable-autoupgrade \
    --no-enable-autorepair \
    --max-surge-upgrade 1 \
    --max-unavailable-upgrade 0 \
    --no-enable-managed-prometheus \
    --enable-shielded-nodes \
    --enable-l4-ilb-subsetting \
    --node-locations "<ZONE_1>","<ZONE_2>","<ZONE_3>"
Where:
<CLUSTER_NAME>
is the value decided upon during the deployment planning meeting (check the Google Cloud worksheet)
<PROJECT_ID>
is the Google Cloud project ID to use for this invocation (check the Google Cloud worksheet)
<REGION>
is the cluster compute region (check the Google Cloud worksheet)
<GKE_VERSION>
is the Google Kubernetes Engine (GKE) current version. To obtain a list, run the following command and select the latest supported available version:
gcloud container get-server-config --flatten="channels" --filter="channels.channel=STABLE" \
    --format="yaml(channels.channel,channels.validVersions)"
<VM_TYPE>
is the type of machine to use for nodes. The default is e2-medium. The list of predefined machine types is available using the following command:
gcloud compute machine-types list
<NODE_LABELS>
are the worker node labels; see Understanding Labels and Pods
<SERVICE_ACCOUNT>
is the Google Cloud service account to be used by the node VMs (check the Google Cloud worksheet)
<MASTER_CIDR_RANGE>
is the IPv4 CIDR range to use for the master network. This should have a /28 netmask size and must not collide with another subnet (a sample check is shown after this list). Add this value to the Google Cloud worksheet.
<VPC_NAME>
is the VPC created for this deployment (check the Google Cloud worksheet)
<PRIVATE_SUBNET>
is the subnet created for this deployment (check the Google Cloud worksheet)
<MANAGEMENT_SUBNET_CIDR>
is the Management subnet CIDR created for this deployment (check the Google Cloud worksheet)
<ZONE_X>
is the cluster compute zone (for example, us-central1-a). The value set here overrides the default compute zone property value for this command invocation.
--no-enable-autoupgrade
: disables the autoupgrade feature, since an upgrade procedure must be followed to prevent data loss when the worker nodes are replaced.
--enable-l4-ilb-subsetting
: enables the creation of the internal load balancer. If not set, the load balancer will not be created properly.
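To verify that the chosen <MASTER_CIDR_RANGE> does not collide with another subnet, one option (a sketch only, using the same placeholder convention as above) is to list the CIDR ranges already allocated in the deployment VPC and compare them manually:
gcloud compute networks subnets list --network="<VPC_NAME>" \
    --format="table(name,region,ipCidrRange)"
The cluster, services, and master ranges must not overlap with any range returned by this command.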
Example command:
gcloud beta container \
    --project "security-arcsight-nonprod" clusters create "gcp-arcsight-test-gke" \
    --region "us-central1" \
    --no-enable-basic-auth \
    --cluster-version "1.26.5-gke.2700" \
    --release-channel "None" \
    --machine-type "n2-standard-8" \
    --image-type "UBUNTU_CONTAINERD" \
    --disk-type "pd-balanced" \
    --disk-size "100" \
    --node-labels fusion=yes,zk=yes,role=loadbalancer,intelligence-datanode=yes,node.type=worker,kafka=yes,th-platform=yes,Worker=label,th-processing=yes,intelligence-spark=yes \
    --metadata disable-legacy-endpoints=true \
    --service-account "gcp-arcsight-test-sa@security-arcsight-nonprod.iam.gserviceaccount.com" \
    --max-pods-per-node "110" \
    --num-nodes "1" \
    --logging=SYSTEM,WORKLOAD \
    --monitoring=SYSTEM \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr "192.168.16.0/28" \
    --enable-ip-alias \
    --network "projects/security-arcsight-nonprod/global/networks/gcp-arcsight-test-vpc" \
    --subnetwork "projects/security-arcsight-nonprod/regions/us-central1/subnetworks/private-subnet" \
    --cluster-ipv4-cidr "192.168.0.0/21" \
    --services-ipv4-cidr "192.168.8.0/21" \
    --no-enable-intra-node-visibility \
    --default-max-pods-per-node "110" \
    --security-posture=standard \
    --workload-vulnerability-scanning=disabled \
    --enable-master-authorized-networks \
    --master-authorized-networks 10.49.0.0/24 \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
    --no-enable-autoupgrade \
    --no-enable-autorepair \
    --max-surge-upgrade 1 \
    --max-unavailable-upgrade 0 \
    --no-enable-managed-prometheus \
    --enable-shielded-nodes \
    --enable-l4-ilb-subsetting \
    --node-locations "us-central1-a","us-central1-b","us-central1-c"
The previous command creates a cluster with the following characteristics:
- A Standard cluster mode
- A 1.26.5-gke.2700 cluster version
- The cluster's control plane and nodes are located in the us-central1 region
- This cluster doesn't have a public IP address, so it can only be accessed from CIDR ranges configured in master-authorized-networks (in this case, the VMs in the Management CIDR Range)
- This cluster uses the n2-standard-8 machine type, designed for a Medium Workload in a setup that doesn't include the Intelligence capability. Refer to the Technical Requirements for ArcSight Platform 23.3 for node VM sizing information.
- The master-ipv4-cidr is 192.168.16.0/28. This value must be verified to make sure it doesn't collide with another subnet
- At least one node is configured in each zone ("us-central1-a", "us-central1-b", "us-central1-c") for the default node pool
- The image-type of the default node pool is "UBUNTU_CONTAINERD" (this is a mandatory value for the deployment)
- Uses the --node-labels option to add the labels for all the ArcSight Suite capabilities, except intelligence-namenode, as this label can only be assigned to a single node (see the example after this list)
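After the cluster is up and you have retrieved its credentials, the intelligence-namenode label can be assigned to exactly one worker node with kubectl. The following is guidance only; the node name is a placeholder, and the label value yes is assumed here because it mirrors the other Intelligence labels used in the example command. Because the cluster endpoint is private, run these commands from a host inside the authorized management subnet:
gcloud container clusters get-credentials "<CLUSTER_NAME>" --region "<REGION>" --project "<PROJECT_ID>"
kubectl get nodes
kubectl label node <NODE_NAME> intelligence-namenode=yes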
Access to the GKE nodes is possible by using the SSH keys from the Cloud Shell. If you need to access the worker nodes via SSH, use this information.
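For reference, one possible way to reach a worker node from a VM inside the management subnet is to locate the node's Compute Engine instance and connect over its internal IP. The instance name filter and placeholders below are illustrative only:
gcloud compute instances list --filter="name~gke-<CLUSTER_NAME>" \
    --format="table(name,zone,networkInterfaces[0].networkIP)"
gcloud compute ssh <NODE_INSTANCE_NAME> --zone "<ZONE_1>" --internal-ip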
GKE firewall rule
The recently created GKE cluster requires a firewall rule (see Establishing Firewall Rules) to allow communication on all ports between the private network and the GKE cluster's internal CIDRs.
To obtain the GKE cluster's internal CIDRs, run the following command:
gcloud container clusters describe <CLUSTER_NAME> --region <REGION> | grep -e clusterIpv4CidrBlock -e servicesIpv4CidrBlock -e masterIpv4CidrBlock
Where:
<CLUSTER_NAME>
is the value decided upon during the deployment planning meeting (check the Google Cloud worksheet)
<REGION>
is the cluster compute region (check the Google Cloud worksheet)
Example command and output:
gcloud container clusters describe th-infra-gke --zone us-central1-a | grep -e clusterIpv4CidrBlock -e servicesIpv4CidrBlock -e masterIpv4CidrBlock
clusterIpv4CidrBlock: 172.16.0.0/20
servicesIpv4CidrBlock: 172.16.16.0/22
masterIpv4CidrBlock: 172.16.20.0/28
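As an illustration only (the rule name and the <PRIVATE_SUBNET_CIDR>, <CLUSTER_CIDR>, <SERVICES_CIDR>, and <MASTER_CIDR> placeholders are examples; the exact rule must follow Establishing Firewall Rules), a rule allowing all ports from the private network and the CIDRs returned above could look like this:
gcloud compute firewall-rules create allow-gke-internal \
    --project "<PROJECT_ID>" \
    --network "<VPC_NAME>" \
    --direction INGRESS \
    --allow tcp,udp,icmp \
    --source-ranges "<PRIVATE_SUBNET_CIDR>,<CLUSTER_CIDR>,<SERVICES_CIDR>,<MASTER_CIDR>"
Replace the last three ranges with the clusterIpv4CidrBlock, servicesIpv4CidrBlock, and masterIpv4CidrBlock values obtained from the describe command.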