Deploying the Kubernetes Metrics Server
Before you can deploy the WebInspect script engine (WISE) Docker container, you must deploy the Kubernetes Metrics Server to support horizontal auto scaling of the Kubernetes WISE pods. The Metrics Server measures resource usage, such as CPU and RAM, for nodes and pods. The horizontal auto scaler uses this information to increase the number of WISE pods under load and to decrease the number to the wise.replicas.min setting when there is no load. For more information, see Understanding the Parameters for WISE Deployment.
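As a rough sketch of the mechanism only: outside of the WISE Helm chart (which creates its own auto scaling resources from settings such as wise.replicas.min), a comparable autoscaler could be attached to a deployment by hand. The deployment name below matches the pod names shown later in this topic, but the maximum replica count and CPU threshold are assumptions:
kubectl autoscale deployment wise-cluster-deployment --min=1 --max=5 --cpu-percent=70
The autoscaler compares the CPU usage reported by the Metrics Server against the target percentage and adds or removes pods within the minimum and maximum bounds.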
Before You Begin
Ensure that you have downloaded and configured the prerequisite software. For more information, see Downloading kubectl and Helm.
Deploying the Metrics Server
For Azure Kubernetes, the Metrics Server is installed by default. For a local Kubernetes cluster installation, however, you may need to install this component manually.
To deploy the Metrics Server to Kubernetes:
1. On the machine where the kubectl client and Helm are installed, enter the following in PowerShell:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
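To check that the deployment has rolled out before continuing, you can query it directly. Assuming the default manifest, which installs a deployment named metrics-server into the kube-system namespace, the following command waits until it reports a successful rollout:
kubectl -n kube-system rollout status deployment/metrics-server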
Confirming the Metrics Server Installation
To confirm that the Metrics Server exists and is working:
1. Enter the following in PowerShell:
kubectl top nodes
You should see a response similar to the following:
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
rbp-main    141m         3%     1550Mi          19%
rbp-node1   45m          0%     1476Mi          9%
rbp-node2   47m          0%     1519Mi          9%
2. Enter the following:
kubectl top po
You should see a response similar to the following:
NAME                                       CPU(cores)   MEMORY(bytes)
wise-cluster-deployment-7747bb68b5-7q8m7   2m           96Mi
wise-cluster-deployment-7747bb68b5-wl79m   2m           99Mi
Note: You can use the kubectl top po command to return the CPU and memory metrics for the WISE pods after WISE has been installed as described in Deploying WISE in Kubernetes.
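If the WISE Helm chart creates a HorizontalPodAutoscaler for the WISE pods, as the auto scaling behavior described at the start of this topic suggests, you can also view the autoscaler itself. The following standard command lists the current and desired replica counts alongside the measured CPU utilization; whether an HPA object exists and what it is named depends on the WISE deployment:
kubectl get hpa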