Migrating the NFS Server to a New Location
The process given here explains how to migrate your NFS server and its exported paths to another location (including changing paths within the same NFS server). During the migration, pods in the core namespace that consume the exported paths will incur downtime as they are scaled to zero or temporarily removed. The OMT Management Portal (and all of its features) will not be available during this downtime.
Data is copied rather than moved, so the original location remains available as a backup until the procedure is complete and the cluster is successfully back in operation, with pods restarted using the new paths on the new NFS server.
This procedure is executed on your primary master node, which must have access to the kubectl command and the contents of /opt/arcsight/kubernetes.
The procedure uses the volume_admin.sh script located in /opt/arcsight/kubernetes/scripts.
Usage: ./volume_admin.sh <Operation> <Persistent Volume> <Options>
Where operations include:
- reconfigure: Reconfigure a persistent volume
- search: Find persistent volume consumers
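For example, to list the consumers of a persistent volume and then point it at a new export (the server name and export path below are illustrative; the -t, -s, and -p options are the ones used throughout this procedure):
./volume_admin.sh search itom-vol
./volume_admin.sh reconfigure itom-vol -t nfs -s nfs-new.example.com -p /exports/itom/itom_vol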
Preparation
- Verify that all pods are running correctly with the following command:
kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
- Verify status of OMT installation with the following command:
/opt/arcsight/kubernetes/bin/kube-status.sh
- Prepare the new NFS volumes with the same permission set as the existing volumes.
- If you are using a software-controlled NFS, make sure the export policy is configured in the correct order. For example, for NetApp NFS: the RO/RW Access rules are None, the Superuser Security types are None, and the User ID to which anonymous users are mapped equals 1999 (or whatever value you used during the initial install).
- If you are using NFSv4 or a later version, make sure ID mapping (configured in /etc/idmapd.conf) on both the NFS server and all NFS clients (that is, your cluster nodes) uses the same domain.
- Verify that the UID/GID mapping is correct by manually mounting the new NFS mount points and touching a file. The resulting ownership and permissions should be the same as when touching a file on the old NFS mount points (see the sketch after this list).
- Note that for any changes on the NFS server to take effect, any mount points that are still mounted (for example, from testing) should be unmounted (closed) first.
- Get an overview of the persistent volumes for your installation with the following command:
kubectl get pv
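A minimal sketch of the manual mount and permission check described above, using example mount points under /mnt (the old-server placeholders are illustrative; substitute your actual servers and export paths):
mkdir -p /mnt/old_nfs /mnt/new_nfs
mount -t nfs <old_nfs_FQDN_or_IP>:/<old_nfs_path> /mnt/old_nfs
mount -t nfs <new_nfs_FQDN_or_IP>:/<new_nfs_path> /mnt/new_nfs
touch /mnt/old_nfs/uidgid_test /mnt/new_nfs/uidgid_test
ls -ln /mnt/old_nfs/uidgid_test /mnt/new_nfs/uidgid_test   # UID, GID, and mode should match on both mounts
rm /mnt/old_nfs/uidgid_test /mnt/new_nfs/uidgid_test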
Migration Procedures
The recommended order in which migration should be executed on your persistent volumes is as follows:
- itom-logging
- arcsight-installer-xxxxx-db-backup-vol
- itom-vol
- db-single
- arcsight-installer-xxxxx-arcsight-volume
In any of the following commands, <old_nfs_mount> and <new_nfs_mount> refer to manually mounted NFS paths used for copying or maintenance procedures, and <new_nfs_path> refers to the real path on the NFS server of the mount point for the PV change command.
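Each of the following procedures asks you to note the current replica counts before scaling deployments down. A minimal way to record them, sketched here for the core and arcsight-installer namespaces (the custom-columns output is only an illustration; any method of recording the counts works):
kubectl get deployments -n core -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas
kubectl get deployments -n arcsight-installer-xxxxx -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas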
Migrate PV itom-logging
- Determine the services using the itom-logging PV by running the following command. (Note the number of replicas running for later scaleback, after the NFS migration):
/opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-logging
- Delete the itom-fluentd daemonset by running this command:
kubectl delete -f /<old_nfs_mount>/itom/itom_vol/suite-install/yamlContent/itom-fluentd.yaml
- Scale down other services by running these commands:
kubectl scale --replicas=0 -n core deployment/idm
kubectl scale --replicas=0 -n core deployment/itom-logrotate-deployment
- Verify that all pods of interest are deleted by running this command:
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
- Verify that consumers have been removed from the PV users list:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-logging
- Copy the NFS data to the new mount point:
cp -rfp /mnt/<old_nfs_mount>/itom/logging /mnt/<new_nfs_mount>/itom/logging
- Check the content of mount for any permissions discrepancies. The output of these commands must be identical:
ls -l /mnt/<old_nfs_mount>/itom/logging
ls -l /mnt/<new_nfs_mount>/itom/logging
- Authorize the PV change by running this command:
/opt/arcsight/kubernetes/scripts/volume_admin.sh reconfigure itom-logging -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/itom/logging
- Verify the new NFS path in the configuration by running the following command:
kubectl get pv itom-logging -o yaml
- In the output of the previous command, locate the nfs: section. It should list the new server and volume.
- Repeat all the commands you used to scale down or destroy the pods, to scale all replicas back up or start the related daemonsets.
- Recreate the daemonset from the YAML and scale the deployments back up with these commands. (Note that the YAML is still at the old path until the itom_vol PV is migrated.)
kubectl create -f /<old_nfs_mount>/itom/itom_vol/suite-install/yamlContent/itom-fluentd.yaml
kubectl scale --replicas=<value> -n core deployment/idm
kubectl scale --replicas=<value> -n core deployment/itom-logrotate-deployment
- Verify that consumers have been restored with this command:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-logging
- Verify pods are all running:
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
- If all pods are running, verify OMT status:
/opt/arcsight/kubernetes/bin/kube-status.sh
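Beyond spot-checking with ls -l, you can also confirm that a copied tree matches the original before authorizing a PV change. A minimal sketch using a dry-run rsync comparison (shown here for the itom-logging paths; adjust the paths for the other volumes, and note that rsync must be installed on the node):
# -a preserves ownership and permissions, -n makes it a dry run; empty output means the trees match
rsync -a -n --itemize-changes --delete /mnt/<old_nfs_mount>/itom/logging/ /mnt/<new_nfs_mount>/itom/logging/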
Migrate PV arcsight-installer-xxxxx-db-backup-vol
- Determine the services using the arcsight-installer-xxxxx-db-backup-vol PV by running the following command. Note the number of replicas running for later scaleback, after the NFS migration:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search arcsight-installer-xxxxx-db-backup-vol
- Scale down the necessary deployments:
kubectl scale --replicas=0 deployment/itom-pg-backup -n arcsight-installer-xxxxx
- Verify that consumers have been removed:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search arcsight-installer-xxxxx-db-backup-vol
- Copy the NFS data to a new mount point:
cp -rfp /mnt/<old_nfs_mount>/itom/db_backup /mnt/<new_nfs_mount>/itom/db_backup
- Check the mount content for any permissions discrepancies. The output of these commands must be identical:
ls -l /mnt/<old_nfs_mount>/itom/db_backup
ls -l /mnt/<new_nfs_mount>/itom/db_backup
- Authorize the PV change:
/opt/arcsight/kubernetes/scripts/volume_admin.sh reconfigure arcsight-installer-xxxxx-db-backup-vol -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/itom/db_backup
- Repeat all the commands you used to scale down or destroy the pods to scale all replicas up or start up related daemonsets.
kubectl scale --replicas=<value> deployment/itom-pg-backup -n arcsight-installer-xxxxx
kubectl create -f <PATH>
- Verify consumers have been restored with this command:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search arcsight-installer-xxxxx-db-backup-vol
- Verify pods are all running:
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
- If all pods are running, verify OMT status:
/opt/arcsight/kubernetes/bin/kube-status.sh
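After reconfiguring one or more PVs, you can cross-check the NFS server and path recorded on each PV at a glance, rather than reading the full YAML each time. A minimal sketch using kubectl custom columns (the column names are illustrative):
kubectl get pv -o custom-columns=NAME:.metadata.name,SERVER:.spec.nfs.server,PATH:.spec.nfs.path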
Migrate PV itom-vol
- Determine the services using the itom-vol PV by running the following command. Note the number of replicas running for later scaleback, after the NFS migration:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-vol
- Delete the YAML-based daemonsets by running these commands:
kubectl delete -f /<old_nfs_mount>/itom/itom_vol/suite-install/yamlContent/kube-registry.yaml
kubectl delete -f /<old_nfs_mount>/itom/itom_vol/suite-install/yamlContent/itom-fluentd.yaml
- Scale down the deployments with these commands. Make sure you have noted the original number of replicas for each deployment.
kubectl scale --replicas=0 -n core deployment/cdf-apiserver
kubectl scale --replicas=0 -n core deployment/idm
kubectl scale --replicas=0 -n core deployment/itom-vault
kubectl scale --replicas=0 -n core deployment/mng-portal
kubectl scale --replicas=0 -n core deployment/kube-registry
kubectl scale --replicas=0 -n core deployment/suite-conf-pod-arcsight-installer
kubectl scale --replicas=0 -n core deployment/suite-db
kubectl scale --replicas=0 -n core deployment/suite-installer-frontend
kubectl delete pod -n core <job_name>
- Verify that all pods are deleted and not in a terminating state by running this command:
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
- Afterwards, make sure the PV consumers list is returned empty:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-vol
- Copy the NFS data to a new mount point:
cp -rfp /mnt/<old_nfs_mount>/itom/itom_vol /mnt/<new_nfs_mount>/itom/itom_vol
- Check the mount content for any permissions discrepancies. The output of these commands must be identical:
ls -l /mnt/<old_nfs_mount>/itom/itom_vol
ls -l /mnt/<new_nfs_mount>/itom/itom_vol
- Authorize the PV change:
/opt/arcsight/kubernetes/scripts/volume_admin.sh reconfigure itom-vol -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/itom/itom_vol
- Repeat all the commands you used to scale down or destroy the pods to scale all replicas up or start up related daemonsets.
kubectl scale --replicas=<value> -n core deployment/cdf-apiserver
kubectl scale --replicas=<value> -n core deployment/idm
kubectl scale --replicas=<value> -n core deployment/itom-vault
kubectl scale --replicas=<value> -n core deployment/mng-portal
kubectl scale --replicas=<value> -n core deployment/kube-registry
kubectl scale --replicas=<value> -n core deployment/suite-conf-pod-arcsight-installer
kubectl scale --replicas=<value> -n core deployment/suite-db
kubectl scale --replicas=<value> -n core deployment/suite-installer-frontend
kubectl create -f /<new_nfs_mount>/itom/itom_vol/suite-install/yamlContent/kube-registry.yaml
kubectl create -f /<new_nfs_mount>/itom/itom_vol/suite-install/yamlContent/itom-fluentd.yaml
- To restore any other services that were deleted, use this command:
kubectl create -f <PATH>
- Verify consumers have been restored with this command:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-vol
- Verify pods are all running:
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
- If all pods are running, verify OMT status:
/opt/arcsight/kubernetes/bin/kube-status.sh
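While waiting for scaled-down pods to terminate (in this or any of the other procedures), you can watch the affected namespace directly instead of re-running the filtered command by hand; a minimal sketch for the core namespace:
watch -n 5 'kubectl get pods -n core -o wide'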
Migrate PV db-single
- Determine the services using the db-single PV by running the following command. Note the number of replicas running for later scaleback, after the NFS migration:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search db-single
- Scale down the necessary deployments:
kubectl scale --replicas=0 -n core deployment/itom-postgresql-default
- Verify that pods are not stuck in a terminating state, and that afterwards no consumers are displayed:
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
/opt/arcsight/kubernetes/scripts/volume_admin.sh search db-single
- Copy the NFS data to a new mount point:
cp -rfp /mnt/<old_nfs_mount>/itom/db /mnt/<new_nfs_mount>/itom/db
- Check the mount content for any permissions discrepancies. The output of these commands must be identical:
ls -l /mnt/<old_nfs_mount>/itom/db
ls -l /mnt/<new_nfs_mount>/itom/db
- Authorize the PV change by running this command:
/opt/arcsight/kubernetes/scripts/volume_admin.sh reconfigure db-single -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/itom/db
- Repeat all the commands you used to scale down or destroy the pods to scale all replicas up.
- Verify consumers have been restored with this command:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search db-single
- Verify pods are all running:
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
- If all pods are running, verify OMT status:
/opt/arcsight/kubernetes/bin/kube-status.sh
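The next procedure, like the earlier arcsight-installer ones, uses xxxxx as a placeholder for the suffix of your arcsight-installer namespace and PV names. If you need to look up the actual values, a minimal sketch:
kubectl get namespaces | grep arcsight-installer
kubectl get pv | grep arcsight-installer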
Migrate PV arcsight-installer-xxxxx-arcsight-volume
- Determine the services using the arcsight-installer-xxxxx-arcsight-volume PV by running the following command. Note the number of replicas running for later scaleback, after the NFS migration:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search arcsight-installer-xxxxx-arcsight-volume
- Scale down the necessary deployments with the following commands, in the listed order. Your list may vary depending on your Transformation Hub configuration. Note that between each scale-down command, you run the get pods command shown to make sure the scale-down has finished successfully before proceeding to the next consumer.
kubectl scale --replicas=0 -n arcsight-installer-xxxxx deployment/th-kafka-manager
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
kubectl scale --replicas=0 -n arcsight-installer-xxxxx deployment/th-schemaregistry
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
kubectl scale --replicas=0 -n arcsight-installer-xxxxx deployment/th-web-service
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
kubectl scale --replicas=0 -n arcsight-installer-xxxxx sts/th-routing-processor-group1
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
kubectl scale --replicas=0 -n arcsight-installer-xxxxx deployment/autopass-lm
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
- Run these commands in the listed order:
kubectl scale --replicas=0 -n arcsight-installer-xxxxx sts/th-kafka
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
kubectl scale --replicas=0 -n arcsight-installer-xxxxx sts/th-zookeeper
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
- Verify that no consumers are displayed for the PV by running the following command:
/opt/arcsight/kubernetes/scripts/volume_admin.sh search arcsight-installer-xxxxx-arcsight-volume
- Copy the NFS data to a new mount point:
cp -rfp /mnt/<old_nfs_mount>/arcsight /mnt/<new_nfs_mount>/arcsight
- Check the mount content for any permissions discrepancies. The output of these commands must be identical:
ls -l /mnt/<old_nfs_mount>/arcsight
ls -l /mnt/<new_nfs_mount>/arcsight
- Authorize the PV change:
/opt/arcsight/kubernetes/scripts/volume_admin.sh reconfigure arcsight-installer-xxxxx-arcsight-volume -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/arcsight
- Verify that the new server and volume are listed under the nfs: section of the configuration:
kubectl get pv arcsight-installer-xxxxx-arcsight-volume -o yaml
- Run the scale-up commands in the order shown. After each scale-up, run the get pods command as shown to make sure nothing is in a crashing state.
kubectl scale --replicas=<value> -n arcsight-installer-xxxxx deployment/autopass-lm
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
kubectl scale --replicas=<value> -n arcsight-installer-xxxxx sts/th-zookeeper
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
kubectl scale --replicas=<value> -n arcsight-installer-xxxxx sts/th-kafka
/opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
- When all th-zookeeper and th-kafka nodes are in the running state, run these commands to scale up the rest of the PV consumers. Note that this list may vary depending on your configuration:
kubectl scale --replicas=<value> -n arcsight-installer-xxxxx deployment/th-kafka-manager
kubectl scale --replicas=<value> -n arcsight-installer-xxxxx deployment/th-schemaregistry
kubectl scale --replicas=<value> -n arcsight-installer-xxxxx deployment/th-web-service
kubectl scale --replicas=<value> -n arcsight-installer-xxxxx sts/th-routing-processor-group1
- Log in to Kafka Manager and verify the topic assignment between brokers, and that all brokers are up and running.
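As a quick command-line cross-check alongside Kafka Manager, you can confirm that the ZooKeeper and Kafka statefulsets report all replicas ready; a minimal sketch:
kubectl get sts -n arcsight-installer-xxxxx th-zookeeper th-kafka   # the READY column should show all replicas ready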