Migrating the NFS Server to a New Location

The process described here explains how to migrate your NFS server and exported paths to a new location (including changing paths on the same NFS server). During the move, the pods in the core namespace that use the exported paths will incur downtime as they are scaled to zero or temporarily removed. The OMT Management Portal (and all of its features) will not be available during this downtime.

Data is copied rather than moved, so the original location remains available as a backup until the procedure is complete and the cluster is successfully back in operation, with the pods restarted against the new paths and the new NFS server.

This procedure is executed on your primary master node, which must have access to the kubectl command and the contents of /opt/arcsight/kubernetes.

The procedures use the volume_admin.sh script, located in /opt/arcsight/kubernetes/scripts.

Usage: ./volume_admin.sh <Operation> <Persistent Volume> <Options>

where the available <Options> depend on the operation being performed.
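For example, the two operations used throughout this document are search, which lists the current consumers of a persistent volume, and reconfigure, which points a persistent volume at a new NFS server and path:

    ./volume_admin.sh search itom-logging
    ./volume_admin.sh reconfigure itom-logging -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/itom/logging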

Preparation

  1. Verify that all pods are running correctly with the following command:
    kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
  2. Verify the status of the OMT installation with the following command:
    /opt/arcsight/kubernetes/bin/kube-status.sh
  3. Prepare the new NFS volumes with the same permission set as the existing volumes (a minimal sketch for checking this follows this list).

  4. Get an overview of the persistent volumes for your installation with the following command:
    kubectl get pv
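For step 3, a minimal sketch for comparing and aligning ownership and permissions between the existing and new export directories; the mount points are placeholders, and the owner, group, and mode values must be taken from your existing volumes rather than from this example:

    # Compare ownership and permissions of the existing and new export roots
    ls -ld /mnt/<old_nfs_mount>/itom /mnt/<new_nfs_mount>/itom
    # If they differ, align the new export with the existing one (values are placeholders)
    chown -R <owner>:<group> /mnt/<new_nfs_mount>/itom
    chmod <mode> /mnt/<new_nfs_mount>/itom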

Migration Procedures

The recommended order in which migration should be executed on your persistent volumes is as follows:

  1. itom-logging
  2. arcsight-installer-xxxxx-db-backup-vol
  3. itom-vol
  4. db-single
  5. arcsight-installer-xxxxx-arcsight-volume

In any of the following commands, <old_nfs_mount> and <new_nfs_mount> refer to manually mounted copies of the old and new NFS exports, used for the copy and maintenance steps, and <new_nfs_path> refers to the real path of the mount point on the new NFS server, used in the PV change command.
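If the old and new exports are not already mounted on the master node, they can be mounted temporarily for the copy and verification steps. A minimal sketch, with server names and export paths as placeholders for your environment:

    mkdir -p /mnt/<old_nfs_mount> /mnt/<new_nfs_mount>
    mount -t nfs <old_nfs_FQDN_or_IP>:/<old_nfs_path> /mnt/<old_nfs_mount>
    mount -t nfs <new_nfs_FQDN_or_IP>:/<new_nfs_path> /mnt/<new_nfs_mount>
    # Unmount both mount points once the migration is complete:
    # umount /mnt/<old_nfs_mount> /mnt/<new_nfs_mount>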

If any PV change fails, roll back any changes to the old NFS location until the issue is resolved. Do not leave your cluster in a change-pending state.
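Each procedure below asks you to note the current replica counts before scaling down. One way to record them all up front, assuming the core and arcsight-installer-xxxxx namespaces used in the commands below:

    # Record current replica counts so they can be restored after migration
    kubectl get deployments -n core -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas
    kubectl get deployments -n arcsight-installer-xxxxx -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas
    kubectl get statefulsets -n arcsight-installer-xxxxx -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas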

Migrate PV itom-logging

  1. Determine the services using the itom-logging PV by running the following command. (Note the number of replicas running for later scaleback, after the NFS migration):
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-logging
Note: For fluentd, the YAML definition includes an NFS path. You will need the old NFS export mounted on a temporary mount point so that you can delete (and later re-create) the daemonset with the following command:
kubectl delete -f /<old_nfs_mount>/itom/itom_vol/suite-install/yamlContent/itom-fluentd.yaml
  2. Scale down other services by running these commands:
    kubectl scale --replicas=0 -n core deployment/idm
    kubectl scale --replicas=0 -n core deployment/itom-logrotate-deployment
  3. Verify all pods of interest are deleted by running this command:
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
  4. Verify that consumers have been removed from the PV users list:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-logging
  5. Copy the NFS data to the new mount point:
    cp -rfp /mnt/<old_nfs_mount>/itom/logging /mnt/<new_nfs_mount>/itom/logging
  6. Check the content of the mounts for any permissions discrepancies. The output of these commands must be identical (a recursive variant of this check is sketched after this procedure):
    ls -l /mnt/<old_nfs_mount>/itom/logging
    ls -l /mnt/<new_nfs_mount>/itom/logging
  7. Authorize the PV change by running this command:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh reconfigure itom-logging -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/itom/logging
  8. Verify the new NFS path in the configuration by running the following command:
    kubectl get pv itom-logging -o yaml
  9. In the output of the previous command, locate the nfs: section. It should list the new server and volume.
  10. Repeat all the commands you used to scale down or destroy the pods to scale all replicas up or start up related daemonsets.
  11. Recreate the daemonset from the YAML with these commands. (Note that this is still the old path until the itom-vol PV is migrated.)
    kubectl create -f /<old_nfs_mount>/itom/itom_vol/suite-install/yamlContent/itom-fluentd.yaml
    kubectl scale --replicas=<value> -n core deployment/idm
    kubectl scale --replicas=<value> -n core deployment/itom-logrotate-deployment
  12. Verify that consumers have been restored with this command:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-logging
  13. Verify pods are all running:
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
  14. If all pods are running, verify OMT status:
    /opt/arcsight/kubernetes/bin/kube-status.sh
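The ls -l comparison in step 6 only inspects the top level of the copied directory. A recursive variant (a sketch; because cp -rfp preserves ownership, permissions, and timestamps, the two listings should match, and any remaining differences, such as block totals reported by ls, should be reviewed before proceeding):

    diff <(cd /mnt/<old_nfs_mount>/itom/logging && ls -lR) \
         <(cd /mnt/<new_nfs_mount>/itom/logging && ls -lR)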

Migrate PV arcsight-installer-xxxxx-db-backup-vol

Some additional checks are omitted from this procedure but should still be run, as in the procedure above, to make sure no discrepancies arise; one such check is sketched after this procedure.
  1. Determine the services using the arcsight-installer-xxxxx-db-backup-vol PV by running the following command. Note the number of replicas running for later scaleback, after the NFS migration:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search arcsight-installer-xxxxx-db-backup-vol
  2. Scale down the necessary deployments:
    kubectl scale --replicas=0 deployment/itom-pg-backup -n arcsight-installer-xxxxx
  3. Verify that consumers have been removed:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search arcsight-installer-xxxxx-db-backup-vol
  4. Copy the NFS data to a new mount point:
    cp -rfp /mnt/<old_nfs_mount>/itom/db_backup /mnt/<new_nfs_mount>/itom/db_backup
  5. Check the mount content for any permissions discrepancies. The output of these commands must be identical:
    ls -l /mnt/<old_nfs_mount>/itom/db_backup
    ls -l /mnt/<new_nfs_mount>/itom/db_backup
  6. Authorize the PV change:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh reconfigure arcsight-installer-xxxxx-db-backup-vol -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/itom/db_backup
  7. Repeat all the commands you used to scale down or destroy the pods to scale all replicas up or start up related daemonsets.
    kubectl scale --replicas=<value> deployment/itom-pg-backup -n arcsight-installer-xxxxx
To restore any services that were removed using a YAML file, use this command:
kubectl create -f <PATH>
  8. Verify consumers have been restored with this command:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search arcsight-installer-xxxxx-db-backup-vol
  9. Verify pods are all running:
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
  10. If all pods are running, verify OMT status:
    /opt/arcsight/kubernetes/bin/kube-status.sh
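One of the checks omitted above is confirming that the persistent volume now points at the new server and path. A quick way to read just those fields, assuming standard kubectl jsonpath support:

    # Print only the NFS server and path of the reconfigured PV
    kubectl get pv arcsight-installer-xxxxx-db-backup-vol \
      -o jsonpath='{.spec.nfs.server}{"  "}{.spec.nfs.path}{"\n"}'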

Migrate PV itom-vol

  1. Determine the services using the itom-vol PV by running the following command. Note the number of replicas running for later scaleback, after the NFS migration:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-vol
  2. Delete the YAML-based daemonsets by running these commands:
    kubectl delete -f /<old_nfs_mount>/itom/itom_vol/suite-install/yamlContent/kube-registry.yaml
    kubectl delete -f /<old_nfs_mount>/itom/itom_vol/suite-install/yamlContent/itom-fluentd.yaml
  3. Scale down deployments with these commands. Make sure you have noted the original number of replicas for each deployment.
    kubectl scale --replicas=0 -n core deployment/cdf-apiserver
    kubectl scale --replicas=0 -n core deployment/idm
    kubectl scale --replicas=0 -n core deployment/itom-vault
    kubectl scale --replicas=0 -n core deployment/mng-portal
    kubectl scale --replicas=0 -n core deployment/kube-registry
    kubectl scale --replicas=0 -n core deployment/suite-conf-pod-arcsight-installer
    kubectl scale --replicas=0 -n core deployment/suite-db
    kubectl scale --replicas=0 -n core deployment/suite-installer-frontend
Note: Any consumer jobs displayed in the listing are temporary, one-time actions and can be deleted with kubectl delete pod -n core <job_name>.
  4. Verify that all pods are deleted and that none are stuck in the Terminating state by running this command (a small polling loop for this wait is sketched after this procedure):
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
  5. Then make sure the PV consumers list returned is empty:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-vol
  6. Copy the NFS data to a new mount point:
    cp -rfp /mnt/<old_nfs_mount>/itom/itom_vol /mnt/<new_nfs_mount>/itom/itom_vol
  7. Check the mount content for any permissions discrepancies. The output of these commands must be identical:
    ls -l /mnt/<old_nfs_mount>/itom/itom_vol
    ls -l /mnt/<new_nfs_mount>/itom/itom_vol
  8. Authorize the PV change:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh reconfigure itom-vol -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/itom/itom_vol
  9. Repeat all the commands you used to scale down or destroy the pods to scale all replicas up or start up related daemonsets.
    kubectl scale --replicas=<value> -n core deployment/cdf-apiserver
    kubectl scale --replicas=<value> -n core deployment/idm
    kubectl scale --replicas=<value> -n core deployment/itom-vault
    kubectl scale --replicas=<value> -n core deployment/mng-portal
    kubectl scale --replicas=<value> -n core deployment/kube-registry
    kubectl scale --replicas=<value> -n core deployment/suite-conf-pod-arcsight-installer
    kubectl scale --replicas=<value> -n core deployment/suite-db
    kubectl scale --replicas=<value> -n core deployment/suite-installer-frontend
    kubectl create -f /<new_nfs_mount>/itom/itom_vol/suite-install/yamlContent/kube-registry.yaml
    kubectl create -f /<new_nfs_mount>/itom/itom_vol/suite-install/yamlContent/itom-fluentd.yaml
  10. To restore any other services that were removed using a YAML file, use this command:
    kubectl create -f <PATH>
  11. Verify consumers have been restored with this command:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search itom-vol
  12. Verify pods are all running:
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
  13. If all pods are running, verify OMT status:
    /opt/arcsight/kubernetes/bin/kube-status.sh
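Waiting for the scaled-down pods to disappear (step 4) can take a while. A small polling loop you could use while you wait, checking only the core namespace:

    # Loop until no pod in the core namespace is still Terminating
    while kubectl get pods -n core --no-headers | grep -q Terminating; do
      echo "Waiting for pods to finish terminating..."
      sleep 10
    done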

Migrate PV db-single

  1. Determine the services using the db-single PV by running the following command. Note the number of replicas running for later scaleback, after the NFS migration:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search db-single
  2. Scale down the necessary deployments:
    kubectl scale --replicas=0 -n core deployment/itom-postgresql-default
  3. Verify that no pods are stuck in the Terminating state, and that afterward no consumers are displayed:
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search db-single
  4. Copy the NFS data to a new mount point:
    cp -rfp /mnt/<old_nfs_mount>/itom/db /mnt/<new_nfs_mount>/itom/db
  5. Check the mount content for any permissions discrepancies. The output of these commands must be identical:
    ls -l /mnt/<old_nfs_mount>/itom/db
    ls -l /mnt/<new_nfs_mount>/itom/db
  6. Authorize the PV change by running this command:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh reconfigure db-single -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/itom/db
  7. Repeat all the commands you used to scale down or destroy the pods to scale all replicas up.
  8. Verify consumers have been restored with this command:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search db-single
  9. Verify pods are all running:
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
  10. If all pods are running, verify OMT status:
    /opt/arcsight/kubernetes/bin/kube-status.sh

Migrate PV arcsight-installer-xxxxx-arcsight-volume

  1. Determine the services using the arcsight-installer-xxxxx-arcsight-volume PV by running the following command. Note the number of replicas running for later scaleback, after the NFS migration:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search arcsight-installer-xxxxx-arcsight-volume
  2. Scale down the necessary deployments with the following commands, in the listed order. Your list may vary depending on your Transformation Hub configuration. Between each scaledown command, run the get pods command shown to make sure the scaledown has finished successfully before proceeding to the next consumer.
    kubectl scale --replicas=0 -n arcsight-installer-xxxxx deployment/th-kafka-manager
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
    kubectl scale --replicas=0 -n arcsight-installer-xxxxx deployment/th-schemaregistry
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
    kubectl scale --replicas=0 -n arcsight-installer-xxxxx deployment/th-web-service
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
    kubectl scale --replicas=0 -n arcsight-installer-xxxxx sts/th-routing-processor-group1
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
    kubectl scale --replicas=0 -n arcsight-installer-xxxxx deployment/autopass-lm
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
Note: Scaling down can take some time. Please be patient, as this is normal behavior.
  3. Run these commands in the listed order:
    kubectl scale --replicas=0 -n arcsight-installer-xxxxx sts/th-kafka
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
    kubectl scale --replicas=0 -n arcsight-installer-xxxxx sts/th-zookeeper
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
  4. Verify that no consumers are displayed for the PV by running the following command:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh search arcsight-installer-xxxxx-arcsight-volume
  5. Copy the NFS data to a new mount point:
    cp -rfp /mnt/<old_nfs_mount>/arcsight /mnt/<new_nfs_mount>/arcsight
  6. Check the mount content for any permissions discrepancies. The output of these commands must be identical:
    ls -l /mnt/<old_nfs_mount>/arcsight
    ls -l /mnt/<new_nfs_mount>/arcsight
  7. Authorize the PV change:
    /opt/arcsight/kubernetes/scripts/volume_admin.sh reconfigure arcsight-installer-xxxxx-arcsight-volume -t nfs -s <new_nfs_FQDN_or_IP> -p /<new_nfs_path>/arcsight
  8. Verify that the new server and volume are listed under the nfs: section of the configuration:
    kubectl get pv arcsight-installer-xxxxx-arcsight-volume -o yaml
  9. Run the scale-up commands in the order shown. After each scaleup, run the get pods command shown to make sure nothing is in a crashing state.
    kubectl scale --replicas=<value> -n arcsight-installer-xxxxx deployment/autopass-lm
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
    kubectl scale --replicas=<value> -n arcsight-installer-xxxxx sts/th-zookeeper
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
    kubectl scale --replicas=<value> -n arcsight-installer-xxxxx sts/th-kafka
    /opt/arcsight/kubernetes/bin/kubectl get pods --all-namespaces -o wide | awk -F " *|/" '($3!=$4 || $5!="Running") && $5!="Completed" {print $0}'
  10. When all th-zookeeper and th-kafka nodes are in the running state, run these commands to scale up the rest of the PV consumers. Note that this list may vary depending on your configuration:
    kubectl scale --replicas=<value> -n arcsight-installer-xxxxx deployment/th-kafka-manager
    kubectl scale --replicas=<value> -n arcsight-installer-xxxxx deployment/th-schemaregistry
    kubectl scale --replicas=<value> -n arcsight-installer-xxxxx deployment/th-web-service
    kubectl scale --replicas=<value> -n arcsight-installer-xxxxx sts/th-routing-processor-group1
  11. Log in to Kafka Manager and verify the topic assignment between brokers and that all brokers are up and running.
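Before logging in to Kafka Manager, you can confirm from the command line that the Kafka and ZooKeeper statefulsets are fully back up, using the statefulset names shown above (replace xxxxx with your installer namespace suffix):

    kubectl get sts th-kafka th-zookeeper -n arcsight-installer-xxxxx
    # The ready and desired replica counts should match before you verify topic assignment in Kafka Manager.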