Removing the Cluster
You can remove the cluster to make it possible to perform a fresh installation of OMT and the ArcSight Suite. As part of the removal, you will remove or clean some cluster-specific resources. However, some of your existing resources are reusable.
Reusable Resources
Many of the resources created during your OMT and ArcSight Suite installation are reusable, and do not need to be removed during the cluster removal. You might find it useful to keep such resources on hand for use with other product suites.
- The launch template is not dependent on the OMT installation and is not bound to a VPC. Other users performing an installation in the same region can reuse an existing launch template.
- The bastion instance is bound to the VPC created for OMT and can be used only within this VPC. However, a bastion is a highly reusable resource for installing and managing other clusters or product suites.
- The Route 53 record set, with its certificate, is not dependent on the installation.
Cluster Removal
As part of removing the cluster, you will perform the following tasks:
- Removal of the Auto Scaling group
- Removal of the EKS control plane
- Cleaning or deleting the EFS/NFS
Each of these procedures is explained below.
Removing the Auto Scaling Group
The AWS Auto Scaling group holds the worker node instances. Accordingly, to delete the worker nodes, you must delete the Auto Scaling group.
To delete the Auto Scaling group:
- Run the following command:
aws autoscaling delete-auto-scaling-group --force-delete --auto-scaling-group-name <auto-scaling group name from AWS worksheet>
- The command produces no output; the instance termination starts in the background. Check whether the Auto Scaling group is still present by running the following command (a wait sketch is shown after this procedure):
aws autoscaling describe-auto-scaling-groups \
| jq -r '.AutoScalingGroups[] | select(.AutoScalingGroupName=="<auto-scaling group name>") | .AutoScalingGroupName'
- Once the Auto Scaling group and worker nodes are removed, you can check the pods by executing this command on the bastion:
kubectl get pods -A -o wide
All pods are shown in the Pending state, as they do not have a host to run on, but the Kubernetes control plane still has the cluster definition.
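If you want to wait for the deletion to finish before continuing, the following is a minimal sketch that simply polls the describe command shown above until the group disappears; the ASG_NAME variable is a placeholder for the value from your AWS worksheet:
ASG_NAME="<auto-scaling group name from AWS worksheet>"
# Poll until the Auto Scaling group no longer appears in the describe output
until [ -z "$(aws autoscaling describe-auto-scaling-groups \
  | jq -r --arg n "$ASG_NAME" '.AutoScalingGroups[] | select(.AutoScalingGroupName==$n) | .AutoScalingGroupName')" ]; do
  echo "Waiting for Auto Scaling group $ASG_NAME to be deleted..."
  sleep 30
done
# Optionally, list only the pods that are left in the Pending state
kubectl get pods -A --field-selector=status.phase=Pending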
If desired, you can create another Auto Scaling group with a different launch template; all the pods will then be deployed and started on the new worker nodes. Because the new worker nodes receive new instance IDs, remember to add the respective targets to the target groups again, as sketched below.
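The following is a minimal sketch of the relevant AWS CLI calls with placeholder values; the sizes, subnets, and target group ARN are assumptions to be replaced with the values from your AWS worksheet:
# Create a new Auto Scaling group from an existing launch template (placeholder values)
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name <new auto-scaling group name> \
  --launch-template LaunchTemplateName=<launch template name>,Version='$Latest' \
  --min-size 3 --max-size 3 --desired-capacity 3 \
  --vpc-zone-identifier "<subnet id 1>,<subnet id 2>,<subnet id 3>"
# Register the new worker node instances with each target group
aws elbv2 register-targets \
  --target-group-arn <target group ARN from AWS worksheet> \
  --targets Id=<new instance id> Id=<new instance id>
The target groups can also be attached directly at creation time with the --target-group-arns option of the create-auto-scaling-group command.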
Removing the EKS Control Plane
The Kubernetes control plane holds the definitions of services, daemons, deployments, pods, and other resources, including the fully qualified identifiers of Docker images in the registry. To clean the AWS infrastructure for a new installation, this control plane needs to be removed as well.
To remove the EKS control plane:
- Run the following command:
aws eks delete-cluster --name <cluster name from AWS worksheet>
- Verify the cluster has been deleted by running the following command:
aws eks list-clusters | jq -r '.clusters[] | select(.=="<cluster name from AWS worksheet>")'
An empty output indicates that the cluster has been deleted.
Example output of a cluster in the process of being deleted:
{ "cluster":{ "name":"srgdemo-cluster", "arn":"arn:aws:eks:eu-central-1:115370811111:cluster/srgdemo-cluster", "createdAt":"2020-08-10T12:13:31.748000+02:00", "version":"1.26", "endpoint":"https://90842F339FC27B9BE1DD0554E508B914.gr7.eu-central-1.eks.amazonaws.com", "roleArn":"arn:aws:iam::115370811111:role/ARST-EKS-Custom-Role", "resourcesVpcConfig":{ "subnetIds":[ "subnet-0fb2ebb5882c061f0", "subnet-0abd7cd806e04c7be", "subnet-0f0cac4ec6837abed" ], "securityGroupIds":[ "sg-0ce3c569f73737b77" ], "clusterSecurityGroupId":"sg-0263ae0d4c33decc4", "vpcId":"vpc-0143197ca9bd9c117", "endpointPublicAccess":false, "endpointPrivateAccess":true, "publicAccessCidrs":[ ] }, "logging":{ "clusterLogging":[ { "types":[ "api", "audit", "authenticator", "controllerManager", "scheduler" ], "enabled":false } ] }, "identity":{ "oidc":{ "issuer":"https://oidc.eks.eu-central-1.amazonaws.com/id/90842F339FC27B9BE1DD0554E508B914" } }, "status":"DELETING", "certificateAuthority":{ "data":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EZ3hNREV3TWpFd01sb1hEVE13TURnd09ERXdNakV3TWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSnZXCnUyMDVNZFJkWUV1VkliQU1yeFJzKzFSMmtyRlhWUmpZd0ZXQUdIRUY2WmJ6V1F2L2Y0d052MmlxaFM0Q0lJa2wKVTVvTmtaTzFBaU9USk9Ja1l3UFAwdjRGQkNyVFlvU3BldW1xelhqVFBHU2JFUVJ4OXFVM0ttTkorUXlSZEhJeQpaTHV6b2tXbXJXSG1TVlRLNUxkZUppN3Z4enoweU1TNzczL01GK3FkcVNML2o1dHJTNEt2cU5ObVRKMEVVY0hwCjdWNklENnFaSEVxZXdKQjl2cmhPdGFlc05TMFdhVWwwUFU4d3pWaFVUWUlEMllFTU8rOXFsZEdVQVlWTmo3cVIKMUdXVVNVZVVIUWJqNEViMHg4VGhjcDNPYi9oZUNQWWZ1Rno5MVRWUUR5enRxaDZtUDQvNXFZaW1QeklkaFh5LwpIdDltVmZ3M0tVemlzMURtNk9VQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDU2pyZG5Xb0N1WTA4c3pqVU5BSHdnbnFtMDgKZlhydkxtVkxzSHZiZHFSTmorUTJQMFQvVCtFZFRVWFg5SGNia1JwQU5QNTRkNzRQRmJGbzA0K0dmaTYrTHE5UAoyYlBzZ2o3Mmo4WWx0V0twVHJiNFpKMnhyZXFsWnZ4MVFZNHpZWUhKdDdKZ1RRaU4xQ2JjaFZLR0V6K09nQ3ZTClZGMWE2OEJJajlUMFFDNXgzTTJncHdDa1JMOHArbXkzbkp0Z281Q0JHanhGU2ZHNnN3M0ZMRXdlRHQyc2dOc1UKV2hpQWZGQmtPdUl2OENmMmlwMUZYQ2toWjJxTXdYanU4UzFFc3Z3bUcrSy9vd3NiOUFLZG5TaVRQVXJSQWdGbwpsVjBrSGVaK1FpSG5wK0t3a1NpbkoyMVpXRUFMVG5GRjBCR3hYMDhpU1cwM25Kcy9XemRFdTVFWWhUYz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=" }, "platformVersion":"eks.1", "tags":{ "user":"user" } } }
Cleaning or Deleting the EFS/NFS
Your NFS/EFS is a partially reusable resource. For the EFS you created, you have the following options:
- Leave the existing EFS folder structure intact. A new installation will use a parallel structure. No action needs to be taken.
- Delete and re-create the folder structure during a new installation. The procedure is discussed below.
- Delete the EFS instance completely. The procedure is discussed below.
To delete the folder structure:
- Log on to the bastion host.
- Unmount the EFS file system by running the following command:
sudo umount -f /mnt/efs
- As a sudo user, open the file /etc/fstab in a text editor.
- Locate the following line:
fs-5df66605.efs.eu-central-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0
- Uncomment the line and then save the file.
- Mount EFS to the bastion by running the following command:
sudo mount -a
- Change your current working directory to the mount point, in our case /mnt/efs.
- Delete the whole folder structure by running the following command:
sudo rm -Rf <parent folder from AWS worksheet>
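As a concrete illustration, the sequence below shows the last two steps with a hypothetical parent folder named arcsight-volumes; the actual folder name comes from your AWS worksheet, and the df check simply confirms the EFS is mounted before anything is deleted:
cd /mnt/efs
df -h .                       # confirm the EFS file system is mounted at this path
sudo rm -Rf arcsight-volumes  # hypothetical parent folder name; substitute the value from your AWS worksheet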
To delete the EFS instance (not required for re-installing the OMT bootstrap):
- Delete the mount targets by running the following command on each configured mount target (a loop over all mount targets is sketched after this procedure):
aws efs delete-mount-target \
--mount-target-id <mount target id from AWS worksheet>
- Verify the deletion by running the following command:
aws efs describe-mount-targets \
--file-system-id <filesystem Id from AWS worksheet>
Example output:
{ "MountTargets":[ ] }
- Delete the filesystem by running the command:
aws efs delete-file-system --file-system-id <filesystem Id from AWS worksheet>
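If the file system has several mount targets, the following is a minimal sketch that removes all of them in one pass and then deletes the file system; it reuses the describe commands above and assumes jq is available on the bastion:
FS_ID="<filesystem Id from AWS worksheet>"
# Delete every mount target attached to the file system
for MT in $(aws efs describe-mount-targets --file-system-id "$FS_ID" \
  | jq -r '.MountTargets[].MountTargetId'); do
  aws efs delete-mount-target --mount-target-id "$MT"
done
# Mount targets must be fully removed before the file system itself can be deleted
sleep 60
aws efs delete-file-system --file-system-id "$FS_ID"
# Describing the file system should now fail, confirming it no longer exists
aws efs describe-file-systems --file-system-id "$FS_ID"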
Next Steps
At this point the filesystem has been deleted. As explained above, some reusable resources will remain.
- The Application Load Balancer, its listeners, and target groups are not dependent on the installation. For a new installation, you will need to add new targets to all target groups.
- The VPC tag marking the EKS cluster has been removed; the required tag kubernetes.io/cluster/<cluster name> has been removed as well. Remember to add it again before a new installation (see the sketch below).
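Re-applying the tag can be done with a single call; the following is a minimal sketch with placeholder IDs, where the value shared is the conventional value for a VPC that may host more than one cluster (use owned for a dedicated VPC):
# Re-apply the cluster tag to the VPC (and, if your setup requires it, to the subnets)
aws ec2 create-tags \
  --resources <vpc id from AWS worksheet> \
  --tags Key=kubernetes.io/cluster/<cluster name>,Value=shared
New targets can be registered with the target groups as sketched in the Auto Scaling group section above.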
You can now perform a clean installation of a new cluster.