We strongly recommend that you delete a cluster-enabled pool only from the master node in the cluster. This allows the cluster information to be automatically updated.
WARNING: Deleting a pool destroys all data on it.
The NSS management tools delete the cluster-enabled pool along with its related objects in eDirectory:
Pool and its volumes from the file system and from NCP
Cluster-named Pool object
Cluster-named Volume object for each of the pool’s volumes
Cluster Resource object for the pool
Virtual server for the cluster resource (NCS:NCP Server object)
When the pool resides on the master node, the cluster information is automatically updated.
When the pool resides on a non-master node, additional steps are required to update the cluster information. A cluster restart might be needed to force the information to be updated.
Use the following procedures to delete a cluster-enabled pool:
If the pool cluster resource is on a non-master node in the cluster, migrate it to the master node. As the root user, open a terminal console, then enter
cluster migrate <resource_name> <master_node_name>
To migrate the resource, the master node must be in the resource’s preferred nodes list. To view or modify the list, see Section 10.10, Configuring Preferred Nodes and Node Failover Order for a Resource.
Use the cluster status command to check the resource status. If the resource is online or comatose, take it offline by using one of the following methods:
As the root user, enter
cluster offline <resource_name>
Use the cluster status command to verify that the resource has a status of Offline before you continue.
In iManager, go to Clusters > My Clusters, then select the cluster. On the Cluster Manager page, select the check box next to the cluster resource, then click Offline.
Refresh the Cluster Manager page to verify that the resource has a status of Offline before you continue.
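The command-line method above can be sketched as a small polling helper. This is a sketch only, not part of the product: the `wait_for_offline` function name, the 60-second timeout, and the assumption that `cluster status` prints each resource name and its state on one line are all illustrative.

```shell
# Sketch: offline a resource, then poll the status output until it
# reports Offline. The status command is passed in as a parameter so
# the polling logic stays separate from the cluster-specific CLI call.
wait_for_offline() {
  res="$1"          # resource name, e.g. POOL1_SERVER
  status_cmd="$2"   # command that prints the cluster status
  for i in $(seq 1 30); do              # ~60 s total with a 2 s interval
    if $status_cmd | grep -w "$res" | grep -qw "Offline"; then
      return 0                          # resource reported Offline
    fi
    sleep 2
  done
  return 1                              # timed out; still not Offline
}

# Intended use on a cluster node (do not run elsewhere):
#   cluster offline POOL1_SERVER
#   wait_for_offline POOL1_SERVER "cluster status" || echo "not offline yet"
```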
Delete the pool on the master node by using NSSMU.
You can alternatively use the Storage plug-in in iManager or the nlvm delete pool <pool_name> command.
In NSSMU, select Pools, then press Enter.
Select the deactive pool, then press Delete.
Select OK to confirm, then press Enter.
In the Tree View in iManager, browse the objects to verify that the following objects were deleted:
Pool object
Volume object (for each volume in the pool)
Pool cluster resource object (from the Cluster container)
Virtual server for the resource (NCS:NCP Server object)
(Optional) Unshare the device:
In iManager, go to Storage > Devices, then select the node where you want the unshared device to reside.
Select the device.
Deselect the Shareable for Clustering check box, then click Apply.
Unsharing a device fails if the device contains a cluster-enabled pool or split-brain detector (SBD) partition. This is unlikely to be an issue if you used a dedicated device (or devices) for the cluster-enabled pool you deleted.
Repeat Step 5.b to Step 5.c for each device that contributes space to the pool.
Use a third-party SAN management tool to assign the devices to only the desired server.
Log in as the root user to the non-master node where the cluster resource currently resides, then open a terminal console.
Use the cluster status command to check the resource status. If the resource is online or comatose, take it offline by entering
cluster offline <resource_name>
Use the cluster status command to verify that the resource has a status of Offline before you continue.
At the command prompt on the non-master node, enter
/opt/novell/ncs/bin/ncs-configd.py -init
Look at the file /var/opt/novell/ncs/resource-priority.conf to verify that it has the same information (REVISION and NUMRESOURCES) as the file on the master node.
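The comparison above can be scripted. A minimal sketch, assuming both files keep REVISION and NUMRESOURCES on simple key/value lines and that you have already copied the master node's file to a local scratch path (the `conf_keys_match` helper name and the scratch path are illustrative):

```shell
# Sketch: check that two copies of resource-priority.conf agree on
# their REVISION and NUMRESOURCES values. Returns 0 when both match.
conf_keys_match() {
  a=$(grep -E 'REVISION|NUMRESOURCES' "$1" | sort)
  b=$(grep -E 'REVISION|NUMRESOURCES' "$2" | sort)
  [ "$a" = "$b" ]
}

# Intended use, after copying the master's file over (e.g. with scp):
#   conf_keys_match /var/opt/novell/ncs/resource-priority.conf \
#                   /tmp/resource-priority.conf.master \
#     && echo "REVISION and NUMRESOURCES match"
```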
Delete the pool on the non-master node by using NSSMU.
You can alternatively use the Storage plug-in in iManager or the nlvm delete pool <pool_name> command.
In NSSMU, select Pools, then press Enter.
Select the pool, then press Delete.
Select OK to confirm, then press Enter.
In the Tree View in iManager, browse the objects to verify that the following objects were deleted:
Pool object
Volume object (for each volume in the pool)
Pool cluster resource object (from the Cluster container)
Virtual server for the resource (NCS:NCP Server object)
On the master node, log in as the root user, open a terminal console, then enter
/opt/novell/ncs/bin/ncs-configd.py -init
Look at the file /var/opt/novell/ncs/resource-priority.conf to verify that it has the same information (REVISION and NUMRESOURCES) as that of the non-master node where you deleted the cluster resource.
In iManager, select Clusters > My Clusters, select the cluster, then select the Cluster Options tab.
Click Properties, select the Priorities tab, then click Apply on the Priorities page.
At a command prompt, enter
cluster view
The cluster view should be consistent across all nodes in the cluster.
Look at the file /var/opt/novell/ncs/resource-priority.conf on the master node to verify that the revision number increased.
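To confirm that the revision number increased, you can capture it before and after the priority update with a one-line helper. A sketch (the `revision_of` function name is illustrative, and it assumes the value appears as the first number on a line containing REVISION):

```shell
# Sketch: print the numeric REVISION value from a resource-priority.conf
revision_of() {
  grep -m1 'REVISION' "$1" | grep -oE '[0-9]+' | head -n1
}

# Intended use:
#   before=$(revision_of /var/opt/novell/ncs/resource-priority.conf)
#   ... apply the priorities in iManager ...
#   after=$(revision_of /var/opt/novell/ncs/resource-priority.conf)
#   [ "$after" -gt "$before" ] && echo "revision increased"
```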
If the revision number increased, skip Step 13 and continue with Step 14.
If the deleted resource was the only resource in the cluster, updating the priorities does not force the revision to update, and a phantom resource might appear in the interface. You must restart Cluster Services to force the update, which also removes the phantom resource.
If the revision number did not automatically update in the previous steps, restart OES Cluster Services by entering the following on one node in the cluster:
cluster restart [seconds]
For seconds, specify a value of 60 or more.
For example:
cluster restart 120
(Optional) Unshare the device:
In iManager, go to Storage > Devices, then select the node where you want the unshared device to reside.
Select the device.
Deselect the Shareable for Clustering check box, then click Apply.
Unsharing a device fails if the device contains a cluster-enabled pool or split-brain detector (SBD) partition. This is unlikely to be an issue if you used a dedicated device (or devices) for the cluster-enabled pool you deleted.
Repeat Step 14.b to Step 14.c for each device that contributes space to the pool.
Use a third-party SAN management tool to assign the devices to only the desired server.