You can re-create the resource objects for a pool by deleting the resource-related objects, creating new Pool and Volume objects, and then cluster-enabling the existing pool. This is the same as disabling and re-enabling clustering for a pool. We strongly recommend that you delete the resource for a pool from the master node in the cluster.
The Cluster Options page of the Clusters plug-in for iManager provides a Delete option that automatically deletes the resource-related objects in eDirectory and updates the cluster information:
Cluster-named Pool object
Cluster-named Volume object for each of the pool’s volumes
Cluster Resource object for the pool
Virtual server for the cluster resource (NCS:NCP Server object)
This deletes the resource-related objects, but not the storage area they represent. If you plan to create a new resource with the same name as the deleted one, you must wait until eDirectory has synchronized the removal of all objects related to the deleted resource throughout the tree before you create the new resource.
Ensure that you offline the cluster resource before attempting to delete either the cluster resource or the clustered pool. For example, if you want to unshare a pool, offline the cluster resource for the pool before you mark the device as Not Shareable for Clustering. Then you can delete the eDirectory object for the cluster resource.
WARNING: If you attempt to delete a cluster resource without first offlining it, deletion errors occur, and the data associated with the clustered pool is not recoverable.
All resource configuration must be done from the master node. On the Cluster Options page in iManager, you are automatically connected to the Cluster object, which is associated with the master node. On the Storage > Pools page in iManager, connect to the Cluster object, not to the individual servers. Run NSSMU only on the master node.
Use the following procedure to delete and re-create a cluster resource for a pool:
If the resource is on a non-master node in the cluster, migrate it to the master node.
As the root user, open a terminal console, then enter
cluster migrate <resource_name> <master_node_name>
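For example, for a hypothetical resource named POOL1_SERVER and a master node named svr1, the command would be

cluster migrate POOL1_SERVER svr1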
The master node must be in the resource’s preferred nodes list. To view or modify the list, see Section 10.10, Configuring Preferred Nodes and Node Failover Order for a Resource.
Use the cluster status command to check the resource status. If the resource is online or comatose, take it offline by using one of the following methods:
Enter the following at the command prompt as the root user:
cluster offline <resource_name>
Use the cluster status command to verify that the resource has a status of Offline before you continue.
In iManager, go to Clusters > My Clusters, then select the cluster. On the Cluster Manager page, select the check box next to the cluster resource, then click Offline.
Refresh the Cluster Manager page to verify that the resource has a status of Offline before you continue.
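For example, using the command-line method with a hypothetical resource named POOL1_SERVER:

cluster offline POOL1_SERVER
cluster status

Continue only when the cluster status output reports the resource as Offline.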
In iManager, use the Clusters plug-in to delete the cluster resource.
Select Clusters > My Clusters, then select the cluster.
Select the Cluster Options tab.
Select the check box next to the resource, then click Delete.
When you are prompted to confirm the deletion, click OK to continue, or click Cancel to abort the deletion.
In the Tree View in iManager, browse to verify that the Cluster Resource object and its related objects were removed from eDirectory.
If necessary, you can manually delete the objects. In iManager, go to Directory Administration > Delete Objects, select the objects, then click OK.
Use the Update eDirectory function to re-create Storage objects for the pool and its volumes.
These objects are needed by OES Cluster Services when you re-create the resource.
In iManager, select Storage > Pools, then select the master node if you plan to re-create the cluster resource, or select the node where you want the pool to reside if it will be used as a locally available pool.
Select the pool, then click Activate.
Select the pool, then click Update eDirectory.
This creates a Pool object in eDirectory with a name format of <server_name>_<pool_name>_POOL.
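For example, for a hypothetical server named svr1 and a pool named POOL1, the resulting Pool object is named svr1_POOL1_POOL.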
Select Storage > Volumes. The server should still be selected.
Select the volume, then click Mount.
Select the volume, then click Update eDirectory.
This creates a Volume object in eDirectory with a name format of <server_name>_<volume_name>.
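For example, for the same hypothetical server svr1 and a volume named VOL1, the resulting Volume object is named svr1_VOL1.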
Repeat Step 4.e through Step 4.f for each volume in the pool.
In the Tree View in iManager, browse to verify that the Pool object and Volume object were created.
Use the Clusters plug-in to cluster-enable the pool:
For detailed instructions, see Step 5 through Step 18 in Section 12.5, Cluster-Enabling an Existing NSS Pool and Its Volumes.
If you are re-creating the cluster resource with the same name, you must wait until eDirectory synchronizes all of the objects in the tree related to the deleted resource before you create the new one.
In Roles and Tasks, select Clusters > My Clusters, then select the cluster.
Select the Cluster Options tab.
On the Cluster Options page, click the New link in the Cluster Objects toolbar.
On the Resource Type page, select the Pool radio button, then click Next.
On the Cluster Pool Information page, specify the following information, then click Next:
Pool name
Virtual server name
IP address
Advertising protocols (NCP, CIFS)
If you enable CIFS, specify the CIFS Server name.
Deselect the Online Resource after Creation check box.
Select the Define Additional Properties check box.
On the Resource Policies page, configure the resource policies for the start, failover, and failback modes, then click Next.
See Configuring the Start, Failover, and Failback Modes for Cluster Resources.
On the Resource Preferred Nodes page, assign the preferred nodes to use for the resource, then click Finish.
See Configuring Preferred Nodes and Node Failover Order for a Resource.
The pool cluster resource appears in the Cluster Objects list on the Cluster Options page, with a name such as POOL1_SERVER.
(Optional) View the resource properties, enable monitoring, and configure the monitoring script.
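A default monitor script is typically generated along with the load and unload scripts. As a rough sketch only (the exact contents depend on your configuration; the pool name POOL1, volume name VOL1, and IP address 10.10.10.44 here are hypothetical), a monitor script for a pool resource that uses the standard OES Cluster Services script functions looks similar to the following:

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
# Verify that the pool is active and its file system is mounted
exit_on_error status_fs /dev/pool/POOL1 /opt/novell/nss/mnt/.pools/POOL1 nsspool
# Verify that the resource's secondary IP address is still bound
exit_on_error status_secondary_ipaddress 10.10.10.44
# Verify that the NSS volume is mounted for NCP
exit_on_error ncpcon volume VOL1
exit 0

Use the script that the Clusters plug-in generates for your resource as the baseline, and compare any edits against it before you bring the resource online.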
Bring the resource online. Select the Cluster Manager tab, select the check box next to the resource, then click Online.
The pool is activated and its volumes are mounted on the primary preferred node that is configured for the pool cluster resource.
If the resource goes comatose, take it offline, verify that the pool is deactivated on the local server, then bring the resource online again.
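As an alternative to the iManager method, you can bring the resource online from a terminal console as the root user (again using the hypothetical resource name POOL1_SERVER):

cluster online POOL1_SERVER
cluster status

Use the cluster status output to confirm that the resource is running on its primary preferred node.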