Before you start working with a Cluster Services cluster, you should be familiar with the terms described in this section:
Cluster: A cluster is a group of 2 to 32 servers configured with OES Cluster Services so that data storage locations and applications can transfer from one server to another to provide high availability to users.
Cluster IP Address: The unique static IP address for the cluster.
Server IP Address: Each server in the cluster has its own unique static IP address.
Master Node: The first server that comes up in a cluster is assigned the cluster IP address and becomes the master node. The master node monitors the health of the cluster nodes. It also synchronizes updates about the cluster to eDirectory. If the master node fails, Cluster Services migrates the cluster IP address to another server in the cluster, and that server becomes the master node. For information about how a new master is determined, see Section C.0, Electing a Master Node.
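You can see which node currently holds the master role from any node's command line. A minimal sketch, assuming the standard cluster command; the exact output format varies by release:

    # Display this node's view of cluster membership, including the current master
    cluster view

    # Display the state of the cluster nodes and resources
    cluster status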
Slave Node: Any member node in the cluster that is not currently acting as the master node.
Split-Brain Detector (SBD): A small shared storage device where data is stored to help detect and prevent a split-brain situation from occurring in the cluster. If you use shared storage in the cluster, you must create an SBD for the cluster.
A split brain is a situation where the links between the nodes fail, but the nodes are still running. Without an SBD, each node thinks that the other nodes are dead and that it should take over the resources in the cluster. Each node independently attempts to load the applications and access the data, because it does not know that the other nodes are doing the same thing. Data corruption can occur. The SBD's job is to detect the split-brain situation and allow only one node to take over the cluster operations.
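As an illustration, the sbdutil utility can locate and inspect the cluster's SBD partition. A minimal sketch, assuming the utility's default install path on OES; verify the options against your release:

    # Find the SBD partition that this cluster is using
    /opt/novell/ncs/bin/sbdutil -f

    # View the contents of the SBD partition
    /opt/novell/ncs/bin/sbdutil -v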
Shared Storage: Disks or LUNs attached to nodes in the cluster via SCSI, Fibre Channel, or iSCSI fabric. Only devices that are marked as shareable for clustering can be cluster-enabled.
Cluster Resource: A cluster resource is a single, logical unit of related storage, application, or service elements that can be failed over together between nodes in the cluster. The resource can be brought online or taken offline on one node at a time.
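For example, resources are typically brought online and offline with the cluster command. A minimal sketch; the resource name RES1 and node name NODE2 are hypothetical:

    # Bring the resource online on its most preferred available node
    cluster online RES1

    # Bring the resource online on a specific node
    cluster online RES1 NODE2

    # Take the resource offline
    cluster offline RES1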
Resource IP Address: Each cluster resource in the cluster has its own unique static IP address.
Virtual Server: An abstraction of a cluster resource that provides location-independent access for users to the service or data. The user is not aware of which node is actually hosting the resource. Each cluster resource has a virtual server identity based on its resource IP address. A name for the virtual server can be bound to the resource IP address.
Resource Template: A resource template contains the default load, unload, and monitor scripts and default settings for service or file system cluster resources. Resource templates are available for the following OES services and file systems:
OES DHCP
OES DNS
Generic file system (for LVM-based Linux POSIX volumes)
NSS file system (for NSS pool resources)
Generic IP service
OES iPrint
MySQL
Novell Samba
CIS_Scale_Template
CIS_Template
Personalized templates can also be created. See Section 10.3, Using Cluster Resource Templates.
Service Cluster Resource: An application or OES service that has been cluster-enabled. The application or service is installed on all nodes in the cluster where the resource can be failed over. The cluster resource includes scripts for loading, unloading, and monitoring. The resource can also contain the configuration information for the application or service.
Pool Cluster Resource: A cluster-enabled OES Storage Services (NSS) pool. Typically, the shared pool contains only one NSS volume. The file system must be installed on all nodes in the cluster where the resource can be failed over. The NSS volume is bound to an NCS Virtual Server object (NCS:NCP Server) and to the resource IP address. This provides location-independent access to data on the volume for NCP and OES CIFS clients.
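As an illustration of how a pool resource is brought online, a typical load script activates the pool, mounts the volume, adds the resource IP address as a secondary address, and binds the virtual server name to it. This is a sketch of the common pattern only; the pool, volume, server name, and IP address below are placeholders, not the script generated for your resource:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs

    # Activate the shared pool and mount its volume
    exit_on_error nss /poolact=POOL1
    exit_on_error ncpcon mount VOL1=254

    # Add the resource IP address and bind the virtual server name to it
    exit_on_error add_secondary_ipaddress 10.10.10.44
    exit_on_error ncpcon bind --ncpservername=CLUS1-POOL1-SERVER --ipaddress=10.10.10.44

    exit 0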
Linux POSIX Volume Cluster Resource: A cluster-enabled Linux POSIX volume. The volume is bound to the resource IP address. This provides location-independent access to data on the volume via native Linux protocols such as FTP. You can optionally create an NCS Virtual Server object (NCS:NCP Server) for the resource as described in Section 13.5, Creating a Virtual Server Object for an LVM Volume Group Cluster Resource.
NCP Volume Cluster Resource: An NCP volume (or share) that has been created on top of a cluster-enabled Linux POSIX volume. The NCP volume is re-created by a command in the resource load script whenever the resource is brought online. The NCP volume is bound to an NCS Virtual Server object (NCS:NCP Server) and to the resource IP address. This provides location-independent access to the data on the volume for NCP clients in addition to the native Linux protocols such as FTP. You must create an NCS Virtual Server object (NCS:NCP Server) for the resource as described in Section 13.5, Creating a Virtual Server Object for an LVM Volume Group Cluster Resource.
DST Volume Cluster Resource: A cluster-enabled Dynamic Storage Technology (DST) volume made up of two shared NSS volumes. Both shared volumes are managed in the same cluster resource. The primary volume is bound to an NCS Virtual Server object (NCS:NCP Server) and to the resource IP address. This provides location-independent access to data on the DST volume for NCP and OES CIFS clients.
Cluster Resource Scripts: Each cluster resource has a set of scripts that are run to load, unload, and monitor the resource. The scripts can be personalized by using the Clusters plug-in for iManager.
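For example, the unload script for the pool resource sketched above reverses the load steps, using ignore_error so that cleanup continues even if an individual step fails. Again, the names and address are placeholders:

    #!/bin/bash
    . /opt/novell/ncs/lib/ncsfuncs

    # Unbind the virtual server name and remove the resource IP address
    ignore_error ncpcon unbind --ncpservername=CLUS1-POOL1-SERVER --ipaddress=10.10.10.44
    ignore_error del_secondary_ipaddress 10.10.10.44

    # Deactivate the shared pool
    ignore_error nss /pooldeact=POOL1

    exit 0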
Heartbeat: A signal sent between a slave node and the master node to indicate that the slave node is alive. This helps to detect a node failure.
Quorum: The administrator-specified number of nodes that must be up and running in the cluster before cluster resources can begin loading.
Preferred Nodes: One or more administrator-specified nodes in the cluster that can be used for a resource. The order of nodes in the Preferred Nodes list indicates the failover preference. Any applications that are required for a cluster resource must be installed and configured on the assigned nodes.
Resource Priority: The administrator-specified order in which resources are loaded on a node.
Resource Mutual Exclusion Groups: Administrator-specified groups of resources that are not allowed to run on the same node at the same time. This Clusters plug-in feature is available only for clusters running OES 2 SP3 and later.
Failover: The process of automatically moving cluster resources from a failed node to an assigned functional node so that availability to users is minimally interrupted. Each resource can be failed over to the same or different nodes.
Fan-Out Failover: A configuration of the preferred nodes assigned to cluster resources so that each resource running on a node can fail over to a different secondary node, spreading the failed node's load among the remaining nodes.
Failback: The process of returning cluster resources to their preferred primary node after the situation that caused the failover has been resolved.
Cluster Migrate: Manually triggering a move of a cluster resource from one node to another, for example, to perform maintenance on the old node or to temporarily lighten its load.
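A minimal sketch of a manual migration; the resource name RES1 and node name NODE3 are hypothetical:

    # Move the resource to another node before servicing its current node
    cluster migrate RES1 NODE3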
Leave a Cluster: A node leaves the cluster temporarily for maintenance. The resources on the node are cluster-migrated to other nodes on their preferred nodes lists.
Join a Cluster: A node that has previously left the cluster rejoins the cluster.
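For example, these operations are typically performed from the node itself. A minimal sketch, assuming the standard cluster command:

    # Remove this node from the cluster for maintenance
    cluster leave

    # Rejoin the cluster when maintenance is complete
    cluster join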