39.2 Shared Storage Setup

Set up your shared storage and ensure that you can mount it on each cluster node. If you are using Fibre Channel and a SAN, you might need to provide physical connections as well as additional configuration. Sentinel uses this shared storage to store the databases and event data. Ensure that the shared storage is sized appropriately based on the expected event rate and your data retention policies.

Consider the following example of a shared storage setup:

A typical implementation might use a fast SAN attached via Fibre Channel to all the cluster nodes, with a large RAID array to store the local event data. A separate NAS or iSCSI node might be used for the slower secondary storage. As long as the cluster node can mount the primary storage as a normal block device, the solution can use it. The secondary storage can also be mounted as a block device, or it can be an NFS or CIFS volume.
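
Such a setup might be mounted as follows (a sketch only; the device node, the NFS server name, the export path, and the secondary mount point are illustrative and depend on your environment):

    # Primary storage: the SAN LUN attached as a normal block device
    mount /dev/<SHARED1> /var/opt/novell
    # Secondary storage: an NFS export from the NAS node (hypothetical server and path)
    mount -t nfs nas01:/export/sentinel /mnt/secondary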

NOTE: Configure your shared storage and test mounting it on each cluster node. However, the cluster configuration handles the actual mounting of the storage.

Perform the following procedure to create iSCSI Targets hosted by a SLES virtual machine:

  1. Connect to storage03, the virtual machine you created during Initial Setup, and start a console session.

  2. Run the following command to create a blank file of the desired size for the Sentinel primary storage:

    dd if=/dev/zero of=/localdata count=<block count> bs=<block size>

    For example, run the following command to create a file of approximately 20 GB filled with zeros copied from the /dev/zero pseudo-device:

    dd if=/dev/zero of=/localdata count=20480000 bs=1024
  3. Repeat step 2 to create a file for the secondary storage in the same way.

    For example, run the following command for the secondary storage:

    dd if=/dev/zero of=/networkdata count=20480000 bs=1024

NOTE: For this example, you created two files of the same size and performance characteristics to represent the two disks. For a production deployment, you can place the primary storage on a fast SAN and the secondary storage on a slower iSCSI, NFS, or CIFS volume.
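
As a quick sanity check, you can confirm that both backing files exist and have the expected size (an optional step; ls is standard on SLES):

    # Verify the size of the primary and secondary storage files
    ls -lh /localdata /networkdata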

Perform the steps provided in the following sections to configure iSCSI target and initiator devices:

39.2.1 Configuring iSCSI Targets

Perform the following procedure to configure the /localdata and /networkdata files as iSCSI Targets.

For more information about configuring iSCSI targets, see Creating iSCSI Targets with YaST in the SUSE documentation.

  1. Run YaST from the command line (or use the Graphical User Interface, if preferred): /sbin/yast

  2. Select Network Devices > Network Settings.

  3. Ensure that the Overview tab is selected.

  4. Select the secondary NIC from the displayed list, then tab forward to Edit and press Enter.

  5. On the Address tab, assign a static IP address of 10.0.0.3. This will be the internal iSCSI communications IP address.

  6. Click Next, then click OK.

  7. (Conditional) On the main screen:

    • If you are using SLES 12 SP1 and later, select Network Services > iSCSI LIO Target.

      NOTE: If you do not find this option, go to Software > Software Management > iSCSI LIO Server and install the iSCSI LIO package.

  8. (Conditional) If prompted, install the required software:

    • For SLES 12 SP1 and later: iscsiliotarget RPM

  9. (Conditional) If you are using SLES 12, perform the following steps on all the nodes in the cluster:

    1. Run the following command to display the contents of the file that contains the iSCSI initiator name:

      cat /etc/iscsi/initiatorname.iscsi

    2. Note the initiator name, which will be used when configuring the iSCSI initiators.

      For example:

      InitiatorName=iqn.1996-04.de.suse:01:441d6988994

    These initiator names are required later, when you configure the iSCSI target client setup in Step 17.

  10. Click Service, then select the When Booting option to ensure that the service starts when the operating system boots.

  11. Select the Global tab, deselect No Authentication to enable authentication, and then specify the necessary credentials for incoming and outgoing authentication.

    The No Authentication option is enabled by default. However, you should enable authentication to ensure that the configuration is secure.

    NOTE: Open Text recommends that you use different passwords for the iSCSI target and the iSCSI initiator.

  12. Click Targets and then click Add to add a new target.

    The iSCSI Target will auto-generate an ID and then present an empty list of LUNs (drives) that are available.

  13. Click Add to add a new LUN.

  14. Leave the LUN number as 0, then browse in the Path dialog (under Type=fileio) and select the /localdata file that you created. If you have a dedicated disk for storage, specify a block device, such as /dev/sdc.

  15. Repeat steps 13 and 14 to add LUN 1, this time selecting the /networkdata file.

  16. Leave the other options at their defaults, and click Next.

  17. (Conditional) If you are using SLES 12, click Add. When prompted for the Client Name, specify the initiator name you noted in Step 9. Repeat this step to add all the client names by specifying their initiator names.

    The list of client names will be displayed in the Client List.

    You do not have to add the client initiator name for SLES 15 and later.

  18. (Conditional) If you have enabled authentication in Step 11, provide authentication credentials.

    Select a client, select Edit Auth > Incoming Authentication, and then specify the user name and password. Repeat this for all the clients.

  19. Click Next again to accept the default authentication options, then click Finish to exit the configuration. Accept if you are prompted to restart iSCSI.

  20. Exit YaST.

NOTE: This procedure exposes two iSCSI Targets on the server at IP address 10.0.0.3. On each cluster node, verify that it can mount the shared storage devices.
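
You can also verify the target configuration from the command line on storage03 (an optional sketch; it assumes the targetcli utility, which manages the LIO target on SLES 12 and later, is installed):

    # List the configured LIO targets, LUNs, and network portals
    targetcli ls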

39.2.2 Configuring iSCSI Initiators

Perform the following procedure to configure the iSCSI initiators and format the iSCSI devices.

For more information about configuring iSCSI initiators, see Configuring the iSCSI Initiator in the SUSE documentation.
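
For reference, steps 10 through 13 below correspond roughly to the following open-iscsi commands, which can be useful for scripting or troubleshooting (a sketch only; CHAP authentication options are omitted, and the target IP address is the one configured in the previous section):

    # Discover the targets exposed by the iSCSI server
    iscsiadm -m discovery -t sendtargets -p 10.0.0.3
    # Log in to the discovered targets
    iscsiadm -m node -p 10.0.0.3 --login
    # Confirm that the session is active
    iscsiadm -m session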

  1. Connect to one of the cluster nodes (node1) and start YaST.

  2. Select Network Devices > Network Settings.

  3. Ensure that the Overview tab is selected.

  4. Select the secondary NIC from the displayed list, then tab forward to Edit and press Enter.

  5. On the Address tab, assign a static IP address of 10.0.0.1. This will be the internal iSCSI communications IP address.

  6. Select Next, then click OK.

  7. Click Network Services > iSCSI Initiator.

  8. If prompted, install the required software (iscsiclient RPM).

  9. Click Service, select When Booting to ensure the iSCSI service is started on boot.

  10. Click Discovered Targets, and select Discovery.

  11. Specify the iSCSI Target IP address (10.0.0.3).

    (Conditional) If you have enabled authentication in Step 11 of Configuring iSCSI Targets, deselect No Authentication. In the Outgoing Authentication field, enter the user name and password you configured during the iSCSI target configuration.

    Click Next.

  12. Select the discovered iSCSI Target with the IP address 10.0.0.3 and then select Log In.

  13. Perform the following steps:

    1. Switch to Automatic in the Startup drop-down menu.

    2. (Conditional) If you have enabled authentication, deselect No Authentication.

      The user name and the password you have specified in Step 11 should be displayed in the Outgoing Authentication section. If these credentials are not displayed, enter the credentials in this section.

    3. Click Next.

  14. Switch to the Connected Targets tab to ensure that you are connected to the target.

  15. Exit the configuration. This connects the iSCSI Targets as block devices on the cluster node.

  16. In the YaST main menu, select System > Partitioner.

  17. In the System View, you should see new hard disks (such as /dev/sdb and /dev/sdc) of the following type in the list:

    • In SLES 12 SP1 and later: LIO-ORG-FILEIO

    Tab over to the first one in the list (which should be the primary storage), select that disk, then press Enter.

  18. Select Add to add a new partition to the empty disk. Format the disk as a primary partition, but do not mount it. Ensure that the Do not mount partition option is selected.

  19. Select Next, and then Finish after reviewing the changes that will be made.

    The formatted disk (such as /dev/sdb1) should be ready now. It is referred to as /dev/<SHARED1> in the following steps of this procedure.

  20. Go to the Partitioner again and repeat the partitioning and formatting process (steps 16-19) for /dev/sdc or whichever block device corresponds to the secondary storage. This results in a /dev/sdc1 partition or similar formatted disk (referred to as /dev/<NETWORK1> below). For a command-line sketch of this partitioning, see the example after this procedure.

  21. Exit YaST.

  22. (Conditional) If you are performing a traditional HA installation, create a mount point and test mounting the local partition as follows (the exact device name might depend on the specific implementation):

    # mkdir /var/opt/novell
    # mount /dev/<SHARED1> /var/opt/novell

    You should be able to create files on the new partition and see the files wherever the partition is mounted.

  23. (Conditional) If you are performing a traditional HA installation, to unmount:

    # umount /var/opt/novell
  24. (Conditional) For HA appliance installations, repeat steps 1-15 to ensure that each cluster node can mount the local shared storage. Replace the node IP address in step 5 with a different IP address for each cluster node.

  25. (Conditional) For traditional HA installations, repeat steps 1-15, 22, and 23 to ensure that each cluster node can mount the local shared storage. Replace the node IP address in step 5 with a different IP address for each cluster node.
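
For reference, the partitioning and formatting performed in steps 16 through 19 can also be done from the command line, as in the following sketch (device names such as /dev/sdb depend on your environment, and the ext3 file system type is an assumption; YaST remains the documented method):

    # Create a partition table and one primary partition spanning the disk
    parted --script /dev/sdb mklabel msdos
    parted --script /dev/sdb mkpart primary 1MiB 100%
    # Format the new partition; do not mount it
    mkfs.ext3 /dev/sdb1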