Consider the following guidelines when planning to use NSS in a virtualization environment:
NSS pools and volumes are not supported on the Xen host server in a Xen virtualization environment. You can install NSS on the guest servers from inside the guest server environment, just as you would if the guest servers were physical servers.
When you create a virtual machine, you must assign devices to it. If you plan to use the virtualization guest server as a node in a cluster and you need to be able to fail over cluster resources to different physical servers, you must assign SAN-based physical devices to the virtual machine. You create the NSS pools and volumes from within the guest server.
If you install OES Cluster Services in the host server environment, the cluster resources use shared Linux POSIX volumes, and do not use shared NSS pools.
If you install OES Cluster Services in the guest server environment, the guest server is a node in the cluster. The disk sharing is managed by Cluster Services from within the guest server environment. You can use shared NSS pools as cluster resources that run on the guest server and on other nodes in that cluster.
For information about deployment scenarios using shared NSS pools in clusters in a virtualization environment, see Configuring OES Cluster Services in a Virtualization Environment in the OES 23.4: OES Cluster Services for Linux Administration Guide.
In a Xen virtualization environment, if you need to use RAIDs for device fault tolerance in a high-availability solution, we recommend that you use standard hardware RAID controllers. Hardware RAIDs provide better performance than software RAIDs on the virtualization host server or guest server.
To get the best performance from a software RAID, create a RAID device on the Xen host and present that device to the guest VM. Each of the RAID’s segments must be on a different physical device. It is best to present the entire physical RAID device or a physical partition of the RAID device to the guest VM, rather than a file-backed virtual device.
NSS is not supported in the virtualization host server environment, so NSS software RAIDs cannot be used there. Xen supports using Linux mdadm for software RAIDs on the host server.
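For example, the following is a minimal sketch of creating a RAID 1 device with mdadm on the Xen host and presenting the whole device to the guest VM. The device names (/dev/sdb, /dev/sdc, /dev/md0) and the guest configuration disk entry are placeholders for illustration; adapt them to your environment.

# On the Xen host: create a RAID 1 array from two separate physical disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# In the guest's virtual machine configuration file, present the entire
# md device to the guest as a physical block device:
disk = [ 'phy:/dev/md0,xvda,w' ]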
If you create and manage a software RAID on the guest server in a production environment, make sure that the devices you present to the guest VM for the software RAID are different physical devices on the host server. Using segments from virtual devices that actually reside on the same physical device on the host server slows performance and provides no protection against failed hardware devices. The maximum number of disks that can be presented to a VM is 16 (xvda to xvdp). Xen provides a mechanism to dynamically add and remove drives from a VM, as shown in the sketch below.
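As an illustration of that mechanism, the following sketch uses the xl toolstack on the Xen host to attach, list, and detach a disk on a running guest. The domain name oes-guest and the device names are placeholders, and the exact syntax can vary by Xen toolstack version (older deployments use the equivalent xm commands).

# Attach the host device /dev/sdd to the running guest as xvdc (read/write)
xl block-attach oes-guest 'phy:/dev/sdd,xvdc,w'
# List the block devices currently attached to the guest
xl block-list oes-guest
# Detach the device when it is no longer needed
xl block-detach oes-guest xvdc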
Using NSS software RAIDs in a virtualization guest server environment has not been tested.
If it is available, use your storage vendor’s multipath I/O management solution for the storage subsystem. In this case, the multiple paths are resolved as a single device that you can assign to a virtual machine.
Do not use multipath management tools in the guest environment.
If a storage device has multiple connection paths between the device and the host server that are not otherwise managed by third-party software, use Linux multipathing to resolve the paths into a single multipath device. When assigning the device to a VM, select the device by its multipath device node name (/dev/mapper/mpathN). The guest server operating system is not aware of the underlying multipath management being done on the host. The device appears to the guest server as any other physical block storage device. For information, see Managing Multipath I/O for Devices in the SLES 12: Storage Administration Guide.
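For example, the following is a minimal sketch of a disk entry in the guest's virtual machine configuration that presents a multipath device by its device-mapper node. The mpatha name and the xvdb target are placeholders for illustration.

# In the guest configuration on the Xen host, reference the multipath
# device node instead of one of its individual paths:
disk = [ 'phy:/dev/mapper/mpatha,xvdb,w' ]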
For the best performance on a Xen guest server, NSS pools and volumes should be created on block storage devices that are local SCSI devices, Fibre Channel devices, iSCSI devices, or partitions on those types of devices.
SATA and IDE disks have slower performance because the Xen driver requires special handling to ensure that data writes are committed to the disk in the intended order before they are reported as complete.
OES supports file-backed disk images on virtual machines, but we do not recommend using them for important data because the volume can become corrupt after a power failure or other catastrophic failure. File-backed volumes might be useful for non-critical purposes such as training and sales demonstrations.
WARNING: Data corruption can occur if you use Xen file-backed disk images for NSS volumes on the guest server and a power failure or other catastrophic failure occurs.
The OES kernel has four I/O schedulers available for custom configuration, each offering a different combination of optimizations. The four types of Linux I/O schedulers are the following:
NOOP Scheduler
Deadline Scheduler
Anticipatory Scheduler
Completely Fair Queuing (CFQ) Scheduler
The NOOP scheduler is the simplest of all the I/O schedulers. It merges requests to improve throughput, but otherwise attempts no other performance optimization. All requests go into a single unprioritized first-in, first-out queue for execution. It is ideal for storage environments with extensive caching, and for those with alternate scheduling mechanisms, such as a storage area network with multipath access through a switched interconnect, or virtual machines where the hypervisor provides the I/O back end. It is also a good choice for systems with solid-state storage, where there is no mechanical latency to be managed.
The Deadline scheduler applies a service deadline to each incoming request. This sets a cap on per-request latency and ensures good disk throughput. Service queues are prioritized by deadline expiration, making this a good choice for real-time applications, databases, and other disk-intensive workloads.
The Anticipatory scheduler does exactly as its name implies. It anticipates that a completed I/O request will be followed by additional requests for adjacent blocks. After completing a read or write, it waits a few milliseconds for subsequent nearby requests before moving on to the next queue item. Service queues are prioritized for proximity, following a strategy that can maximize disk throughput at the risk of a slight increase in latency.
The Completely Fair Queuing (CFQ) scheduler provides a good compromise between throughput and latency by treating all competing processes even-handedly. Each process is given a separate request queue and a dedicated time slice of disk access. CFQ provides the minimal worst-case latency on most reads and writes, making it suitable for a wide range of applications, particularly multi-user systems.
For OES on a Xen guest, the default is the NOOP scheduler. To improve I/O scheduling performance, change the default NOOP scheduler to CFQ. Perform the following steps to view and change the I/O scheduler after installing OES:
To view the current scheduler, enter the following command:
cat /sys/block/{DEVICE-NAME}/queue/scheduler
To change the scheduler to CFQ, enter the following command:
echo cfq > /sys/block/{DEVICE-NAME}/queue/scheduler
For example, if your device name is sda, enter the following command to view the scheduler:
cat /sys/block/sda/queue/scheduler
The output is similar to the following, where the scheduler shown in brackets is the active one:
[noop] anticipatory deadline cfq
To change the current NOOP scheduler to CFQ, enter the following command:
echo cfq > /sys/block/sda/queue/scheduler
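To verify the change, view the scheduler again:
cat /sys/block/sda/queue/scheduler
The active scheduler is shown in brackets, so the output should now be similar to the following:
noop anticipatory deadline [cfq]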
You can also apply this optimization globally at boot time. To do so, use either of the following methods:
Add the elevator option to the kernel command line in the GRUB boot loader configuration file (/boot/grub/menu.lst), and then reboot. For example:
kernel /vmlinuz-2.6.16.60-0.46.6-smp root=/dev/disk/by-id/scsi-SATA_WDC_WD2500YS-23_WD-WCANY4424963-part3 vga=0x317 resume=/dev/sda2 splash=silent showopts elevator=cfq
Using YaST2, edit the optional kernel command line parameter under System > Boot Loader for the booting kernel image (or any other kernel image listed), and add elevator=cfq. For more information about editing and using the boot loader configuration, see Configuring the Boot Loader with YaST.
Consider the issues in this section for the OES server running in the Xen guest environment:
The primary virtual disk (the first disk you assign to the virtual machine) is automatically recognized when you install the guest operating system. The other virtual devices must be initialized before any of their space is shown as available for creating a pool. For information, see Section 7.3, Initializing New Virtual Disks on the Guest Server.
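For example, after assigning a new virtual disk to the VM, you can initialize it from within the guest server by using NSSMU or the NLVM command line utility. The following is a minimal sketch using NLVM; the device name sdb and the gpt partitioning format are placeholders, and the available options can vary by OES version.

# On the guest server: list the devices that NLVM and NSS can see
nlvm list devices
# Initialize the new virtual disk so its space is available for creating pools
nlvm init sdb format=gpt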
Some NSS features are not supported in a Xen guest server environment.
Table 7-1 NSS Feature Support in a Guest Server Environment
| NSS Feature | NSS on Virtualized Linux Environment |
|---|---|
| Data shredding | Not supported |
| Multipath I/O | Not applicable; not supported on Linux |
| Software RAIDs | Not tested |