All closed event data files are copied from the primary storage location to the secondary storage location. The original files are retained on primary storage to facilitate faster searches. However, when primary storage disk usage nears a user-defined threshold, the duplicated data files are deleted from primary storage and remain only on secondary storage.
Sentinel supports the following types of storage options:
SAN: The Storage Area Network (SAN) option includes storage that is attached directly to the Sentinel computer. This option provides the best combination of performance, security, and reliability.
CIFS: The Common Internet File System (CIFS) is a native Windows protocol. It is also known as the Server Message Block (SMB) protocol in later implementations. The latest implementation from Microsoft is referred to as SMB 2.
NFS: The NFS protocol requires significant configuration to optimize performance and security, and it is recommended only if you already have a well-established NFS infrastructure in your environment.
If the secondary storage is an NFS server, additional configuration is necessary to ensure that the Sentinel server has the necessary permissions. For more information, see Exporting the Secondary Storage Volume.
WARNING: Only one Sentinel server should be configured to use a particular secondary storage directory (remote share). Configuring the same secondary storage location across multiple Sentinel servers might cause system failure.
The primary storage must use a different partition than the partition that is used for the secondary storage.
The system monitors the disk usage of both primary storage and secondary storage, freeing space on primary storage when it fills up. If both storage locations share the same underlying file system partition, the way in which the partition usage changes as a result of deleting data confuses the system and could result in undesirable behavior.
The event data is copied to secondary storage rather than moved because the two locations are assumed to be on different disk partitions. If they are on the same partition, the storage usage monitoring is confused by how the usage changes and could behave undesirably.
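For example, you can confirm that the two locations are on different partitions by comparing their mount points with the df command. The paths below are assumptions: /var/opt/novell/sentinel/data for primary storage and /secondary-storage as a hypothetical secondary mount point; substitute your own paths.
df -h /var/opt/novell/sentinel/data /secondary-storage
# The "Mounted on" column must show two different partitions for the two paths.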
You can enable and configure secondary storage for raw data and event data stored on the Sentinel server.
Raw data files are compressed and have the .gz extension. A raw data file that is still being written to has the .open extension.
If secondary storage is configured and enabled, Sentinel copies the compressed raw data files to the configured secondary storage location every 15 minutes.
If secondary storage is enabled, Sentinel moves the closed files to secondary storage at midnight UTC every day and also whenever the server starts. These files are compressed in the primary storage location, but the file indexes are compressed before they are moved to secondary storage. If the secondary storage location is not configured or if there is any problem while moving the closed files, Sentinel attempts to move the files to secondary storage every 60 seconds until it succeeds.
The NFS, CIFS/SMB, and SAN must be configured so that Sentinel has read and write permissions.
For CIFS/SMB and NFS, if multiple Sentinel instances are moving the closed partitions to the same secondary storage location, ensure that each Sentinel instance has its own unique directory on that secondary storage location.
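For example, on the shared secondary storage you might create one subdirectory per Sentinel server and configure each server to use only its own directory. The share path and directory names below are hypothetical.
# one directory per Sentinel server on the shared secondary storage
mkdir -p /srv/sentinel-secondary/sentinel01
mkdir -p /srv/sentinel-secondary/sentinel02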
Configuring a SAN/local directory as the secondary storage location is preferable because it provides the best performance, security, and reliability.
From Sentinel Main, click Storage > Events.
From the Data Storage Location section, select SAN (locally mounted) as the secondary storage location.
In the Location field, specify the local directory path or the location on which the storage area network (SAN) is mounted. The specified location must be accessible to the novell user.
The SAN partition must be manually mounted before the location is specified.
Click Test to check if the write permissions for the specified location are available.
Click Save to configure the specified secondary storage location.
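Because the SAN partition must be mounted before you specify the location, you might prepare it as in the following sketch. The device name /dev/sdb1, the ext4 file system, and the /secondary-storage mount point are assumptions; substitute your own values.
mkdir -p /secondary-storage
mount /dev/sdb1 /secondary-storage
chown novell:novell /secondary-storage
# Optionally add an /etc/fstab entry so the partition is mounted again at boot:
echo '/dev/sdb1 /secondary-storage ext4 defaults 0 2' >> /etc/fstab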
From Sentinel Main, click Storage > Events.
In the Data Storage Location section, select CIFS.
Specify the following information:
Server: Specify the IP address or hostname of the computer where the CIFS server, also known as the SMB server, is configured.
Share: Specify the share name of the SMB or CIFS server. The mounted shares are unmounted when the server stops and are mounted again when the server starts. If the configured share unmounts, the Sentinel server detects this and mounts it again.
Username: Specify the user name (if one is assigned) to access the share.
Password: Specify the password (if one is assigned) to access the share.
Mount Options: Specifies the options that are used while mounting the secondary storage location of the SMB or the CIFS server.
You can specify new mount options. For more information about the available CIFS mount options, see the mount.cifs(8) Linux man page.
The default mount options are file_mode=0660,dir_mode=0770.
(Optional) Click Restore Defaults to restore the default mount options.
Click Test to mount the SMB or CIFS server and to check the write permissions on the server.
Click Save to configure the specified secondary storage location.
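If the Test step fails, mounting the share manually outside Sentinel can help isolate the problem. This is only a sketch: the server name, share name, and credentials are placeholders, and the options mirror the defaults listed above.
mount -t cifs //cifs-server/sentinel-share /mnt -o username=shareuser,password=sharepassword,file_mode=0660,dir_mode=0770
touch /mnt/test-file && rm /mnt/test-file   # verify write access
umount /mnt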
NFS servers are fast and efficient, but setting one up correctly requires significant configuration and testing. Using an NFS server as a secondary storage location is recommended only when you have a well-established NFS infrastructure in your environment.
You must configure an NFS server with a storage area large enough to accommodate the planned storage needs for Sentinel secondary storage. You need to export (share) this storage directory so that Sentinel can access it. The procedure to export the secondary storage depends on the technology used by your NFS server.
The following are some examples for several common systems:
Identify a volume on the NFS server with sufficient space to hold the Sentinel secondary storage data.
Create a new directory on that volume to store the Sentinel data. For example, /sentinel-secondary.
Create a novell user and novell group on the NFS server with the same user ID and group ID as the corresponding user/group on the Sentinel server. For example, user ID 1000 and group ID 1000. If this is not possible, see Squashing User IDs.
Change the directory ownership to be owned by novell user and novell group:
chown novell:novell /sentinel-secondary
Change the directory permissions to remove the group and other read and write permissions:
chmod og-rw /sentinel-secondary
Export the directory using the appropriate NFS server configuration. You can use a GUI client or refer to the appropriate settings or commands for various popular servers.
Grant the Sentinel server read and write access to the share. List the specific Sentinel server hostname or IP address to restrict access.
Use root_squash (which maps root users who attempt to access the share to an anonymous user ID) to prevent access by root.
You can also explore additional security and performance options, such as async or using TCP, depending on the capabilities of your NFS server.
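On a Linux NFS server, the user, ownership, and permission steps above might look like the following, assuming that user ID 1000 and group ID 1000 match the novell user and group on the Sentinel server (run id novell on the Sentinel server to confirm the IDs).
groupadd -g 1000 novell
useradd -u 1000 -g novell -M novell
mkdir -p /sentinel-secondary
chown novell:novell /sentinel-secondary
chmod og-rw /sentinel-secondary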
The following table describes an example of exporting the /sentinel-secondary directory from the nfs-server to the sentinel-server.
| System Type | Export configuration |
|---|---|
| Linux | Use YaST or add /sentinel-secondary sentinel-server(rw,root_squash) to the /etc/exports file. |
| Solaris | Add /usr/bin/share -F nfs -o sec=sys,rw=sentinel-server,nosuid /sentinel-secondary to the /etc/dfs/dfstab file. |
| HP-UX | Add /sentinel-secondary -access=sentinel-server to the /etc/exports file. |
| NetApp | Add /sentinel-secondary -nosuid,sec=sys,rw=sentinel-server to the /etc/exports file. |
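On a Linux NFS server, after adding the entry to /etc/exports, you typically reload and verify the export table, for example:
exportfs -ra
exportfs -v
# /sentinel-secondary should be listed as exported to sentinel-server with the rw and root_squash options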
In certain circumstances, it is not possible or desirable to create new user IDs on the NFS server that match the user IDs in use by the Sentinel server. The NFS protocol uses the user ID as an important component for granting permissions to read and write the data during the export. Most NFS servers do not provide flexible and specific ways to re-map user IDs used on the Sentinel system to different user IDs on the NFS server.
An alternative is to map all source user IDs, that is, any user ID that attempts to access the NFS export, to an anonymous user ID specified by the NFS server. This reduces security because it allows any user to read or write the Sentinel data on the export (subject to the IP-based access permissions of the export). This is called squashing. In most cases, only the root user is re-mapped and other users are not; in this scenario, you need to re-map the novell user and all other users as well.
The following table describes an example of re-mapping the novell user with ID 1000 on the Sentinel server to a local user with ID 2000 on the NFS server, who must have permission to the /sentinel-secondary directory.
| System Type | Export configuration |
|---|---|
| Linux | Use YaST or add /sentinel-secondary sentinel-server(rw,all_squash,anonuid=2000) to the /etc/exports file. |
| Solaris | Add /usr/bin/share -F nfs -o sec=sys,rw=sentinel-server,anon=2000,nosuid /sentinel-secondary to the /etc/dfs/dfstab file. If user ID 1000 is already in use on the NFS server, this might not work; in that case, use sec=none. |
| HP-UX | Add /sentinel-secondary -access=sentinel-server,anon=2000 to the /etc/exports file. If user ID 1000 is already in use on the NFS server, this might not work. |
| NetApp | Add /sentinel-secondary -nosuid,sec=none,anon=2000,rw=sentinel-server to the /etc/exports file. |
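A sketch of the squashed configuration on a Linux NFS server follows. The local user name nfsanon is hypothetical; only its user ID (2000) matters, because the anonuid option maps all client users to that ID.
useradd -u 2000 -M nfsanon
chown nfsanon /sentinel-secondary
echo '/sentinel-secondary sentinel-server(rw,all_squash,anonuid=2000)' >> /etc/exports
exportfs -ra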
You can test the NFS export outside of Sentinel by using the standard Linux mount command to mount the export on the Sentinel server. To do so, log in to the Sentinel server as the root user and enter the following command:
mount -t nfs nfs-server:/sentinel-secondary /mnt
The above command mounts the export on the /mnt directory. You can see the mount in the list by re-issuing the mount command without options. If root access is squashed on the export, you might not be able to perform file operations as the root user; instead, switch to the novell user (su novell) to perform them. Run the umount /mnt command before you attempt to set up the secondary storage within Sentinel.
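A complete manual test might look like the following, assuming the export and the novell user described above; the test file name is arbitrary.
mount -t nfs nfs-server:/sentinel-secondary /mnt
su novell
touch /mnt/test-file   # verify write access as the novell user
rm /mnt/test-file
exit
umount /mnt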
For more information about NFS security recommendations, see Section 3.0, Security Considerations.
Configure the secondary storage as follows:
From Sentinel Main, click Storage > Events.
In the Data Storage Location section, select the NFS option.
Specify the following information:
Server: Specify the IP address or hostname of the computer where the NFS server is configured.
Share: Specify the share name of the NFS server.
The mounted shares are unmounted when the server stops and are mounted again when the server starts. If the configured share unmounts, the Sentinel server detects this and mounts it again.
Mount Options: Specifies the options that are used while mounting the secondary storage location of the NFS server.
You can also specify new mount options. For more information about the available NFS mount options, see the NFS documentation.
The default mount options are soft,proto=tcp,retrans=1,timeo=60.
(Optional) Click Restore Defaults to restore the default mount options.
Click Test to verify the configuration of the NFS server and to check the write permissions on the server.
This procedure tests a subset of the settings that are necessary for the NFS server and client.
Click Save to configure the specified secondary storage location.
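If the Test step fails, mounting the export manually with the same options can help determine whether the problem lies in the NFS configuration or in Sentinel. The server and share names below are placeholders, and the options mirror the defaults listed above.
mount -t nfs -o soft,proto=tcp,retrans=1,timeo=60 nfs-server:/sentinel-secondary /mnt
umount /mnt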
From Sentinel Main, click Storage > Events.
In the Data Storage Location section, click Change Location. This option is displayed only if a secondary storage location is already configured.
Select the option to disable data collection.
You can select this option to avoid filling the primary storage before Sentinel moves the data to the new location. If this option is not selected and if the primary storage is filled before the new data storage location is configured, Sentinel deletes the oldest data to make space for the incoming data.
Configure the new data storage location.
For more information about configuring the NFS, SMB/CIFS, or local/SAN secondary storage locations, see Configuring Secondary Storage.
Click Save to save the changes and configure the new secondary storage location.
Manually copy the files from the old secondary storage location to the new secondary storage location.
After copying the files, select Copy Done to start data storage at the new location.
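For the manual copy step, use a tool that preserves ownership, permissions, and timestamps. A sketch using rsync with hypothetical mount points for the old and new locations:
rsync -a /old-secondary-storage/ /new-secondary-storage/
# -a preserves ownership, permissions, and timestamps; the trailing slashes copy the directory contents rather than the directory itself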