Sentinel requires data to be on a storage system that supports random access, such as data on your typical hard drive. It does not support directly interfacing with the data stored on tape.
You can search the raw data by using tools such as egrep or a text editor, but this search might not be sufficient for your requirements. The search mechanism provided by Sentinel on event data is more powerful than these tools.
The high-level approach to configure Sentinel is to retain data for a longer duration so you can perform searches and run reports on the data you regularly need to access, and to copy the data to tape before Sentinel deletes it. To search or run reports on data that was copied to tape, but deleted from Sentinel, copy the data from the tape back to Sentinel.
There are two types of data in Sentinel: raw data and event data.
If you want to perform searches or reports on the data, copy both the raw data and the event data to tape so that you can copy both sets of data back into Sentinel when the data is needed. If you want to store data only to comply with legal requirements, copy only the raw data to the tape.
Events should be moved to secondary storage regularly. The following types of data can be backed up in Sentinel:
Configuration Data: This option backs up only configuration data, not event or raw data. It is faster because it involves a small amount of data: all of the installation directories except the data directory.
Data: This option backs up all the data in the primary storage and secondary storage directories. It takes longer to finish.
Secondary storage directories can be located on a remote computer.
Best practices for data backup include the following:
Periodically export all the Event Source Management configurations and save them. When the environment is relatively stable, you can generate a full Event Source Management export including the entire tree of the Event Source Management components. This action captures the plug-ins and the configuration of each node. You must back up the resulting .zip file and move it to secondary storage.
If changes such as updating plug-ins or adding nodes are made to Event Source Management later, you must export the configuration and save it again.
Back up the entire installation directory so there is no risk of manual mistakes and the process is quicker.
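The backup of the installation directory can be sketched as a small shell function. The paths `/opt/novell/sentinel` and `/backup` are assumptions for illustration; adjust them for your system. The `--exclude='data'` flag keeps the large data directory out, which corresponds to the configuration-only backup described above; drop it to capture everything.

```shell
#!/bin/sh
# Sketch: configuration-only backup of the Sentinel installation directory.
# The source and target paths are assumptions; adjust for your installation.
backup_sentinel_config() {
    src="$1"      # Sentinel installation directory, e.g. /opt/novell/sentinel
    dest="$2"     # backup target directory, e.g. /backup
    stamp=$(date +%Y%m%d_%H%M%S)
    mkdir -p "$dest"
    # Exclude the data directory so only configuration is captured;
    # remove --exclude='data' to back up everything.
    tar -czf "$dest/sentinel-config-$stamp.tar.gz" \
        --exclude='data' -C "$(dirname "$src")" "$(basename "$src")"
    echo "$dest/sentinel-config-$stamp.tar.gz"
}

# Example: backup_sentinel_config /opt/novell/sentinel /backup
```

Backing up at the directory level like this avoids hand-picking files and the manual mistakes that come with it.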
You should configure primary and secondary storage space to store data before the data is deleted from the Sentinel server. When configuring the storage space, ensure that your storage system is not 100% utilized, to avoid undesirable behaviors such as data corruption. Additionally, you should have extra space in your secondary storage to copy data from tape back into Sentinel. You can do this by decreasing the archive utilization setting.
You can configure the duration for the data to remain on the disk before it is deleted. If your hard drive storage space is not sufficient to store data long enough to meet your legal requirements, you can use tape storage to store data beyond the specified duration.
You must configure data retention policies so that the data that you want to search and report on is retained within the Sentinel server until you no longer need it. Additionally, a data retention policy should ensure that Sentinel is not prematurely deleting the data because of storage utilization limits. If the storage utilization limit is exceeded and you notice that data is being prematurely deleted, change the data retention policy to expand the data storage space.
You can set up a process to copy raw data and event data to tape, depending on the data that you need. The following sections describe how each type of data is stored in Sentinel so that you can set up copy operations to copy the data out of Sentinel onto tape.
Raw data partitions are individual files, created every hour. Raw data files are compressed and have the .gz extension.
The directory hierarchy in which the raw data files are placed is organized by the event source and the date of the raw data. You can use this hierarchy to periodically copy a batch of raw data files to tape. For more information on raw data directory hierarchy, see Table 13-1, Raw Data Directory Structure.
You cannot copy files that are in the process of being compressed. You must wait until the raw data files are compressed and moved to secondary storage before copying them to tape. The presence of a .log file with the same name as the .gz file indicates that the file is still in the process of being compressed. You must also ensure that the raw data files are copied to tape before the interval configured in the Raw Data Retention policy expires, so that the data is not lost.
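The batch copy of closed raw data files can be sketched as follows. The directory layout, the staging directory (standing in for the tape copy), and the assumption that the in-progress marker is a .log file with the same base name as the .gz file are all illustrative; adapt them to your environment.

```shell
#!/bin/sh
# Sketch: stage closed (.gz) raw data files for a tape copy, skipping any file
# that is still being compressed. The .log-naming convention assumed here is
# that foo.gz is in progress while foo.log exists alongside it.
stage_raw_data() {
    src="$1"    # secondary-storage raw data directory tree
    stage="$2"  # staging directory that feeds the tape copy
    mkdir -p "$stage"
    find "$src" -name '*.gz' | while read -r gz; do
        # A sibling .log with the same base name means compression is in progress.
        if [ -e "${gz%.gz}.log" ]; then
            continue
        fi
        # Flattens the hierarchy for simplicity; preserve it if your tape
        # tooling needs the event-source/date directory structure.
        cp "$gz" "$stage/"
    done
}
```

Run such a job on a schedule shorter than the Raw Data Retention interval so no file is deleted before it has been staged.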
Event data partitions are created every 24 hours. Event data is stored in the data/eventdata directory with subdirectory names prefixed with the year, month, and day when the partition was created (yyyymmdd). For example, the path to a complete event data partition, relative to the installation directory, is data/eventdata/20090101_408E7E50-C02E-4325-B7C5-2B9FE4853476. You can use this hierarchy to know when a partition is closed. Subdirectories whose date is at least 48 hours old should be in the closed state.
For more information about the event data directory hierarchy, see Table 13-3, Event Data Directory Structure.
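The 48-hour rule above can be sketched as a small check that lists partitions whose yyyymmdd prefix is old enough to be in the closed state. The GNU `date -d` syntax and the eventdata path are assumptions for illustration.

```shell
#!/bin/sh
# Sketch: list event data partitions that should be in the closed state,
# i.e. whose yyyymmdd directory prefix is at least 48 hours old.
closed_partitions() {
    eventdata="$1"                          # e.g. data/eventdata
    cutoff=$(date -d '2 days ago' +%Y%m%d)  # GNU date assumed
    for dir in "$eventdata"/*_*; do
        [ -d "$dir" ] || continue
        day=$(basename "$dir" | cut -c1-8)
        # yyyymmdd compares correctly as a plain integer.
        if [ "$day" -le "$cutoff" ] 2>/dev/null; then
            echo "$dir"
        fi
    done
}

# Example: closed_partitions /var/opt/novell/sentinel/data/eventdata
```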
You should wait until event data partitions have been copied to secondary storage before copying them to tape. Before you copy, ensure that the directory is not currently being copied from primary storage. To do this, see if there is a primary storage directory partition of the same name. If the corresponding primary storage directory partition is not present, the secondary storage directory partition is not being copied. If the corresponding primary storage directory partition is still present, make sure that all of the files in the primary storage directory partition are also in the secondary storage directory partition and that they are all of the same size. If they are all present and of the same size, it is highly likely that they are not currently being copied.
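The two checks just described can be sketched as a shell function. The directory arguments and partition name are illustrative; point them at your actual primary and secondary eventdata directories.

```shell
#!/bin/sh
# Sketch: decide whether a secondary-storage partition is safe to copy to tape.
# Safe when either (a) no same-named primary partition exists, or (b) every
# primary file also exists in the secondary partition with the same size.
safe_to_copy() {
    primary="$1"    # primary storage eventdata directory
    secondary="$2"  # secondary storage eventdata directory
    part="$3"       # partition directory name
    # No same-named primary partition: the copy has already finished.
    [ -d "$primary/$part" ] || return 0
    for f in "$primary/$part"/*; do
        [ -e "$f" ] || continue
        name=$(basename "$f")
        [ -f "$secondary/$part/$name" ] || return 1
        p_size=$(wc -c < "$f")
        s_size=$(wc -c < "$secondary/$part/$name")
        [ "$p_size" -eq "$s_size" ] || return 1
    done
    return 0
}
```

As the text notes, matching names and sizes make an in-progress copy highly unlikely, but this check is a heuristic rather than a guarantee.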
The event data restoration feature enables you to restore old or deleted event data. You can also restore the data from other systems. You can select and restore the event partitions in the Sentinel Main interface. You can also control when these restored event partitions expire.
NOTE: The Data Restoration feature is a licensed feature. This feature is not available with the free or trial licenses. For more information, see Understanding License Information in the Sentinel Installation and Configuration Guide.
To enable event data for restoration, you must copy the event data directories that you want to restore to one of the following locations:
For primary storage, you can copy the event data directories to /var/opt/novell/sentinel/data/eventdata/events/.
For secondary storage, you can copy the event data directories to /var/opt/novell/sentinel/data/archive_remote/<sentinel_server_UUID>/eventdata_archive.
To determine the Sentinel server UUID, perform a search in the Web interface. In the search results, click All for any local event. The value of the SentinelID attribute is the UUID of your Sentinel server.
From Sentinel Main, click Storage > Events.
The Data Restoration section does not initially display any data.
Click Find Data to search and display all event data partitions available for restoration.
The Data Restoration table chronologically lists all the event data that can be restored. The table displays the date of the event data, the name of the event directory, and the location. The Location column indicates whether the event directory was found in the primary storage directory of Sentinel or in the configured secondary storage directory.
Continue with Restoring Event Data to restore the event data.
Select the check box in the Restore column next to the partition that you want to restore.
The Restore Data button is enabled when the Data Restoration section is populated with the restorable data.
Click Apply to restore the selected partitions.
The selected events are moved to the Restored Data section. It might take approximately 30 seconds for the Restored Data section to reflect the restored event partitions.
(Optional) Click Refresh to search for more restorable data.
To configure the restored event data to expire according to data retention policy, continue with Configuring Restored Event Data to Expire.
There may be a scenario where you cannot restore the secondary storage data if the novell user ID (UID) and group ID (GID) are not the same on both the source (the server that has the secondary storage data) and the destination (the server where the secondary storage data is being restored). In such a scenario, you need to unsquash and squash the squash file system.
To unsquash and squash the file system:
Copy the partition that you want to restore to the following location on the Sentinel server where you want to restore the data:
/var/opt/novell/sentinel/data/archive_remote/<sentinel_server_UUID>/eventdata_archive/<partition_ID>
Log in to the Sentinel server where you want to restore the data, as the root user.
Change to the directory where you copied the partition that you want to restore:
cd /var/opt/novell/sentinel/data/archive_remote/<sentinel_server_UUID>/eventdata_archive/<partition_ID>
Unsquash the index.sqfs file:
unsquashfs index.sqfs
The index.sqfs file is unsquashed and the squashfs-root folder is created.
Assign ownership of the <partition_ID> folder to the novell user and novell group:
chown -R novell:novell <partition_ID>
Remove the index:
rm -r index.sqfs
Switch to the novell user:
su novell
Squash the squashfs-root folder:
mksquashfs squashfs-root/ index.sqfs
Restore the partitions. For more information, see Restoring Event Data.
By default, the restored partitions do not expire according to any data retention policy checks. To return the restored partitions to the normal state and allow them to expire according to the data retention policy, select Set to Expire for the data that you want to expire, then click Apply.
The restored partitions that are set to expire are removed from the Restored Data table and returned to normal processing.
It might take about 30 seconds for the Restored Data table to reflect the changes.