You can use the statistics utility program, casfhsf, to output aggregated data from the .csv files created by the Historical Statistics Facility (HSF). casfhsf outputs this data as a comma-separated value file.
You can use this aggregated information to review system performance on a regular basis, without having to work through a large volume of raw data to determine whether corrective tuning action is required.
From the Visual COBOL command prompt, launch casfhsf.exe.
The following switches are available. You can use a dash in place of the slash, and switches are not case-sensitive.
Switch | Description |
---|---|
/A | Process only cashsf-a.csv. |
/B | Process only cashsf-b.csv. |
/C | Process both cashsf-a.csv and cashsf-b.csv (default). |
/D | Process only backup files cashsf.nnn. |
/E | Process all files: cashsf-a.csv, cashsf-b.csv, and cashsf.nnn. |
/S[sd,st,ed,et] | Aggregate the output data by the second. Optionally, you can specify a start date (sd), start time (st), end date (ed), and end time (et). Where [sd,st,ed,et] = yyyymmdd,hhmmss,yyyymmdd,hhmmss |
/M[sd,st,ed,et] | Aggregate the output data by the minute. This is the default, and is also used if the switch specification is in error. Optionally, you can specify a start date (sd), start time (st), end date (ed), and end time (et). Where [sd,st,ed,et] = yyyymmdd,hhmm,yyyymmdd,hhmm |
/IP | The location of the input files. The default value is the current directory. |
/OP | The location of the output file. The default value is the current directory. The output file is called OutFile.csv. |
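For example, a run that processes all available files and aggregates by the second over a specific window might look like this (the dates and times shown are illustrative only):

casfhsf /e /s20240101,090000,20240101,170000

This reads cashsf-a.csv, cashsf-b.csv, and any cashsf.nnn backup files from the current directory, and aggregates per-second data between 09:00:00 and 17:00:00 on 1 January 2024.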
To aggregate data for specific transactions or programs, specify parameters in the form:

{type},{id}[[,{type},{id}]...]

You can specify between zero and five type,id pairs. Separate each type from its corresponding id with a comma; if you specify more than one pair, separate each pair from the next with a comma.
If no pairs are specified, an aggregate of all transactions is accumulated in the first unallocated slot; by default, this means all data is aggregated in slot one.
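As a sketch of how the pairs combine with the switches, keeping the {type} and {id} placeholders because the accepted keyword values are not listed here (and assuming the parameters follow the switches), an invocation that aggregates two specific items by the minute follows this pattern:

casfhsf /m {type},{id},{type},{id}

Each pair occupies one of the five slots in the output.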
An extfh.cfg file is generated in the current directory, if one is not already present, to allow large file inputs.
If no switches or parameters are specified, the utility will ask if you wish to continue using the default values.
For example:

casfhsf /op"c:\Users\All Users\Micro Focus"

Using the input defaults, this processes both cashsf-a.csv and cashsf-b.csv in the current directory, aggregates times in minutes for all transactions and programs, and creates the output file in the folder c:\Users\All Users\Micro Focus.
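As another illustration (the input path shown is hypothetical), the following would process only the cashsf.nnn backup files from a named input directory and aggregate by the minute:

casfhsf /d /ip"c:\hsfdata" /m

Because /OP is not specified, OutFile.csv is created in the current directory.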
The format of the output file is:

time,TPS,Latency,min,max,Response,min,max,System,min,max,API,min,max,SQL,min,max,IMS,min,max...

After the time field, the rest of the fields occur five times, one set for each of the five type,id slots.
Field | Description |
---|---|
time | The time period in minutes (hh:mm format) or seconds (hh:mm:ss format) into which the input monitoring times are aggregated. The format is determined by the /S or /M switch. |
TPS | The average number of transactions per second or per minute, depending on the /S or /M switch. |
Latency,min,max | The average latency - delay or waiting time - for this time period, and the minimum and maximum latency. |
Response,min,max | The average time taken for Enterprise Server to respond to the transaction request, and the minimum and maximum response times. |
System,min,max | The total of the average latency and response times (Latency + Response), and the minimum and maximum totals of latency plus response time of a particular transaction run. |
SQL,min,max | The average time, in hundredths of a second, spent in SQL API (EXEC SQL statements) for this task, and the minimum and maximum times. |
The output file also contains a secondary header; its content depends on the parameters supplied to the utility.
If no parameters are specified, all the aggregate values are accumulated under "Everything", and the other four columns are marked as "Unused".
The version of the HSF data is indicated by a header of the form:

#HSFVer=03;Custom=xx;CicsFiles=xx;TSQ=xx;TDQ=xx
where xx corresponds to the number of fields for each type, as specified by the ES_HSF_CFG environment variable.
For version 01 or 02 data, the header is simply #HSFVer=01 or #HSFVer=02.
If you are processing multiple files, casfhsf will only aggregate data from files of the same version as the first file it receives. Other versions are ignored. However, whatever version it processes, the output file OutFile.csv will contain all the fields listed above, with values of zero for any missing fields.