Chapter 2: Configuration

This chapter describes how to configure Enterprise Server after you have installed it. This involves setting attributes of both the Directory Server and your individual enterprise servers. It also covers performance considerations.

Introduction

When you first start using Enterprise Server, you need to:

  1. Decide on a user account strategy and implement it.
  2. Decide what security requirements you have and configure your security options appropriately.
  3. Set other configuration options for the Directory Server.
  4. Set configuration options for your enterprise servers.

The default enterprise server, ESDEMO, is created for you when you install Enterprise Server. This server provides a deployment system service. When you deploy a COBOL service automatically using the Interface Mapping Toolkit, the deployment service receives the service and adds it and its components (operations and packages) to this enterprise server. You might want to configure the deployment service and its associated listener. For more information, see the section Deployment Services and Listeners.

Your User Account Strategy

When a COBOL program runs as a service in an enterprise server, it inherits its security credentials from the server manager process which started the service execution process. The server manager, in turn, is started by the casstart program. If casstart is run from the command line by an interactive user, then COBOL service programs use that user's security credentials.

More commonly, though, Enterprise Server is started using Micro Focus Enterprise Server Administration, the Web interface to the Directory Server. Enterprise Server Administration is provided by the Micro Focus Directory Server (MFDS), which usually runs as a Windows system service (it is listed as Micro Focus Directory Server in the Services Control Panel and in the output from the net start command). Windows system services run under the user account specified in their Startup options, which you can view and change using the Services Control Panel.

When Net Express or Enterprise Server for Windows is installed, MFDS is installed as a system service using the Local System user account, with the Allow service to interact with the desktop option selected. This has two benefits:

However, the Local System account does not have privileges for network file access. That means that COBOL service programs that are running in an enterprise server that was started through MFDS, using the default configuration, are unable to open network files. To enable network file access from your COBOL service programs, use one of these methods:

We recommend that for Enterprise Server on Windows (whether or not your COBOL service programs need network file access), you create a user account specifically for MFDS and the COBOL service programs running under it. Set the permissions on this account appropriately, that is, don't grant it any permissions that the COBOL programs don't need. For additional security, you can set ACLs to grant or deny access to particular objects (directories, files, registry keys) for this user to further control what COBOL service programs can do.

For details of the casstart, casstop and mfds commands, see the reference topic Commands.
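
For example, assuming the /r option is used to name the server (see the Commands reference for the full syntax that applies to your installation), you might start and stop the default enterprise server from the command line like this:

casstart /rESDEMO
casstop /rESDEMO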

Firewall configuration

If you have an active firewall on the machine that is running your Directory Server and enterprise servers, and you want remote clients to be able to connect to them, you must ensure that the firewall allows access to the ports that you are using.

For example, the Directory Server is configured, by default, to use port 86. You must configure your firewall to allow TCP and UDP access to this port. Similarly, the default enterprise server, ESDEMO, has a Web Services and J2EE listener that uses port 9003. For remote clients to be able to submit requests to this listener, your firewall must permit access to this port.

We recommend that, if you want remote users to access Enterprise Server functionality through the firewall, you use fixed port values so that you can control access to them.

Note: If you are using Microsoft Windows XP and have installed Service Pack 2, you might find that you cannot see certain Enterprise Server Administration pages because of firewall restrictions. To work around this, enable access to the ports you are using, for example, 86 for the Enterprise Server Administration Home page and 9003 for the default enterprise server, ESDEMO. Use the Windows Security Center to do this.
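
As an alternative sketch to the Security Center GUI, on Windows XP Service Pack 2 you could open these ports from the command line with netsh; the rule names shown here are arbitrary:

netsh firewall add portopening TCP 86 "MF Directory Server"
netsh firewall add portopening UDP 86 "MF Directory Server UDP"
netsh firewall add portopening TCP 9003 "ESDEMO listener"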

Setting Security Options

You can configure security options for both Directory Server and for your enterprise servers. Directory Server security enables you to restrict those who can access Directory Server administration screens and make changes to your Directory Server and enterprise servers. Enterprise Server security enables you to restrict access to applications and to the resources that they use.

We recommend that you carefully consider the security options that you require and configure your system appropriately.

For more on security and the options available, see Introduction to Enterprise Server Security.

Directory Server

You can configure various attributes of the Directory Server, for example, whether enterprise servers are monitored for availability, which events are logged to a journal, and how long inactive clients of the Directory Server can remain connected before they are automatically logged off. Detailed advice on these options is available in the help for the Configure Options page.

Enterprise Servers

You can configure various attributes of an enterprise server, for example, how much memory is available to it, how many service execution processes it starts with, and what information is traced. Most of these options are straightforward; in any case, detailed advice on them is available in the help for the Add Server and Edit Server pages.

An enterprise server and the objects it contains (services, listeners, etc) each have a specific Configuration Information field, accessible on the Add and Edit pages for each type of object. Entries in the Configuration Information field use the .ini file format, with section names in square brackets followed by name/value pairs:

[ASection]
name1=value1
name2=value2

Most settings are case-insensitive. The exceptions are:

Information about what can go in this field is in the relevant chapter for that type of object:

The next seven sections provide additional information about some enterprise server attributes.

Shared Memory Area

The shared memory area is an area of memory where an enterprise server stores all the information it needs to run (see the section Enterprise Server Architecture in the chapter Introduction). The shared memory area is measured in pages of 4KB each. The shared memory area has to be big enough to contain all the definitions of the objects in the server, and all the current client requests. If you run out of shared memory while the server is running, the performance of the server will be significantly affected and this could prevent client requests from being processed.

Use the calculation described in this section to estimate the number of shared pages to allocate to an enterprise server.

Actual requirements vary according to the nature of the workload, but you are advised to allow a generous safety margin. Enterprise Server optimizes the use of the shared memory resource by minimizing the number of physical pages used at lower than maximum processing volumes.

Perform each of the calculations in the table below and note the result. The sum of these results is the minimum number of bytes required for the shared memory area. Divide this number by 4096 and round up to a whole number to get the number of shared memory pages needed.

Item                                   Calculation
Overhead                               Fixed size: 8192
Shared memory area management          Shared memory area size / 4096
Trace (auxiliary trace not active)     Number of entries x 24
Trace (auxiliary trace active)         Number of entries x 24 x 11
Local trace                            Number of service execution processes x number of entries x 24
Services                               Number of services x (128 + service-name-length)
Service execution processes            Number of service execution processes x 144
Request handlers                       Number of request handlers x (128 + handler-name-length)
Packages                               Number of packages x (128 + IDT-name-length + application-path-length + module-name-length)
Resident IDTs                          Number of resident IDTs x IDT-length
Client requests                        Number of client requests being processed at any one time x (256 + average size of client request)
Clients                                Peak number of concurrent clients x 64
Access Control Environment Elements    Peak number of signed-on users x 428
Total                                  Sum of all the results above

Note: The number of signed-on users should include the 4 default users used by the system.

where:

service-name-length        is the length of the service name
handler-name-length        is the length of the name of the request handler
IDT-name-length            is the length of the IDT name
application-path-length    is the length of the path to the COBOL application in the package
module-name-length         is the length of the name of the module containing the application
IDT-length                 is the size of the IDT file for a package
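
As a worked example (all figures are invented purely for illustration), consider a server with 10 services (average name length 12), 2 service execution processes, 4 request handlers (average name length 16), 5 packages (IDT names of 12 characters, application paths of 40 characters, module names of 12 characters), 5 resident IDTs of 2,000 bytes each, at most 20 client requests in progress at any one time averaging 1,000 bytes, a peak of 50 concurrent clients, a peak of 20 signed-on users (including the 4 default users), a 1,024-entry trace with auxiliary trace not active, and a 1,024-entry local trace per service execution process:

Overhead                               8,192
Trace (auxiliary trace not active)     1,024 x 24                    =  24,576
Local trace                            2 x 1,024 x 24                =  49,152
Services                               10 x (128 + 12)               =   1,400
Service execution processes            2 x 144                       =     288
Request handlers                       4 x (128 + 16)                =     576
Packages                               5 x (128 + 12 + 40 + 12)      =     960
Resident IDTs                          5 x 2,000                     =  10,000
Client requests                        20 x (256 + 1,000)            =  25,120
Clients                                50 x 64                       =   3,200
Access Control Environment Elements    20 x 428                      =   8,560
Shared memory area management          approximately 132,000 / 4096  =      33
Total                                                                   132,057

132,057 / 4096 is just over 32, so this server needs at least 33 shared memory pages; allowing the generous safety margin recommended above, you might allocate 40 or more pages.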

Shared Memory Cushion

The shared memory cushion is part of the shared memory area. Its function is to handle short-term high demands on the server. When an enterprise server starts, the shared memory cushion is not available for use, and it is never used by incoming client requests. It is used only when a response needs to be passed back to a client, and there is insufficient free memory in the main area to handle the response.

The shared memory cushion is also measured in pages of 4KB each. Set the shared memory cushion to 10% of the size of the shared memory area.
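
For example, continuing the illustrative calculation in the previous section, a server allocated 40 shared memory pages would be given a cushion of 4 pages (10% of 40).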

Number of Service Execution Processes

When an enterprise server is created it contains two service execution processes (see the figure Components of a Service Execution Process in the section Service Execution Processes in the chapter Introduction). These handle incoming client requests. When a response has been returned to a client, the service execution process becomes available again to handle another request. If all the service execution processes are busy handling requests, incoming requests have to wait for one to become free. The number of service execution processes in an enterprise server impacts performance. The kind of impact depends on a number of variables. You will need to experiment to find out the optimum number of service execution processes in any particular enterprise server. For more advice see the section Performance Considerations.

You can change the number of service execution processes in an enterprise server while it is running, although any change you make has an effect only until you stop the server. At this point the number of service execution processes reverts to the value you set when you added the server or edited the server details.

Number of Communications Processes

Communications processes handle communications between clients and the enterprise server, and consist of a number of service listeners. When an enterprise server is created it contains one communications process. If there is a communications failure for any reason, the enterprise server cannot process any work until the communications process is restarted. So, to improve the ability of an enterprise server to withstand communications failures, you can create one or more additional communications processes per enterprise server. You create an additional communications process by copying an existing one. The new communications process is an exact clone of the original, except that any fixed port numbers are not cloned. The first communications process in a newly created enterprise server, Communications Process 1, contains a Web Services and J2EE listener and a Web (deployment) listener. If you are creating a new enterprise server, you might want to modify this communications process definition before you copy it, depending on the particular workload you have planned for the server.

For fault tolerance you need at least two communications processes. You might need more than two to overcome operating system thread limits. The maximum is 32.

For more information about deployment listeners see the section Deployment Services and Listeners. For more information about administering communications processes and listeners see the chapter Communications Processes and Service Listeners.

Environment Variables

When an enterprise server starts, it inherits its environment from the Directory Server. This means that all the environment variables that you need to run your servers' workload, including directory settings for third-party software, must be set in the session where you start the Directory Server.

You can set environment variables for an enterprise server in Configuration Information on the Add Server or Edit Server page. These apply to all services that run in the server, although any environment variables set at the service level when the service was created using the Interface Mapping Toolkit override the settings at the server level.

The format for environment variables is:

[ES-Environment]
environment-variable-name=environment-variable-setting

On Windows platforms, use a semi-colon to separate elements within the string. On UNIX platforms, use a colon. For example:

[ES-Environment]
COBPATH=c:\adirectory;c:\anotherdirectory

Note: We recommend that you do not set the COBDIR environment variable for an enterprise server, especially on UNIX. This is because $COBDIR is used to point to the product location. The results of setting $COBDIR are undefined. Use the COBPATH environment variable if you want to specify the location of service programs at the enterprise server level. If you do set the COBPATH environment variable at enterprise server level, you also need to specify "$COBPATH" in Package Path in the Add Package or Edit Package page.

When you specify an environment variable, you can use the resolved value of another already-created variable as part of the environment variable value. To do this, prefix the environment variable you want to include with a dollar ($) sign, for example:

FILEROOT=d:\data
FILEA=$FILEROOT\mydata.dat

This resolves to d:\data\mydata.dat

If you want a dollar sign in an environment variable value to be treated as part of the actual value, rather than as a reference to another environment variable, escape it by inserting a backslash (\) character before it, for example:

FILEA=\$\$fsserver1\mydata.dat

This resolves to $$fsserver1\mydata.dat

Fileshare

If any services in your enterprise server use Fileshare to access files across the network, you need to set the FHREDIR environment variable to specify the location of the FHREDIR configuration file. For example:

[ES-Environment]
FHREDIR=/home/mydir/client.cfg

Because the FHREDIR configuration is static, the contents of the FHREDIR configuration file are read when the enterprise server is initialized and cannot be changed dynamically. This means that the FHREDIR configuration file should contain entries for all services that are deployed, or could be deployed, to the enterprise server. If a service is dynamically deployed to an enterprise server and requires a different Fileshare configuration, there is no option but to stop and restart the enterprise server.

If all your enterprise servers use the same configuration file, you can set $FHREDIR on the command line before you start the Directory Server, rather than have separate configuration entries for each server.
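
For example, on Windows you might set the variable in the command session from which you then start the Directory Server (the configuration file path here is purely illustrative):

set FHREDIR=c:\fsconfig\client.cfg
mfds

On UNIX, use the equivalent shell syntax, for example FHREDIR=/home/mydir/client.cfg; export FHREDIR, before running mfds.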

You should start all the Fileshare servers that the services in an enterprise server might need to access before you start the enterprise server. You should stop the enterprise server before you stop the Fileshare servers.

For more information about Fileshare see your Fileshare Guide.

Resource Managers

If your enterprise server includes container-managed services that access databases, you need to specify information about the resource managers that the application container needs to interact with. You can only use resource managers that are XA-compliant. The XA interface describes a structure called a switch table, which lists the names of the xa_ routines implemented in the resource manager. This structure is called xa_switch_t. To access an XA-compliant resource manager, you need a switch module, the purpose of which is to obtain the address of the XA switch data structure implemented by the resource manager.

If you want to access Oracle or IBM DB2 databases from JES-initiated transactions, that is, using IKJEFT01, you must build an additional switch module to use with the standard Oracle or IBM DB2 switch module.

Net Express includes the source for switch modules for Oracle, IBM DB2 and SQL Server, a generic one-phase switch module, and the additional switch modules for use with JES-initiated transactions, in your Net Express installation's base\source\enterpriseserver\xa directory. The source files are:

ESORAXA.CBL    Oracle
ESDB2BXA.CBL   IBM DB2
ESMSSQL.CBL    SQL Server
ESODBCXA.CBL   Generic one-phase commit for ODBC
ESORAOPC.pco   JES-initiated transactions connecting to Oracle data sources
ESDB2OPC.CBL   JES-initiated transactions connecting to IBM DB2 data sources

This directory also contains a batch file, build.bat, that you can use to build the switch module you require. The syntax required to run the build command varies slightly depending on the switch module you are building.

Note: If you are running Enterprise Server in standalone mode, build the required switch module in Net Express, and make it available for use with the standalone Enterprise Server.

Oracle

You can build switch modules for several versions of Oracle. Execute the build command as follows:

build oracle-version

where oracle-version can be:

ora8    For Oracle 8 databases
ora9    For Oracle 9 databases
ora10   For Oracle 10 databases
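
For example, to build the switch module for an Oracle 9 database:

build ora9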

To connect to Oracle data sources from JES-initiated tasks, you must build an additional switch module. Execute the build command as follows:

build ora1pc

This additional module must be loaded by the main switch module; therefore include the full path to it in your system's PATH environment variable.
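
For example, on Windows, if the built module remains in the xa source directory (install-dir stands for your Net Express installation directory; adjust the path to wherever you keep the built module), you might append that directory to PATH:

set PATH=%PATH%;install-dir\base\source\enterpriseserver\xa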

DB2

Before building a switch module for DB2, ensure your LIB environment variable contains the path to your DB2 LIB directory. If you want to connect to IBM DB2 data sources using JES-initiated tasks via IKJEFT01, you must build an additional module.

To build a standard IBM DB2 switch module, execute the build command as follows:

build db2

To connect to IBM DB2 data sources from JES-initiated tasks, you must build an additional switch module. Execute the build command as follows:

build db21pc db2_database_alias [db2_userid db2_password]

where:

db2_database_alias       The database alias cataloged to the DB2 client on the machine where Enterprise Server is running
db2_userid db2_password  The user ID and password for connecting to DB2; required only if different from the user ID and password for the user currently logged in
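
For example, assuming a database alias of SAMPLE (an illustrative value) and connecting with the credentials of the currently logged-in user:

build db21pc SAMPLE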

This additional module must be loaded by the main switch module; therefore include the full path to it in your system's PATH environment variable.

SQL Server

Execute the build command as follows:

build mssql

You must define the SQL Server switch module within Enterprise Server. See the topic SQL Server XA switch module for more information.

Note: If you want to access SQL Server data sources using JES-initiated tasks via IKJEFT01, you must also build the Generic one-phase commit XA switch module for ODBC and include the full path to that module on your system's PATH environment variable. For information on building the Generic one-phase XA switch module for ODBC, see the section Generic one-phase commit for ODBC later in this chapter.

Generic one-phase commit for ODBC

Execute the build command as follows:

build odbc

You must define the generic one-phase commit for ODBC switch module within Enterprise Server. See the topic Generic one-phase commit XA switch module for ODBC for more information.

You specify the details of XA resource managers using the Edit Server > Properties > XA Resources page in the Web interface. You can also edit and delete XA resource manager definitions. You can enable resource managers individually to control which are active when an enterprise server starts.

Performance Considerations

How you configure an enterprise server for performance depends on a number of factors. One obvious factor is the expected workload of the server. You need to consider the frequency with which client requests arrive, and the required speed of response to those requests.

Two more factors are the type of services that run within the server and the type of requests for the services. Services can be categorized as I/O-bound or CPU-bound, while client requests can be categorized as short-running or long-running.

I/O-bound Services

Services that perform a lot of I/O requests are often dormant while they run, waiting for responses to those requests. However, processing I/O requests can sometimes make considerable demands on the CPU. You need to consider whether you might need more service execution processes for this type of service.

CPU-bound Services

Services that do not perform any I/O requests (or very few) are usually constrained by the use they make of the central processor. With CPU-bound services, fewer service execution processes usually work better, because the central processor switches between the running tasks, and this is an overhead.

Short-running Client Requests

Short-running client requests are requests where there is just one interaction between the client and the service; the client request arrives, the service runs, and a response is returned to the client. Requests from Web services clients are always short-running.

With short-running requests, you only need to consider whether the service itself is I/O-bound or CPU-bound.

Long-running Client Requests

Long-running client requests are requests where the same client can make repeated requests of a service and needs data to be preserved between invocations of the service. In Java terms, they are stateful requests. So if the client is a stateful Java bean running in an application server such as WebLogic or WebSphere, the service will run for as long as the bean runs. In these circumstances, you need more service execution processes, even if the service itself is CPU-bound rather than I/O-bound.

If the enterprise server detects that the request is stateful, a new service execution process is started automatically. Dynamic growth of service execution processes consumes machine resources and can affect the performance of the enterprise server. We recommend that, unless your application design actually requires stateful interaction, you keep these types of request to a minimum.

Deployment Services and Listeners

The facility in the Interface Mapping Toolkit to deploy services automatically to a running enterprise server delivers your services using a system service called a deployment service. After installation, the enterprise server ESDEMO has a default deployment service named "Deployer". You can modify this deployment service or create additional deployment services with different configurations. You can also add deployment services to enterprise servers that you create.

The Deployer service uses a listener called Web, which uses the http-switch connector. If you modify the Deployer service, you will probably need to change the configuration of the Web listener too. You can also create your own listeners for use with deployment services.

Deployment Services

All deployment services must have their Service Class attribute set to 'MF deployment'. They must be associated with a listener that uses the conversation type 'Web'.

The configuration information for a deployment service looks like this:

[MF client]
 scheme=http
 URL=virtual-directory-name-1/mfdeploy.exe/virtual-directory-name-2
 accept=application/x-zip-compressed

where virtual-directory-name-1 specifies the directory that holds the deploy program, mfdeploy.exe, and virtual-directory-name-2 specifies a directory to contain deployed services.

The values for the scheme and accept parameters must not be changed, but when you are creating a new deployment service you can omit them, as these are the defaults.
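
For example, a new deployment service that accepts those defaults might specify only the URL; this sketch reuses the virtual directory names of the ESDEMO Deployer shown below:

[MF client]
 URL=/cgi/mfdeploy.exe/uploads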

The settings for the Deployer service of ESDEMO are:

[MF client]
 scheme=http
 URL=/cgi/mfdeploy.exe/uploads
 accept=application/x-zip-compressed

The configuration of a deployment service must match the configuration of the listener for the service. The URL parameter must specify the same virtual directory name for the location of the mfdeploy.exe program as the listener uses. You can change the virtual path to receive deployed services to any virtual directory name configured for the listener.

For example, if the listener's configuration was:

[virtual paths]
 <default>=/dev/null
 netexpress=<ES>/bin
 project1=c:/development/project1
 project2=c:/development/project2

then you might create a deployment service named "project1" with this configuration:

[MF client]
 scheme=http
 URL=/netexpress/mfdeploy.exe/project1
 accept=application/x-zip-compressed

and services deployed using the "project1" service would be stored under c:\development\project1. You could create a similar service for project2.

Note that the deployment directory must have a .mfdeploy file. Copy the .mfdeploy file from install-dir\base\deploy and modify it as necessary.
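
For example, on Windows, and reusing the install-dir placeholder above, you might copy the file into the project1 deployment directory like this:

copy install-dir\base\deploy\.mfdeploy c:\development\project1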

Deployment Listeners

The configuration information for a deployment listener consists of the section name "[virtual paths]" followed by a list of URL top-level directories and the paths they will be converted to. For example, the Web listener used by ESDEMO's Deployer service looks like this:

[virtual paths]
 cgi=<ES>/bin
 uploads=<ES>/deploy
 <default>=/dev/null

The "cgi" virtual path is used to specify the location of the mfdeploy.exe program which receives the COBOL archive file being deployed, and the "uploads" virtual path tells mfdeploy.exe where to create the directory for the uploaded COBOL archive file. The special token "<ES>" is translated into the Enterprise Server base installation directory. For example, if Enterprise Server is installed in c:\Programs\NetExpress, then "<ES>/deploy" becomes c:\Programs\NetExpress\base\deploy, the default deployment directory.

Note that only the first directory in the URL specified by the client performing the deployment is checked against the list, and that each entry in the list must be a single directory name.

The "<default>" directory is used if the first directory in the URL specified by the client doesn't match any of the entries in the list. The default directory must be one that does not exist, so that when the communications process translates the URL into a full path, the request will fail. This stops any attempt by a client to browse arbitrary directories on the machine. You don't need to specify a default directory, since the communications process uses a safe default anyway.

Here is another example:

[virtual paths]
 <default>=c:/web
 docs=c:/web/documents
 images=d:/media/images

With this configuration, the URL http://host:port/docs/a.html will return the file c:\web\documents\a.html, and the URL http://host:port/images/gif/b.gif will return the file d:\media\images\gif\b.gif.


Copyright © 2008 Micro Focus (IP) Ltd. All rights reserved.