Configuring and Deploying the Kubernetes Cluster

After you run the CDF Installer, complete the following steps to deploy your Kubernetes cluster.

To configure and deploy:

  1. Browse to the Initial Master Node at:
     https://{master_FQDN or IP}:3000
  2. Log in using the admin user ID and the password you specified during the platform installation. (This URL is displayed upon successful completion of the CDF installation, as shown earlier.)
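
    If the portal does not load, you can first confirm that the management port is reachable from your workstation. The following is a minimal Python sketch; the hostname is a placeholder for your own master FQDN or virtual IP.

      # Minimal reachability check for the CDF management portal (port 3000).
      # "master.example.com" is a placeholder; substitute your master FQDN or VIP.
      import socket

      MASTER = "master.example.com"
      PORT = 3000

      try:
          with socket.create_connection((MASTER, PORT), timeout=5):
              print(f"{MASTER}:{PORT} is reachable; browse to https://{MASTER}:{PORT}")
      except OSError as exc:
          print(f"Cannot reach {MASTER}:{PORT}: {exc}")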
  3. On the Security Risk and Governance - Container Installer page, choose the CDF base product metadata version. Then, click Next.
  4. On the End User License Agreement page, review the EULA and select the ‘I agree…’ check box. You can optionally choose to have suite utilization information passed to Micro Focus. Then, click Next.
  5. On the Capabilities page, choose the capabilities and products to install, then click Next.
    Some capabilities might require other capabilities as prerequisites. Such requirements are noted in the pull-down text associated with the capability. To show additional information associated with the product, click the > (greater than) arrow.
  6. On the Database page, ensure that the PostgreSQL High Availability box is cleared. This database is not used by capabilities in SODP. Then, click Next.
  7. On the Deployment Size page, choose a size for your deployment based on your planned implementation.
    • Small Cluster: Minimum of 1 worker node deployed (each node should have 4 cores, 16 GB memory, 50 GB disk)
    • Medium Cluster: Minimum of 1 worker node deployed (each node should have 8 cores, 32 GB memory, 100 GB disk)
    • Large Cluster: Minimum of 3 worker nodes deployed (each node should have 16 cores, 64 GB memory, 256 GB disk)
    The installation will not proceed until the minimum hardware requirements for the deployment are met. (A sketch for checking a node against these minimums follows this step.)

    You can configure additional worker nodes, with each running on its own host system, in subsequent steps.
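
    You can optionally pre-check a prospective node against these minimums before adding it in the installer. The following is a minimal Python sketch, assuming a Linux node and that container data resides on the filesystem being checked; adjust the size and path for your environment.

      import os

      # Minimum cores, memory (GB), and disk (GB) per node, from the table above.
      SIZES = {
          "small":  (4, 16, 50),
          "medium": (8, 32, 100),
          "large":  (16, 64, 256),
      }

      def check(size, path="/"):
          cores, mem_gb, disk_gb = SIZES[size]
          cpu_ok = (os.cpu_count() or 0) >= cores

          # MemTotal in /proc/meminfo is reported in kB.
          with open("/proc/meminfo") as fh:
              mem_kb = int(next(line for line in fh if line.startswith("MemTotal")).split()[1])
          mem_ok = mem_kb / (1024 * 1024) >= mem_gb

          # Total size of the filesystem at the given path; change the path
          # if container data will live on a different filesystem.
          st = os.statvfs(path)
          disk_ok = st.f_frsize * st.f_blocks / (1024 ** 3) >= disk_gb

          print(f"{size}: cpu {'ok' if cpu_ok else 'LOW'}, "
                f"memory {'ok' if mem_ok else 'LOW'}, disk {'ok' if disk_ok else 'LOW'}")

      check("medium")   # replace with the deployment size you plan to select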

  8. Select your deployment size, then click Next.
  9. On the Connection page, an external hostname is automatically populated. This value is resolved from the Virtual IP (VIP) specified earlier during CDF installation (the --ha-virtual-ip parameter), or from the master node hostname if --ha-virtual-ip was not specified. Confirm that the VIP is correct, then click Next. (A quick check that the hostname resolves to the VIP is sketched below.)
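    To confirm that the populated external hostname actually resolves to the VIP, you can run a quick lookup from any machine that will reach the cluster. This is a minimal sketch; the hostname and address are placeholders for your own values.

      # Check that the external hostname resolves to the expected virtual IP.
      # Both values are placeholders; replace them with your own.
      import socket

      EXTERNAL_HOSTNAME = "cdf.example.com"   # hostname shown on the Connection page
      EXPECTED_VIP = "10.0.0.100"             # value passed as --ha-virtual-ip

      addresses = {info[4][0] for info in socket.getaddrinfo(EXTERNAL_HOSTNAME, None, socket.AF_INET)}
      if EXPECTED_VIP in addresses:
          print(f"{EXTERNAL_HOSTNAME} resolves to {EXPECTED_VIP} as expected")
      else:
          print(f"{EXTERNAL_HOSTNAME} resolves to {addresses}, expected {EXPECTED_VIP}")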
  10. On the Master High Availability page, if high availability (HA) is desired, select Make master highly available and add 2 additional master nodes. (CDF requires 3 master nodes to support high availability.) When complete, or if HA is not desired, click Next.
  11. For high availability clusters, the installer prompts you to add additional master nodes depending on your selected deployment size. On the Add Master Node page, specify the details of your first master node, then click Save. Repeat for any additional master nodes. Master node parameters include:

    • Host: FQDN (only) of the node you are adding.
    • Ignore Warnings: If selected, the installer ignores any warnings that occur during the pre-checks on the server. If cleared, the add node process stops and a window displays any warning messages. We recommend that you start with Ignore Warnings cleared so you can review any warnings. You can then evaluate whether to rectify or ignore the warnings, close the warning dialog, and click Save again with the box selected so the process does not stop.
    • User Name: root or sudo user name.
    • Verify Mode: Choose the verification mode, either Password or Key-based. For Password, specify the password for the user name above. For Key-based, specify the user name and then upload the private key file used to connect to the node.
    • Device Type: Select a device type for the master node from one of the following options.
      • Overlay 2: For production, Overlay 2 is recommended.
      • Thinpool Device: (Optional) Specify the thinpool device path that you configured for the master node, if any (for example, /dev/mapper/docker-thinpool). You must have already set up the Docker thin pool for all cluster nodes that use thinpools.
    • Container data: Directory location of the container data.
    • flannel IFace: (Optional) Specify the flannel IFace value if the master node has more than one network adapter. This must be a single IPv4 address (or the name of an existing interface) and is used for Docker inter-host communication. (A sketch that lists candidate interfaces follows this list.)
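    To see which interfaces and IPv4 addresses are available on a multi-homed node before filling in the flannel IFace value, you can list them with a short sketch like the one below. It assumes the iproute2 ip command is available on the node.

      # List IPv4 addresses per network interface to help choose a flannel IFace
      # value on a multi-homed node. Assumes the iproute2 "ip" command exists.
      import subprocess

      output = subprocess.run(["ip", "-o", "-4", "addr", "show"],
                              capture_output=True, text=True, check=True).stdout

      for line in output.splitlines():
          fields = line.split()
          interface, address = fields[1], fields[3]   # e.g. eth0, 10.0.0.12/24
          print(f"{interface}: {address.split('/')[0]}")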
  12. On the Add Node page, add the first worker node required for your deployment by clicking the + (Add) symbol in the box to the right. The current number of nodes is initially shown in red.
  13. As you add worker nodes, each node is verified against system requirements. The node count progress bar on the Add Node page shows the current number of verified worker nodes and turns from red to green once you reach the minimum number of worker nodes for the deployment size selected in Step 7, above. You can add more nodes than the minimum.
  14. To combine master and worker functionality on the same node (not recommended for production), select the Allow suite workload to be deployed on the master node check box.

  15. On the Add Worker Node dialog, specify the required configuration information for the worker node, then click Save. Repeat this process for each worker node you want to add. Worker node parameters include:

    • Type: Default is based on the deployment size you selected earlier, and shows minimum system requirements in terms of CPU, memory, and storage.
    • Skip Resource Check: If your worker node does not meet minimum requirements, select Skip resource check to bypass minimum node requirement verification. (The progress bar on the Add Node page still shows the total number of added worker nodes in green, even though the resources of one or more of those nodes have not been verified against minimum requirements.)
    • Host: FQDN (only) of the node you are adding.
    When adding any worker node for Transformation Hub workload, always specify the node by FQDN on the Add Node page. Do not use the IP address.
    • Ignore Warnings: If selected, the installer ignores any warnings that occur during the pre-checks on the server. If cleared, the add node process stops and a window displays any warning messages. We recommend starting with this option cleared so you can review any warnings; you can then evaluate whether to rectify or ignore them and run the deployment again with the box selected so the process does not stop.
    • User Name: root or sudo user name.
    • Verify Mode: Select a verification credential type, Password or Key-based, then specify the credential. (A key-based connectivity sketch follows this list.)
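    If you plan to use key-based verification, you can confirm that the key actually grants access to each node before adding it. The following is a minimal sketch, assuming the OpenSSH client is installed; the host, user, and key path are placeholders for your own values.

      # Check that key-based SSH access to a worker node works before adding it.
      import subprocess

      NODE = "worker1.example.com"
      USER = "root"
      KEY = "/root/.ssh/id_rsa"

      result = subprocess.run(
          ["ssh", "-i", KEY, "-o", "BatchMode=yes", "-o", "ConnectTimeout=5",
           f"{USER}@{NODE}", "true"],
          capture_output=True, text=True,
      )
      if result.returncode == 0:
          print(f"Key-based login to {USER}@{NODE} succeeded")
      else:
          print(f"Key-based login failed: {result.stderr.strip()}")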

    Once all the required worker nodes have been added, click Next.

  16. On the File Storage page, configure your NFS volumes. (For NFS parameter definitions, refer to the section "Configure an NFS Server environment".) For each NFS volume, do the following:

    • In File Server, specify the IP address or FQDN for the NFS server.
    • On the Exported Path drop-down, select the appropriate volume.
    • Click Validate.
    All volumes must validate successfully to continue with the installation.

    A Self-hosted NFS refers to the NFS that you prepared when you configured an NFS server environment. Always choose this value for File System Type.

    The following volumes must be available on your NFS server.

    CDF NFS Volume claim      Your NFS volume
    itom-vol-claim            {NFS_ROOT_DIRECTORY}/itom-vol
    db-single-vol             {NFS_ROOT_DIRECTORY}/db-single-vol
    db-backup-vol             {NFS_ROOT_DIRECTORY}/db-backup-vol
    itom-logging-vol          {NFS_ROOT_DIRECTORY}/itom-logging-vol
    arcsight-volume           {NFS_ROOT_DIRECTORY}/arcsight-volume
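
    You can confirm from a cluster node that these exports are visible before clicking Validate. The following is a minimal sketch, assuming the showmount utility (nfs-utils) is installed; the server name and NFS root directory are placeholders for your own values.

      # Check that the expected NFS exports are visible from a cluster node.
      import subprocess

      NFS_SERVER = "nfs.example.com"            # placeholder NFS server
      NFS_ROOT = "/opt/nfs/volumes"             # placeholder for {NFS_ROOT_DIRECTORY}
      EXPECTED = ["itom-vol", "db-single-vol", "db-backup-vol",
                  "itom-logging-vol", "arcsight-volume"]

      output = subprocess.run(["showmount", "-e", NFS_SERVER],
                              capture_output=True, text=True, check=True).stdout
      # First line of showmount output is the "Export list for ..." header.
      exports = {line.split()[0] for line in output.splitlines()[1:] if line.strip()}

      for volume in EXPECTED:
          path = f"{NFS_ROOT}/{volume}"
          print(f"{path}: {'exported' if path in exports else 'MISSING'}")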

  17. Click Next.
Warning: After you click Next, the infrastructure implementation will be deployed. Ensure that your infrastructure choices are adequate for your needs; an incorrect or insufficient configuration might require a reinstall of all capabilities.
  • On the Confirm dialog, click Yes to start deploying master and worker nodes.