The options that apply to the test run. These are optional, and you can apply one or more of the following:
- -testcase:testcase-name
- Only run the specified test case from the test suite; all other test cases are skipped. Not set by default.
Alternatively, you can specify multiple test cases using one of the following approaches:
- List the test cases using a comma as a separator:
-testcase:testcase-name1,testcase-name2,... everything.dll
Or:
- Specify a side file that includes a list of test cases:
-testcase:@filelist.txt everything.dll
Where:
- everything.dll - your compiled source code
- filelist.txt - the file that includes a list of all the test cases to execute
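For example, to run two specific test cases, or all the test cases listed in a side file (the test case, file, and suite names shown here are illustrative):
mfurun -testcase:TEST-ADD,TEST-SUBTRACT calc-tests.dll
mfurun -testcase:@smoke-tests.txt calc-tests.dll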
- -ignore-options-in-mfu:{true|false}
- Ignores any options in the specified test fixture file, enabling you to override them with ones specified on the command line.
- -ignore-return-code:{true|false}
- When set to true, the return code indicating a test pass or fail is ignored, and the test continues running. The default is false.
- -verbose:{true|false}
- Displays verbose output to the screen. The default is true.
- -process:{single|separate}
- Use a single process for the entire test run, or multiple separate processes (one for the test runner and one for each test case within the run). If you are using separate processes, the parent test runner process can log errors and continue with the test run in the event of a test case failure. The default is separate.
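For example, to run the whole suite within a single process (the suite name shown here is illustrative):
mfurun -process:single calc-tests.dll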
- -isolate:{true|false}
- Isolates each test case, so that any resources shared between test cases start from their correct initial state for each test case. The default is true.
- -report:{printfile|noprintfile|nunit|nonunit}...
- Outputs the test results to a .txt file (printfile option) or an .xml file (nunit option). The default is to produce a .txt file. The NUnit report takes the form of <fixture-name>-Result.xml.
Note: The Micro Focus Unit Testing Framework produces reports compatible with NUnit 2.6.
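For example, to produce NUnit-format results for a test suite (the suite name shown here is illustrative):
mfurun -report:nunit calc-tests.dll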
- -generate-flist:filename
- Outputs a list of failed test names to filename.
Tip: You can use the generated file to rerun the failed tests by using the -testcase:@filename test-suite syntax.
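For example, to record the failures from one run and then rerun only those tests (the file and suite names shown here are illustrative):
mfurun -generate-flist:failed.txt calc-tests.dll
mfurun -testcase:@failed.txt calc-tests.dll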
- -report:markdown
- Outputs the test results to a GitHub style markdown format file (.md).
Note: You can use various external utilities such as the pandoc utility (see http://pandoc.org/) to convert the markdown file into an .html or a .pdf file. Here are some example commands for converting the .md file when you use pandoc:
pandoc -thtml5 -s -S --toc -c pandoc.css -fmarkdown_github mfumeta.md -o mfumeta.html
Or
pandoc -s -S --toc -c pandoc.css -fmarkdown_github mfumeta.md -o mfumeta.pdf
- -report:timings-csv
- Generates a .csv file containing the timings for the test run.
- -report:trx
- Generates a Microsoft Visual Studio test results file (TRX). This format uses the test-suite name as the base name and an extension of .trx.
- -reportfile:filename
- Changes the name of the report file. It defaults to <test suite-name>-report.txt.
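For example, to write the default text report under a different name (the report file and suite names shown here are illustrative):
mfurun -reportfile:nightly-run.txt calc-tests.dll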
- -nunit-namespace:namespace
- Prefixes the results of each test case with a namespace when reports are produced in NUnit format. Not set by default.
- -generate-app-exe-ep:entry-point
- When running tests against an executable file, you need to generate a replacement main entry point for your executable that is capable of calling the Micro Focus Unit Testing Framework. This command generates a source file, mfunit_application_entrypoint.cbl, that contains such an entry point. Build this into your executable so that it can be run against a set of tests when executed within the framework. When run outside of the framework, this entry point is not called, and the executable runs as normal.
- -application-exe:executable-file
- Runs tests against an executable file. The executable must have been re-built to include the mfunit_application_entrypoint.cbl produced using the option above.
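A possible workflow is to generate the entry point, rebuild the executable to include it, and then run the tests against that executable; for example (the entry point name, executable, and test suite names shown here are illustrative, and assume the tests themselves are built into a separate test suite file):
mfurun -generate-app-exe-ep:APPMAIN
mfurun -application-exe:myapp.exe app-tests.dll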
- -outdir:directory
- Specifies a directory for the reports. If not specified, the reports are created in the directory in which the tests were run.
- -trait:trait-name(s)
- Specifies that only test cases with the specified trait name should be run (and all other test cases skipped). You can specify more than one trait, separated by a comma, if required. You set a trait for a test case by using the MFU-MD-TRAITS variable in your source code, or by specifying traits=trait-name in the test fixture file.
Tip: You can combine this option with the -generate-mfu option to create a test fixture file designed to only run test cases with a particular trait; for example:
mfurun -generate-mfu -trait:smoke test-suite.dll
- -jenkins-ci
- Enables support for producing test results that will be used on a Jenkins CI server.
- -jenkins-ci:junit-attachments
- Enables support for producing test results that will be used on a Jenkins CI server, and also enables the Jenkins plugin for JUnit formatted results.
- -diagnostics-color:off|ansi|jenkins|windows10
- Enables colorisation of certain elements of the output. Set the value to the intended viewing platform. The default is off. See Using Color in Test Reports for more information.
- -silk-central
- Outputs the results to an .xml file that is compatible with Silk Central. The output.xml file and any required assets (log and dump files) are placed in the location determined by the SCTM_EXEC_RESULTSFOLDER environment variable, where they can be picked up and processed by Silk Central (refer to your Silk Central Help for more information on ProcessExecuter Tests). If you specify this option when your Silk Central environment is not correctly configured, you receive an error.
- -jit:{core|debug}
- Produces a core dump file or invokes just-in-time debugging when a test case errors.
Note: This option has no effect when asserting a test failure.
- -debugbreak
- Immediately starts the debugger.
- -debugstart:id
- Issues a start/wait command to the debugger using the specified id.
- -csv-line-filter:line-number testcase-name
- For use when executing data-driven tests for a data source in CSV format. Use line-number to specify the only line of your data source to use when running testcase-name. This can be particularly useful when combined with the -debugbreak and -debugstart options to focus in on a particular piece of data.
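For example, to run a data-driven test case against only the third line of its CSV data source and break into the debugger (the test case and suite names shown here are illustrative):
mfurun -csv-line-filter:3 TEST-CUSTOMER-IMPORT -debugbreak data-tests.dll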
- -generate-csv-snippet:testcase-name data-source
- Creates a test case that includes the basic structure for a data-driven test. It reads data-source (currently limited to CSV files) and creates the necessary data items required to test the data.
Note: Windows users: you can directly pipe the code snippet to the clipboard by appending | clip to the command.
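For example, to generate a snippet for a new data-driven test case from a CSV file and pipe it to the clipboard on Windows (the test case and file names shown here are illustrative):
mfurun -generate-csv-snippet:TEST-CUSTOMER-IMPORT customers.csv | clip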
- -generate-mfu filename
- Produces a skeleton test fixture file (.mfu file) from the test cases within filename (a .dll file or .so file).
- -high-res-timer:{true|false}
- When set to true, a native high-resolution performance counter is used to report the duration of the test run. The default is false.
Note: High-resolution timings cannot be displayed in the JUnit or Silk Central .xml formats; normal timings are always displayed.
- -es-server-name:server-name
- Specifies the enterprise server region name to which the JCL job card will be submitted. Use this option in conjunction with -es-use-mfcc:true to emulate the cassub /l command, or in conjunction with -es-use-mfcc:false to emulate the cassub /r command.
Note: The -es-server-name and -es-service options are mutually exclusive.
Tip: Set the MFUNIT_ES_SERVER_NAME environment variable, and then omit this command line option, to avoid having to re-enter the same information for multiple test runs.
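For example, to submit the JCL under test to a named enterprise server region (the region name, catalog path, and suite name shown here are illustrative):
mfurun -es-server-name:ESDEMO -es-use-mfcc:false -es-syscat:/var/mfcobol/catalog/catalog.dat jcl-tests.dll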
- -es-service:service-endpoint
- Specifies the service-endpoint to which the JCL job card will be submitted. This takes the form of <protocol>:<address>:<port> (for example tcp:localhost:9023). Use this option to emulate the cassub /s command.
Note: The -es-server-name and -es-service options are mutually exclusive.
Tip: Set the MFUNIT_ES_SERVICE environment variable, and then omit this command line option, to avoid having to re-enter the same information for multiple test runs.
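For example, to submit via a service endpoint rather than a region name (the endpoint, catalog path, and suite name shown here are illustrative):
mfurun -es-service:tcp:localhost:9023 -es-syscat:/var/mfcobol/catalog/catalog.dat jcl-tests.dll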
- -es-use-mfcc:{true|false}
- Use the Micro Focus Common Client (mfcc) for the server-name lookup. Use this option in conjunction with -es-server-name:server-name.
Tip: Set the MFUNIT_ES_USE_MFCC environment variable, and then omit this command line option, to avoid having to re-enter the same information for multiple test runs.
- -es-syscat:catalog
- Specifies the location and name of the system catalog file (catalog.dat). You must specify this option with either of the JCL submission options (-es-server-name:server-name or -es-service:service-endpoint).
Tip: Set the MFUNIT_ES_SYSCAT environment variable, and then omit this command line option, to avoid having to re-enter the same information for multiple test runs.
- -es-userid:user
- If the enterprise server region indicated by -es-service or -es-server-name has security enabled, use -es-userid to specify valid user credentials. This argument emulates the cassub /u command.
Tip: Set the MFUNIT_ES_USERID environment variable, and then omit this command line option, to avoid having to re-enter the same information for multiple test runs.
- -es-password:password
- If the enterprise server region indicated by -es-service or -es-server-name has security enabled, use -es-password to specify a valid password. This argument emulates the cassub /p command.
Tip: Set the MFUNIT_ES_PASSWORD environment variable, and then omit this command line option, to avoid having to re-enter the same information for multiple test runs.
- -es-auth-group:group
- If the enterprise server region indicated by -es-service or -es-server-name has security enabled, use -es-auth-group to specify a valid authorization group. This argument emulates the cassub /c command.
Tip: Set the MFUNIT_ES_AUTH_GROUP environment variable, and then omit this command line option, to avoid having to re-enter the same information for multiple test runs.
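For example, to run JCL tests against a secured region, supplying credentials on the command line (the region name, credentials, catalog path, and suite name shown here are illustrative):
mfurun -es-server-name:ESDEMO -es-userid:SYSAD -es-password:SYSAD -es-syscat:/var/mfcobol/catalog/catalog.dat jcl-tests.dll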
- -es-jes-level:level
- Specifies the JES version in effect; if omitted, the level defaults to 2. This argument emulates the cassub /v command.
Tip: Set the MFUNIT_ES_JES_LEVEL environment variable, and then omit this command line option, to avoid having to re-enter the same information for multiple test runs.
- -es-jcl-range-retc:range
- The range of return code values that determines the test result; if the return code falls within the range, the test is marked PASS, otherwise it is marked FAIL. Specify a range using nnn-nnn; for example, -es-jcl-range-retc:0-7.
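For example, to treat any job return code from 0 through 4 as a pass when submitting to a region (the region name and suite name shown here are illustrative):
mfurun -es-server-name:ESDEMO -es-jcl-range-retc:0-4 jcl-tests.dll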