You can expand the text of an error message or have Silk Test Classic find the error messages for you. To navigate from a test plan test description in a results file to the actual test in the test plan, click the test description and select the appropriate menu command.
There are several ways to move from the results file to the actual error in the script.
Some expanded error messages are preceded by a box icon and three asterisks.
If the error message relates to an application’s behavior, as in Verify selected text failed, Silk Test Classic opens the Difference Viewer. The Difference Viewer compares actual and expected values for a given test case.
When you click a box icon followed by a bitmap-related error message, the bitmap tool starts, reads in the baseline and result bitmaps, and opens a Differences window and Zoom window.
In the Bitmap Tool, several comparison commands let you closely inspect the differences between the baseline and results bitmaps.
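The underlying idea of such a comparison can be sketched in Python. This is a conceptual illustration only, not the Bitmap Tool's actual implementation: two bitmaps of identical dimensions are compared pixel by pixel, and the positions that differ form the difference mask a Differences window would highlight.

```python
# Conceptual sketch of a baseline-vs-result bitmap comparison.
# Bitmaps are modeled as 2D lists of pixel values; the names and the
# representation are illustrative, not Silk Test internals.

def bitmap_differences(baseline, result):
    """Return the (row, col) positions where the two bitmaps differ."""
    if len(baseline) != len(result) or any(
        len(b) != len(r) for b, r in zip(baseline, result)
    ):
        raise ValueError("bitmaps must have identical dimensions")
    return [
        (row, col)
        for row, (b_row, r_row) in enumerate(zip(baseline, result))
        for col, (b_px, r_px) in enumerate(zip(b_row, r_row))
        if b_px != r_px
    ]

baseline = [
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
result = [
    [0, 0, 0],
    [0, 1, 1],  # one pixel changed
    [0, 0, 0],
]
print(bitmap_differences(baseline, result))  # [(1, 2)]
```

A real bitmap comparison would work on image files rather than nested lists, but the principle, locating every differing pixel and presenting those locations for inspection, is the same.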
To evaluate application logic errors, use the Difference Viewer, which you can open by clicking the box icon following an error message relating to an application’s behavior.
Clicking the box icon opens the Difference Viewer’s double-pane display-only window. It lists every expected (baseline) value in the left pane and the corresponding actual value in the right pane.
Every occurrence where the expected and actual values differ is highlighted. On color monitors, differences are marked with red, blue, or green lines, which denote different types of differences, for example, deleted, changed, and added items.
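This classification of differences into deleted, changed, and added items can be sketched with Python's standard `difflib` module, whose opcodes make the same three-way distinction. The sketch below is a conceptual analogy, not the Difference Viewer's implementation:

```python
from difflib import SequenceMatcher

def classify_differences(expected, actual):
    """Label each differing run of values as deleted, changed, or added."""
    matcher = SequenceMatcher(a=expected, b=actual)
    labels = {"delete": "deleted", "replace": "changed", "insert": "added"}
    return [
        (labels[op], expected[i1:i2], actual[j1:j2])
        for op, i1, i2, j1, j2 in matcher.get_opcodes()
        if op != "equal"
    ]

# Hypothetical baseline and actual values for a verified menu.
expected = ["Open File", "Save File", "Print"]
actual = ["Open File", "Save As...", "Print", "Close"]
for kind, exp, act in classify_differences(expected, actual):
    print(kind, exp, act)
# changed ['Save File'] ['Save As...']
# added [] ['Close']
```

A viewer would then render each run in its own color; the classification step is what determines which color a run receives.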
When you have more than one screen of values, or are using a black-and-white monitor, use the appropriate menu command to find the next difference. Use Update Expected Value, described next, to resolve the differences.
While inspecting the Difference Viewer or an error message in a results file, you might notice that the expected values are not correct. For example, when the caption of a dialog changes and you forget to update a script that verifies that caption, errors are logged when you run the test case. To have your test case run cleanly the next time, modify the expected values with the Update Expected Value command.
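Conceptually, updating an expected value rewrites the stored baseline so that future runs verify against the new value. A minimal sketch in Python, in which all names are hypothetical and the baseline store is a simple dictionary rather than anything Silk Test actually uses:

```python
# Hypothetical sketch of adopting a new expected (baseline) value after a
# deliberate UI change, e.g. a dialog caption that was renamed.
expected_values = {"FindDialog.Caption": "Find"}

def verify(name, actual):
    """Return True if the actual value matches the stored baseline."""
    return expected_values.get(name) == actual

def update_expected_value(name, actual):
    """Adopt the actual value as the new baseline for future runs."""
    expected_values[name] = actual

# The caption changed from "Find" to "Find and Replace": verification fails.
print(verify("FindDialog.Caption", "Find and Replace"))  # False

# After updating the expected value, the test case runs cleanly.
update_expected_value("FindDialog.Caption", "Find and Replace")
print(verify("FindDialog.Caption", "Find and Replace"))  # True
```

The key point is that the update is deliberate: you adopt the actual value as the new baseline only after confirming the change in the application is intended, not a defect.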
You might need to use the debugger to explore and fix errors in your script. In the debugger, you can use the special commands available on the Breakpoint, Debug, and View menus.
When a test plan results file shows test case failures, you might choose to fix and then rerun them one at a time. You might also choose to rerun the failed test cases at a slower pace, without debugging them, simply to watch their execution more carefully.
To identify the failed test cases, make the results file active and select the appropriate menu command. All failed test cases are marked, and the test plan file is made the active file.