The harness does not automatically create reports of test results after a test run. You must create test reports either from the harness GUI or from the command line in batch mode (see Writing Reports in the Command-Line Interface User's Guide).
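For example, a batch run that creates reports might look like the following sketch. This assumes the writeReport command described in Writing Reports in the Command-Line Interface User's Guide; the jar name, configuration file, and report path are placeholders, so verify the exact syntax against your harness version.

    # Run the tests described by a saved configuration, then write reports.
    # javatest.jar, myconfig.jti, and the report path are placeholder names.
    java -jar javatest.jar -config myconfig.jti -runtests -writeReport /path/to/reports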
Reports might contain configuration values as well as test results. To ensure that any configuration values printed in reports are those that were used to run the tests, either create the reports immediately after running the tests or use a different work directory for each configuration. Doing so minimizes the chance of creating reports with configuration values that are inconsistent with the test results.
To create a test report from the harness GUI, complete the following steps:
You can specify either a new directory or an existing directory. If you created reports earlier, the Report Directory field displays the directory from the previous run. If you use an existing report directory, the harness can save the previous reports as backups when it writes the new reports. Use the settings in the Backup Options pane to back up old reports and to specify the number of backup reports to keep in the report directory.
See Using View Filters for a description of the filters.
See Custom View Filter for a description of how to create a custom view filter.
When "Backup old reports" is enabled, the harness saves the previous reports by appending a tilde and a sequential number to the
.html
extension (such as failed.html~1~
). The harness maintains
the specified number of copies by deleting the oldest copy when the limit of
backups is reached for a specific report. Changing this or any other setting
in the report dialog box does not alter any previously saved backup reports.
Existing backups are not deleted if backups are turned off and file names are
not changed as old backups are deleted.
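For example, with the backup limit set to 2, a report directory might contain entries like these (a hypothetical listing; the sequence numbers distinguish successive backups of the same report):

    failed.html        current report
    failed.html~1~     backup
    failed.html~2~     backup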
The harness backs up the entire set of reports. If the first set of reports included HTML, Text, XML, and COF formats, and the next set of reports is generated in XML format only, the harness creates backups of the previous reports (appending the appropriate tilde and sequence number to each backup file name). The harness generates the new top-level index.html file and updates its hyperlinks to maintain a self-consistent backup report set.
HTML Report: An HTML report showing configuration information, a result summary, and environment values. See HTML Options and KFL Options.
Plain Text Report: A text report showing only the test names and their result status.
XML Report: An XML report containing configuration information, a result summary, and all result data for each test. This report is versatile: it can be used as input to Merge Reports and can be converted to other report formats. However, the file can be larger and slower to generate than the other formats.
COF Report: The report browser displays a cof.xml hyperlink to the COF report page, which contains a report in the COF format. COF is an extensive XML format containing environment, configuration, and test result data.
The harness writes the reports and displays a dialog box that gives you the option of either viewing the new reports in the report browser or returning to the Test Manager window.
If you selected HTML Report in the Report Formats list, use the HTML Options tab to select the sections of the main HTML report file that are generated.
The following options are available for generating HTML reports:
The HTML report provides hyperlinks to content in the other HTML files. If this option is not selected, the hyperlinks are not generated.
Use the options on this tab to specify the content and the location of HTML reports. The main report is the home page for browsing HTML reports. It can be named report.html or index.html. The main report contains hyperlinks to the extra files you choose to generate. The extra files you can choose are as follows:
Passed tests (passed.html): Tests that were executed during the test run and passed.
Failed tests (failed.html): Tests that were executed during the test run but failed.
Tests with errors (error.html): Tests that had errors and could not be run.
Tests not run (notrun.html): Tests that were not excluded from the test run but were not run.

The main report and any extra files are written to the location you specified in the Report Directory field.
Known failure list reports enrich the reporting output so you can monitor the status of certain tests across test runs. This section describes how to create and use a known failures list and discusses KFL reports and the Known Failure Analysis.
A known failures list (KFL) is a list of tests or test cases that are known to fail. Its purpose is to provide failure data for reporting, so that failure behavior can be tracked over time. Using a known failures list is optional, and the feature is off by default. To activate the feature, answer Yes to the "Specify a Known Failures List?" question in the configuration editor.
Once KFL files have been specified (see Creating a Known Failures List and Specifying a Known Failures List), you can choose the Known Failures option on the HTML Options tab. This enables the Known Failure Analysis options on the KFL Options tab. Options you check on this tab are preserved as preferences for future reports, whether reports are created from the user interface or from the command line.
Reporting Options:
Optional files/sections to generate (these options are set to Yes by default):

Old Failure (kfl_fail2fail.html)
Unexpected Error (kfl_fail2error.html)

Any generated data is added to the Known Failure Analysis, which is found toward the end of the HTML report (report.html and/or index.html). It is a summary of various comparisons between the selected result set (Current Configuration, Last Test Run, and so on) and the items listed in the KFLs provided in the configuration.
The KFL is a text file with the extension .kfl.
The KFL file name or its path must not begin with a dash ("-") or contain spaces. As described in Specifying Known Failures Lists With kfl in the Command-Line Interface User's Guide, a space is a separator on the command line; therefore, file path arguments such as C:\Program Files\myconfig\foobar do not work.
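As an illustrative sketch, assuming the kfl option described in that guide takes the file path as its next argument (the paths here are hypothetical):

    # Fails: the space in "Program Files" splits the path into two arguments
    -kfl C:\Program Files\myconfig\foobar.kfl

    # Works: the same file at a path without spaces
    -kfl C:\tests\myconfig\foobar.kfl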
The .kfl file lists one test or test case per line. You have the option of specifying a bug ID following the test name (a space separator is required). Use # for comments. For example:
    # Demo.kfl
    lists/DoublyLinkedList/insertTest.html 0000123
    BigNum/subtractTest.html 0000456
    BigNum/compareTest.html 0000789
If you provide a bug prefix URL, each bug ID is appended to that address, creating a convenient link in your report. To specify the bug prefix URL, select File > Preferences. Under Test Manager, select Reporting, and specify the URL.
This feature does not validate the URL or perform any processing.
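For example, if the bug prefix URL preference were set to a hypothetical tracker address such as http://bugs.example.com/bug?id=, the entry 0000123 in the sample KFL shown earlier would be rendered in the report as a link to http://bugs.example.com/bug?id=0000123.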
Follow these steps to use one or more known failure lists to add failure analysis data to HTML reports.
If you have specified KFL files in the configuration editor, it is preferable to modify the list using the configuration editor. If you want to modify the list of KFL files from the command line, see Specifying Known Failures Lists with kfl in the Command-Line Interface User's Guide.
If you specified a known failures list in the configuration editor, the HTML report includes a section titled Known Failure Analysis. When you create a report, you can check the HTML option "Known Failures" to create the New Passed, Missing (not run), and New Failures reports. The options on the KFL Options tab generate reports for New Passed, Missing (test not found), and Old Failure.
The Known Failure Analysis table in the HTML report attempts to describe the differences found in the set of results being analyzed versus what you provided in the KFL. The KFL is the set of expected failures of tests and test cases; if they did not fail in the current set of results, that is considered a discrepancy. This is a sample table:
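The table below is an illustrative sketch; the row names match the report categories described later in this section, and the numbers are invented rather than taken from an actual test run:

Report Category | Tests (6) | Test Cases (4) |
---|---|---|
New Passed | 2 | 1 |
Unexpected Error | 1 | 0 |
Missing (not run) | 0 | 1 |
Missing (test not found) | 0 | 0 |
New Failure | 3 | 2 |
Old Failure | 5 | 3 |

In this sketch, the Tests column shows 6 discrepancies (2 + 1 + 0 + 0 + 3) and the Test Cases column shows 4 (1 + 0 + 1 + 0 + 2); the Old Failure row is excluded from both totals.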
The headers Tests (#) and Test Cases (#) contain a number representing the number of discrepancies found in that column. The numbers below each header should add up to that number (Old Failure is not counted, because it is not a discrepancy).
The Known Failure Analysis table contains numbers linked to each of the report categories below. Clicking a number takes you to a file containing the details: essentially the information found in the KFL file, that is, the test or test case name and the associated bug IDs. The test names are hyperlinked to the JTR result file in the work directory, and the bug IDs are hyperlinked if the bug URL preference is set in the Test Manager Preferences (as described in Reporting).
New Passed: A test or test case in the current set of results has a Passed status when it was expected to fail.
Sample scenario:

1. The user knows that test abc123 fails whenever the tests are run against the product, and has put test abc123 on the known failures list (KFL).
2. Test abc123 now passes because the defect has been fixed.
3. The user sees abc123 listed and investigates the reason this test is suddenly passing.
4. The user removes abc123 from the KFL.

Unexpected Error: A test or test case in the current set of results has an Error status, but it was expected to fail. An Error usually indicates some sort of problem executing the test. Investigate items in this category to see why they resulted in an Error rather than a simple Pass or Fail.
Missing (not run): For some reason, a test on the KFL was not run in the set of results being reported. This may not indicate a problem if a partial set of results is being analyzed, or if the KFL contains a wide set of tests that would never all appear in a single set of results.
Missing (test not found): The results for a test listed in the KFL are missing. Because the test was listed in the KFL, it was expected to exist and fail. A missing test may not indicate a problem, but it should be investigated. This section can be disabled by deselecting the appropriate option in the HTML KFL report options in the Create Report dialog box.
New Failure: Any test or test case that has a Failed status in the current results but does not appear in the KFL.
Old Failure: The lists of tests and test cases that failed and were expected to fail. This is not a discrepancy; it is provided for informational purposes.
The possible contents of the report directory are as follows:
    index.html
    /html
        config.html
        env.html
        error.html
        excluded.html
        failed.html
        kfl_fail2error.html
        kfl_fail2fail.html
        kfl_fail2missing.html
        kfl_fail2notrun.html
        kfl_fail2pass.html
        kfl_newfailures.html
        kfl_tc_fail2error.html
        kfl_tc_fail2missing.html
        kfl_tc_fail2notrun.html
        kfl_tc_fail2pass.html
        kfl_tc_newfailures.html
        notRun.html
        passed.html
        report.css
        report.html
    /text
        summary.txt
    /xml
        report.xml
In the html directory, the KFL report file names correspond to the KFL reports as follows:
HTML Report | File Names |
---|---|
New Passed | kfl_fail2pass.html, kfl_tc_fail2pass.html |
Unexpected Error | kfl_fail2error.html, kfl_tc_fail2error.html |
Missing (Not run) | kfl_fail2notrun.html, kfl_tc_fail2notrun.html |
Missing (test not found) | kfl_fail2missing.html, kfl_tc_fail2missing.html |
New Failure | kfl_newfailures.html, kfl_tc_newfailures.html |
Old Failure | kfl_fail2fail.html, kfl_tc_fail2fail.html |
Copyright © 2002, 2011, Oracle and/or its affiliates. All rights reserved.