Test Run Page
Individual Test Run Details
Once Test Runs begin flowing, they will populate here, and all runs will remain here historically. The Dashboard and Your Model Insights pages display only the most recent 20 Test Runs.
Here you can dig into each test run for more details, such as:
Test Results on each Run (Passed, Failed, Broken, Not Run)
Edit Test Suite Execution
Test Analytics Per Run
Group Test Failures by Underlying Defect
Value: Pulls together all Failed Tests that were caused by a single underlying Defect for easy management.
Test Runs and Results
Provides standard information and results for individual test runs: Test Suite Name, Test Run Name, Project, Start Time, and Results.
Edit Test Suite Execution
Dive into each test run and determine how you want Appsurify to execute the Test Suite. The default settings for test execution are listed below (a configuration sketch follows the list):
Number of Reruns: Set to 1 by default; can be changed based on preference.
Rerun Flaky: Set to "True" by default; can be turned off ("False").
Auto Raise Defect: Set to "True" by default. When a test or group of tests fails, an underlying defect is raised for each unique failure.
Auto Close Defect: Set to "True" by default. When a test passes and the defect has been fixed, the defect is automatically closed.
Report after Failure: Set to "True" by default.
Test Analytics per Run
Dive into the granular results of each test run and see Test Failures grouped by underlying defect.
Click on a test result to see details about that run.
In the example below, there were 4 Test Failures:
Group Test Failures by Underlying Defect
Consolidate Test Failures by the underlying defect that caused multiple failures, for easier bug fixing. If one bug caused multiple tests to fail, teams just want to find that one underlying bug as quickly as possible (a conceptual sketch follows).
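As a conceptual sketch only (not Appsurify's implementation or data model), grouping failed tests under the defect they share could look like this. The FailedTest structure and defect_id field are assumptions for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical structure for illustration; not Appsurify's data model.
@dataclass
class FailedTest:
    name: str
    defect_id: str  # identifier of the underlying defect that caused the failure

def group_failures_by_defect(failures: list[FailedTest]) -> dict[str, list[str]]:
    """Collect failed test names under the defect that caused them."""
    groups: dict[str, list[str]] = defaultdict(list)
    for failure in failures:
        groups[failure.defect_id].append(failure.name)
    return dict(groups)

# Example: four test failures caused by two underlying defects.
failures = [
    FailedTest("test_login", "DEF-101"),
    FailedTest("test_logout", "DEF-101"),
    FailedTest("test_checkout", "DEF-102"),
    FailedTest("test_cart_total", "DEF-102"),
]
print(group_failures_by_defect(failures))
# {'DEF-101': ['test_login', 'test_logout'], 'DEF-102': ['test_checkout', 'test_cart_total']}
```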
Test Levels Defined
Tests are prioritized based on how likely they are to fail and find a defect.
They are prioritized into the following buckets:
High – Tests which are very likely to fail, initially those associated with the files being changed.
Medium – Tests that may fail, initially those associated with either the areas being changed or their dependencies.
Low – Tests that are unlikely to fail.
Unassigned – Tests for which there is no information: they have either not been manually linked to any area/file/folder or they have never failed.
Rerun – Tests that should be rerun.
When selecting which tests to run, the selection can be either for a single commit or for a set of commits; a rough sketch of selecting tests by priority follows.
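As an illustration only, the selection could work along the lines of the sketch below. The function name, priority labels, and example test names are assumptions that mirror the buckets above; this is not an Appsurify API.

```python
# Hypothetical sketch; not an Appsurify API. Priority labels mirror the buckets above.
PRIORITY_ORDER = ["high", "medium", "low", "unassigned", "rerun"]

def select_tests(tests_by_priority: dict[str, list[str]],
                 include: set[str]) -> list[str]:
    """Return the tests whose priority bucket is in the requested set."""
    selected: list[str] = []
    for level in PRIORITY_ORDER:
        if level in include:
            selected.extend(tests_by_priority.get(level, []))
    return selected

# Example: for a single commit we might run only the high-priority and rerun buckets;
# for a set of commits we might widen the selection to include medium-priority tests.
tests_by_priority = {
    "high": ["test_payment_flow"],
    "medium": ["test_invoice_totals"],
    "low": ["test_about_page"],
    "rerun": ["test_search_results"],
}
single_commit_run = select_tests(tests_by_priority, {"high", "rerun"})
commit_range_run = select_tests(tests_by_priority, {"high", "medium", "rerun"})
```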