
Dashboard

Overview on a Per Project Basis


Last updated 2 years ago


Dashboard Actions

Each Project has its own Dashboard where Admins can:

  • Follow the Health and Integration status of the Project they are Optimizing.

  • Gain insight into the Maturity of their Model for the active Project via the "Maturity Wheel".

  • See the Value Appsurify brings through the "Time Savings" and "Risk Based Test Selection" Graphs.

  • Health Indicators (each turns Green on success):

    • Repo Bind - Green upon success of Step 2 (Connect Repository).

    • Tests Bind - Green upon success of Step 3 (Connect Tests).

    • Building Model - Green once Test Runs begin flowing in.

    • If any of these turn Red, Appsurify has not received Data in the last Three Days and there may be a Connection Issue.

    • Once data resumes flowing, these will turn back to Green.

  • Model Maturity Wheel: builds as your Model matures from received Test Run outputs. Once it hits 100%, your Model will train over the following weekend and be ready to use.

Dashboard Insights

The Dashboard serves as the Central Hub for the project overall, providing the team with meaningful insights into:

  • Visual Display of the Value generated from AI-Powered Risk Based Testing via the "Time Savings" and "Risk Based Test Selection" Graphs:

    • The high-water mark serves as the "Before" or "Without Appsurify" baseline.

    • Once Appsurify is enabled, the lower Green line shows the Value generated through Appsurify's Optimization of Time and Test Selection Execution.

Horizontal Bar above graphs explained:

  • Use the "Test Runs" dropdown tab to select the latest 30, 20, 10, 5, or 1 most recent Test Runs; the boxes display the TOTAL across the selected runs.

    • For example, selecting the Last 5 Test Runs with a testsuite of 100 Tests, the Tests Box will display 500 Tests.

  • Select "Test Suite" dropdown if you are optimizing more than 1 test suite.

  • Once these are selected, the following Boxes will display the accumulated results for the total selected criteria.

Horizontal Boxes Defined:

  • Original Duration: Total original time it takes for ALL TESTS (100% of the Testsuite) to complete.

  • Time Savings: Total time saved through Prioritized Testing.

    • The difference between Original Duration and New Duration.

  • New Duration: Total time of the optimized Test Run execution.

  • Tests: Total number of Tests in the testsuite.

  • Passed: Total number of Tests that Passed.

  • Failed: Total number of Tests that Failed.

  • Broken: Total number of Tests that were Broken or Failed to run.

  • Optimized: Total number of Tests Appsurify decided were irrelevant given recent Developer Changes and chose Not To Run for optimization purposes.

Example: 100 Tests that take 10 Minutes to complete, with 1 Failed Test in each Run. The Latest 10 Test Runs tab is selected and Appsurify is at 90% Optimization:

  • Original Duration: 100 Min

  • Time Savings: 90 Min

  • New Duration: 10 Min

  • Tests: 1000

  • Passed: 90

  • Failed: 10

  • Broken: 0

  • Optimized: 900
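The box values in the example above follow directly from the run count and the per-run figures. A minimal sketch of that arithmetic (the variable names are illustrative only, not part of the Appsurify product or API):

```python
# Illustrative check of the dashboard box arithmetic above.
# Integer percentages keep the math exact.
runs_selected = 10        # "Latest 10 Test Runs" tab
tests_per_run = 100       # size of the testsuite
minutes_per_run = 10      # original duration of one full run
failed_per_run = 1        # one failing test in each run
optimization_pct = 90     # Appsurify skipping the bottom 90%

original_duration = runs_selected * minutes_per_run            # 100 min
time_savings = original_duration * optimization_pct // 100     # 90 min
new_duration = original_duration - time_savings                # 10 min

tests = runs_selected * tests_per_run                          # 1000
optimized = tests * optimization_pct // 100                    # 900
failed = runs_selected * failed_per_run                        # 10
passed = tests - optimized - failed                            # 90
```

Note that Passed counts only tests that actually ran; the Optimized tests were skipped, so Tests = Passed + Failed + Broken + Optimized.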

Test Run Summary at Bottom Defined

The latest 20 Test Runs are displayed, with their individual characteristics shown on a per-run basis:

  • Date: Time stamp of the Run.

  • Status: Whether the Build Passed or Failed.

  • Build Name: Name of the Build for reference.

  • Duration: Duration of the Run (factoring in Optimization, if enabled).

  • Tests: Total number of Tests in the Testsuite (this should change only when tests are added to the testsuite).

  • Passed: Number of Tests that Passed.

  • Failed: Number of Tests that Failed.

  • Broken: Number of Tests that were Broken or Failed to Run.

  • Optimized: Number of Tests that Appsurify's AI chose not to run because they were irrelevant to the areas where Developers made changes.

  • Efficiency: If Optimization is engaged, the efficiency at which the Test Run was executed.

    • For example, if Appsurify is executing the Top 10% of Tests, Efficiency should hover around the ~90% mark.

    • If Appsurify is not yet enabled, Efficiency should hover around ~0%.

Example: 100 Tests that take 10 Minutes to complete, with 1 Failed Test in each Run, and Appsurify running the Top 10% of Tests most likely to Fail:

  • Duration: ~1 Min

  • Tests: 100

  • Passed: 9

  • Failed: 1

  • Broken: 0

  • Optimized: 90

  • Efficiency: ~90%
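The per-run row above can be checked the same way; again, the names below are illustrative only, not part of the Appsurify product or API:

```python
# Illustrative check of one optimized Test Run row above.
tests = 100               # total tests in the testsuite
minutes_full_run = 10     # duration with no optimization
optimization_pct = 90     # run only the top 10% riskiest tests

optimized = tests * optimization_pct // 100                  # 90 skipped
executed = tests - optimized                                 # 10 actually run
duration = minutes_full_run * (100 - optimization_pct) // 100  # ~1 min
failed = 1
passed = executed - failed                                   # 9
efficiency_pct = optimized * 100 // tests                    # ~90%
```

Efficiency is simply the share of the testsuite that Appsurify skipped, which is why it tracks the configured optimization level.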

How do we TRUST these Results!

(Figures: Repo Connection, Testsuite Connection, Per Test Run Summary, Test Analytics, Test Analytics Time Savings, automation test efficiency graph.)