# Dashboard

<figure><img src="https://209747829-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F4T7DHW2jpYykW9Zaqh7V%2Fuploads%2F61ZBkspgzcePnSwNVYeR%2Fimage.png?alt=media&#x26;token=844c62cf-fc2c-496f-8926-24e61f671ca2" alt="Test Analytics"><figcaption></figcaption></figure>

## Dashboard Actions

**Each Project has its own Dashboard where Admins can:**

* Monitor the Health of the Project's Integrations to keep the Project Optimized.
* Gain insight into the Maturity of the Model for the active Project via the "Maturity Wheel".
* Gain Visibility into the Value Appsurify brings through the "Time Savings" and "Risk Based Test Selection" Graphs.

{% hint style="info" %}

* **Repo Bind** - Upon success of Step 2 [Repo Connection](https://docs.appsurify.com/getting-started/step-2-connect-repository) - <mark style="color:green;">Green</mark>
* **Tests Bind** - Upon success of Step 3 [Testsuite Connection](https://docs.appsurify.com/getting-started/step-3-connect-tests) - <mark style="color:green;">Green</mark>
* **Building Model** - Upon flow of Test Runs - <mark style="color:green;">Green</mark>
  * If any of these turn <mark style="color:red;">Red</mark>, Appsurify has not received Data in the last Three Days and there may be a Connection Issue.
  * Once Data resumes flowing, they will turn back to <mark style="color:green;">Green</mark>.
* **Model Maturity Wheel**: Builds as your Model Matures from received Test Run Outputs. Once it reaches 100%, your Model will Train over the following weekend and be ready to use.
{% endhint %}

## Dashboard Insights

The Dashboard serves as the Central Hub for the Project, providing the team with meaningful insights into:

<figure><img src="https://209747829-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F4T7DHW2jpYykW9Zaqh7V%2Fuploads%2FpBYe3VzTmcKqOvkpR2Pd%2Fimage.png?alt=media&#x26;token=149afc7f-e58d-4702-a4f2-a88d5654839a" alt="Test Analytics Time Savings"><figcaption></figcaption></figure>

* **Visual Display of Value** generated from AI-Powered Risk Based Testing via the "Time Savings" and "Risk Based Test Selection" Graphs:
  * The high-water mark serves as the "Before" (without Appsurify) baseline.
  * Once Appsurify is enabled, the lower Green line shows the Value generated through Appsurify's Optimization of Time and Test Selection Execution.

{% hint style="info" %}
The Horizontal Bar above the Graphs, explained:

* Use the "Test Runs" dropdown to select the 30, 20, 10, 5, or 1 most recent Test Runs; the Boxes show the **TOTAL** for the selected criteria.
  * For example, if you select the Last 5 Test Runs with a Testsuite of 100 Tests, the Tests Box will display 500 Tests.
* Select the "Test Suite" dropdown if you are optimizing more than 1 Test Suite.
* Once these are selected, the following Boxes display the accumulated results for the selected criteria.
{% endhint %}
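The "Test Runs" totals can be sketched as simple multiplication. A minimal sketch with hypothetical numbers (not from a live project):

```python
# Sketch with hypothetical numbers: how the Tests box totals the
# selected "Test Runs" window.
runs_selected = 5      # "Last 5 Test Runs" chosen in the dropdown
tests_per_run = 100    # size of the testsuite

total_tests = runs_selected * tests_per_run
print(total_tests)  # 500 — the value the Tests box would display
```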

## Horizontal Boxes Defined

* **Original Duration**: Total original Time it takes for **ALL TESTS** (100% of the Testsuite) to complete.
* **Time Savings**: Total Time Saved through Prioritized Testing.
  * The Difference between the Original Duration and the New Duration.
* **New Duration**: Total Time of the **optimized** Test Run execution.
* **Tests**: Total number of Tests in the Testsuite.
* **Passed**: Total number of Tests that Passed.
* **Failed**: Total number of Tests that Failed.
* **Broken**: Total number of Tests that were Broken or Failed to Run.
* **Optimized**: Total number of Tests Appsurify decided were **irrelevant** given recent Developer Changes and chose **Not To Run** for Optimization purposes.

{% hint style="info" %}
**Example**:\
**100 Tests** that take **10 Minutes** to complete, with **1 Failed Test** in each Run.

The latest **10 Test Runs** tab is selected and Appsurify is at **90% Optimization**.

* Original Duration: 100 Min
* Time Savings: 90 Min
* New Duration: 10 Min
* Tests: 1000
* Passed: 90
* Failed: 10
* Broken: 0
* Optimized: 900
{% endhint %}
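The box values in the example above follow from a few lines of arithmetic. A minimal sketch (illustrative only, not Appsurify code):

```python
# Hypothetical inputs matching the example above.
runs = 10              # latest 10 Test Runs selected
tests_per_run = 100
minutes_per_run = 10
failed_per_run = 1
optimization = 0.90    # 90% of tests skipped per run

original_duration = runs * minutes_per_run             # ~100 min
new_duration = original_duration * (1 - optimization)  # ~10 min
time_savings = original_duration - new_duration        # ~90 min

tests = runs * tests_per_run                # 1000
optimized = round(tests * optimization)     # 900
failed = runs * failed_per_run              # 10
passed = tests - optimized - failed         # tests that ran and passed
```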

## Test Run Summary at Bottom Defined

The latest 20 Test Runs are displayed with their individual characteristics on a per run basis:

<figure><img src="https://209747829-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F4T7DHW2jpYykW9Zaqh7V%2Fuploads%2Fsz81RwsIodYiTGabG5TF%2Fimage.png?alt=media&#x26;token=50df3d89-41f2-4e34-a46d-76a38b80d5af" alt="automation test efficiency graph"><figcaption><p>Per Test Run Summary </p></figcaption></figure>

* **Date**: Timestamp of the Test Run.
* **Status**: Build either <mark style="color:green;">Passed</mark> or <mark style="color:red;">Failed</mark>
* **Build Name**: Name of the Build for reference.
* **Duration**: Duration of the Run (factoring in Optimization, if enabled).
* **Tests**: Total number of Tests in the Testsuite (this should not change much, and only when Tests are added to the Testsuite).
* **Passed**: Number of Tests that Passed
* **Failed**: Number of Tests Failed
* **Broken**: Number of Tests Broken or Failed to Run
* **Optimized**: Number of Tests that Appsurify's AI chose not to run due to them being **irrelevant** to the areas where Developers made changes.&#x20;
* **Efficiency**: If Optimization is engaged, the Efficiency at which the Test Run was executed.
  * For example, if Appsurify is Executing the Top 10% of Tests, Efficiency should hover around the \~90% Mark.
  * If Appsurify is not yet enabled, Efficiency should hover around \~0%.

{% hint style="info" %}
**Example:**

**100 Tests** that take **10 Minutes** to complete, with **1 Failed Test** in each Run, and Appsurify running the **Top 10%** of Tests most likely to Fail:

* Duration: \~1 Min
* Tests: 100
* Passed: 9
* Failed: 1
* Broken: 0
* Optimized: 90
* Efficiency: <mark style="color:green;">\~90%</mark>
{% endhint %}
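Efficiency is simply the skipped share of the suite. A one-line sketch with the example's (hypothetical) per-run numbers:

```python
# Hypothetical per-run numbers from the example above.
tests = 100
optimized = 90   # tests skipped as irrelevant to recent changes

efficiency = optimized / tests   # fraction of the suite not run
print(f"{efficiency:.0%}")       # prints "90%"
```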

## How do we [TRUST these Results](https://docs.appsurify.com/ui-and-value-features/your-model-insights)?
