Smart Test Selection Explained
Why do we choose some Tests and not others?
The simple answer: tests are chosen based on recent developer changes and the trained model!
When Appsurify connects to the Repository and begins to receive incoming Commit data, it first analyzes the Repository for its basic structure.
Once the Automated Tests are connected, Appsurify leverages its patent-pending, proprietary AI Risk-Based Testing technology to train a unique AI Model for each company based on incoming Commits and the corresponding test results. Over a period of 2-3 weeks, Appsurify trains a robust AI Model that links Code to Tests, learning which Commits impacted which Tests and forming an association between the two.
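To illustrate the general idea (this is a minimal sketch, not Appsurify's actual training pipeline), the example below builds a simple association between code areas and the tests they tend to affect, using historical pairs of changed files and failed tests. All file paths, test names, and data are hypothetical.

```python
# Illustrative sketch only -- not Appsurify's actual training pipeline.
# Idea: historical commits and their test results are used to build an
# association between code areas and the tests they affect.

from collections import defaultdict

def train_model(history):
    """history: list of (changed_files, failed_tests) pairs from past builds."""
    counts = defaultdict(lambda: defaultdict(int))
    for changed_files, failed_tests in history:
        for path in changed_files:
            area = path.rsplit("/", 1)[0] + "/"   # group changes by directory
            for test in failed_tests:
                counts[area][test] += 1           # this change area broke this test
    return counts

# Hypothetical build history
history = [
    (["src/billing/invoice.py"], ["test_invoice_totals"]),
    (["src/billing/tax.py"], ["test_tax_rules", "test_invoice_totals"]),
    (["src/auth/login.py"], ["test_login"]),
]
model = train_model(history)
print(dict(model["src/billing/"]))
# {'test_invoice_totals': 2, 'test_tax_rules': 1}
```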
Once the AI Model is trained, when a new Commit comes in, Appsurify knows which Tests are associated with that region of the Codebase, and then automatically selects and executes the relevant Tests in order of Priority in the CI/CD pipeline, based on the parameters set by the team.
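The sketch below shows how such a trained association could be used to score and prioritize tests for an incoming commit. Again, this is only an illustration of the concept, not Appsurify's implementation; the file paths, test names, and weights are hypothetical.

```python
# Illustrative sketch only -- a trained mapping from code areas to tests
# is used to score and prioritize tests for an incoming commit.

from collections import defaultdict

# Hypothetical trained association: code area -> {test: historical relevance weight}
model = {
    "src/billing/": {"test_invoice_totals": 0.9, "test_tax_rules": 0.7},
    "src/auth/":    {"test_login": 0.8, "test_session_expiry": 0.6},
}

def select_tests(changed_files, model, max_tests=10):
    """Rank tests by how strongly they relate to the files in a new commit."""
    scores = defaultdict(float)
    for path in changed_files:
        for area, tests in model.items():
            if path.startswith(area):
                for test, weight in tests.items():
                    scores[test] = max(scores[test], weight)
    # Highest-risk tests first, capped by the team's parameters.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:max_tests]

print(select_tests(["src/billing/invoice.py"], model))
# ['test_invoice_totals', 'test_tax_rules']
```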
The AI Model is recalibrated on a rolling basis to ensure it is always up to date. For more information on our Test Selection, please see the Appsurify website.
Appsurify is designed to catch as many defects as possible while running as few tests as possible.
It aims to find bugs efficiently within its Smart Subset while leaving room to surface other potential defects that cause additional test failures. One bug often causes more than one test failure, frequently several, so catching as many bugs with as few tests as possible keeps test runs both fast and effective.
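As a rough illustration of this "as many bugs in as few tests as possible" idea (a hypothetical sketch, not Appsurify's algorithm), a greedy selection can pick tests that cover the most not-yet-covered defect areas. The test names and coverage data below are made up.

```python
# Illustrative sketch only -- greedily pick tests that cover the most
# not-yet-covered defect areas, so few tests catch many potential bugs.

def pick_smart_subset(test_coverage, budget):
    """test_coverage: {test name: set of defect areas it exercises}."""
    covered, subset = set(), []
    for _ in range(budget):
        best = max(test_coverage, key=lambda t: len(test_coverage[t] - covered), default=None)
        if best is None or not (test_coverage[best] - covered):
            break  # nothing new left to cover
        subset.append(best)
        covered |= test_coverage[best]
        test_coverage = {t: areas for t, areas in test_coverage.items() if t != best}
    return subset

coverage = {
    "test_checkout_flow": {"billing", "cart"},
    "test_invoice_totals": {"billing"},
    "test_login": {"auth"},
}
print(pick_smart_subset(coverage, budget=2))
# ['test_checkout_flow', 'test_login'] -- two tests cover all three areas
```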