
How to run tests and test suites

Published

February 11, 2026

ValidMind provides many built-in tests and test suites that help you produce documentation at the stages of the model lifecycle where you need to ensure that your work satisfies regulatory and risk management requirements.

Explore tests

Start by exploring the ValidMind Library's available tests and test suites:

  • Explore tests using the library
  • Explore tests in our documentation
Explore tests
Explore the individual out-of-the-box tests available in the ValidMind Library, and identify which tests to run to evaluate different aspects of your model. Browse available tests, view their descriptions, and filter by tags or task type to find tests relevant to your use case.
Explore test suites
Explore ValidMind test suites, pre-built collections of related tests used to evaluate specific aspects of your model. Retrieve available test suites and details for tests within a suite to understand their functionality, allowing you to select the appropriate test suites for your use cases.
Test sandbox beta
Explore our interactive sandbox to see what tests are available in the ValidMind Library and how you can use them in your own code.
Test descriptions
Tests that are available as part of the ValidMind Library, grouped by type of validation or monitoring test.
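If you prefer to explore from code, you can also list and inspect tests directly in a notebook. The following is a minimal sketch, assuming the vm.tests.list_tests and vm.tests.describe_test helpers and the filter argument behave as shown; check the Python API reference for the exact signatures:

```python
import validmind as vm

# List registered tests; the optional filter narrows the results to tests
# whose ID, name, or tags contain the given string.
vm.tests.list_tests(filter="ClassImbalance")

# Show the full description, required inputs, and parameters for one test.
vm.tests.describe_test("validmind.data_validation.ClassImbalance")
```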

When do I use tests and test suites?

While you have the flexibility to decide when to use which ValidMind tests, we have identified a few typical scenarios with their own characteristics and needs:

  • Dataset testing
  • Model testing
  • End-to-end testing

To document and validate your dataset:

  • For generic tabular datasets: use the tabular_dataset test suite.
  • For time-series datasets: use the time_series_dataset test suite.

To document and validate your model:

  • For binary classification models: use the classifier test suite.
  • For time series models: use the timeseries test suite.

To document a binary classification model and the relevant dataset end-to-end:

Use the classifier_full_suite test suite.
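For example, an end-to-end run over a binary classification model and its dataset can look like the following. This is a minimal sketch that assumes you already have a pandas DataFrame df and a trained classifier model, and that the vm.run_test_suite entry point accepts the keyword arguments shown; confirm the exact signature in the Python API reference.

```python
import validmind as vm

# Register your raw data and trained model with the library.
# df, model, and the column name below are placeholders for your own objects.
vm_dataset = vm.init_dataset(
    dataset=df,
    input_id="raw_dataset",
    target_column="target",
)
vm_model = vm.init_model(model, input_id="classifier")

# Run the end-to-end suite for a binary classification model and its dataset
# (assumed entry point and keyword names).
vm.run_test_suite(
    "classifier_full_suite",
    dataset=vm_dataset,
    model=vm_model,
)
```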

Run tests

Next, learn how to use the ValidMind Library to run tests on datasets and models:

  • Test running basics
  • Configuring tests
  • Using tests in documentation
Run dataset based tests
Use the ValidMind Library's run_test function to run built-in or custom tests that take any dataset or model as input. These tests generate outputs in the form of text, tables, and images that get populated in model documentation.
Run comparison tests
Use the ValidMind Library's run_test function to run built-in or custom tests that take any combination of datasets or models as inputs. Comparison tests allow you to run an existing test over different groups of inputs and produce a single consolidated list of outputs in the form of text, tables, and images that get populated in model documentation.
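Both styles go through the same run_test function: pass a single set of inputs for a dataset-based test, or an input grid to produce a consolidated comparison across several inputs. A minimal sketch, assuming previously initialized vm_train_ds and vm_test_ds dataset objects, a vm_model model object, and the input_grid argument as used in current library examples:

```python
from validmind.tests import run_test

# Dataset-based test: one dataset in, one result out.
run_test(
    "validmind.data_validation.ClassImbalance",
    inputs={"dataset": vm_train_ds},
)

# Comparison test: the same test runs once per combination of inputs and the
# results are consolidated into a single output.
run_test(
    "validmind.model_validation.sklearn.ConfusionMatrix",
    input_grid={
        "dataset": [vm_train_ds, vm_test_ds],
        "model": [vm_model],
    },
)
```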
Customize test result descriptions
When you run ValidMind tests, test descriptions are automatically generated with an LLM using the test results, the test name, and the static test definitions provided in the test's docstring. While this metadata offers valuable high-level overviews of tests, insights produced by the LLM-based descriptions may not always align with your specific use…
Enable PII detection in tests
Learn how to enable and configure Personally Identifiable Information (PII) detection when running tests with the ValidMind Library. Choose whether or not to include PII in generated test descriptions, and whether or not to include PII in test results logged to the ValidMind Platform.
Dataset Column Filters when Running Tests
To run a test on a dataset but only include certain columns from that dataset, you can use the columns option. Instead of passing the dataset object or dataset input ID directly, pass a dictionary for the test's dataset input that names the dataset and the columns to include (see the sketch below).
Run tests with multiple datasets
To support running tests that require more than one dataset, ValidMind provides a mechanism that allows you to pass multiple datasets as inputs.
Understand and utilize RawData in ValidMind tests
Test functions in ValidMind can return a special object called RawData, which holds intermediate or unprocessed data produced somewhere in the test logic but not returned as part of the test's visible output, such as in tables or figures.
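As an illustration of the column-filter option mentioned above, the dataset input is replaced with a small dictionary. This is a minimal sketch that assumes the dictionary keys are input_id and columns, and that a dataset was previously registered with input_id "raw_dataset"; confirm the exact key names in the linked guide.

```python
from validmind.tests import run_test

# Run descriptive statistics on only two columns of a previously initialized
# dataset (registered with input_id="raw_dataset").
run_test(
    "validmind.data_validation.DescriptiveStatistics",
    inputs={
        "dataset": {
            "input_id": "raw_dataset",      # which registered dataset to use
            "columns": ["age", "income"],   # hypothetical column names
        }
    },
)
```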
Document multiple results for the same test
Documentation templates facilitate the presentation of multiple unique test results for a single test.
Run individual documentation sections
For targeted testing, you can run tests on individual sections or specific groups of sections in your model documentation.
Run documentation tests with custom configurations
When running documentation tests, you can configure inputs and parameters for individual tests by passing a config as a parameter.
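Both of these documentation workflows are driven by the same entry point. A minimal sketch, assuming the vm.run_documentation_tests helper with section and config keyword arguments, a section named data_preparation in your documentation template, and the per-test config structure shown here; confirm the details against the linked notebooks:

```python
import validmind as vm

# Override inputs and parameters for individual tests in the template.
config = {
    "validmind.data_validation.ClassImbalance": {
        "inputs": {"dataset": "raw_dataset"},        # assumed input mapping
        "params": {"min_percent_threshold": 10},     # hypothetical parameter value
    },
}

# Run only the data-preparation section of the documentation template,
# applying the per-test overrides above.
vm.run_documentation_tests(section="data_preparation", config=config)
```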

Can I use my own tests?

Absolutely! ValidMind supports custom tests that you develop yourself or that are provided by third-party test libraries, also referred to as test providers.

We provide instructions with code examples that you can adapt:

Implement custom tests
Custom tests extend the functionality of ValidMind, allowing you to document any model or use case with added flexibility.
Integrate external test providers
Register a custom test provider with the ValidMind Library to run your own tests.
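To give a flavor of what a custom test looks like, the library lets you register a plain Python function as a test with a decorator. A minimal sketch, assuming the @vm.test decorator and a previously initialized dataset object vm_dataset; see the notebooks above for the full workflow:

```python
import matplotlib.pyplot as plt
import validmind as vm
from validmind.tests import run_test

@vm.test("my_custom_tests.TargetDistribution")
def target_distribution(dataset):
    """Plot the distribution of the dataset's target column as a bar chart."""
    fig, ax = plt.subplots()
    dataset.df[dataset.target_column].value_counts().plot(kind="bar", ax=ax)
    ax.set_title("Target distribution")
    plt.close(fig)  # avoid duplicate rendering in notebooks
    return fig

# Run the custom test like any built-in test.
run_test("my_custom_tests.TargetDistribution", inputs={"dataset": vm_dataset})
```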

What's next

Learn more about using the other features of the ValidMind Library and browse our code samples by use case:

How to use ValidMind Library features
Browse our range of Jupyter Notebooks demonstrating how to use the core features of the ValidMind Library. Use these how-to notebooks to get familiar with the library's capabilities and apply them to your own use cases.
Code samples
Our Jupyter Notebook code samples showcase the capabilities and features of the ValidMind Library, while also providing you with useful examples that you can build on and adapt for your own use cases.