Limitations of Vertex Explainable AI
As you consider the explanations returned from the service, you should keep in
mind these high-level limitations. For an in-depth explanation, refer to the
AI Explanations Whitepaper.
Meaning and scope of feature attributions
Consider the following when analyzing feature attributions provided by
Vertex Explainable AI:
Each attribution only shows how much the feature affected the prediction for
that particular example. A single attribution might not reflect the overall
behavior of the model. To understand approximate model behavior on an entire
dataset, aggregate attributions over the entire dataset (see the sketch after
this list).
The attributions depend entirely on the model and data used to train the
model. They can only reveal the patterns the model found in the data, and
can't
detect any fundamental relationships in the data. The presence or absence
of a strong attribution to a certain feature doesn't mean there is or is not a
relationship between that feature and the target. The attribution merely shows
that the model is or is not using the feature in its predictions.
Attributions alone can't tell you whether your model is fair, unbiased, or of
sound quality. Carefully evaluate your training data and evaluation metrics in
addition to the attributions.
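As an illustration of aggregating attributions, the following is a minimal sketch using the Vertex AI SDK for Python. It assumes a deployed tabular model with online explanations enabled; the endpoint ID, feature names, and instances are placeholders.

```python
from collections import defaultdict

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Hypothetical deployed tabular model with explanations enabled; the endpoint
# ID and feature names are placeholders.
endpoint = aiplatform.Endpoint("1234567890")

instances = [
    {"age": 39, "hours_per_week": 40.0},
    {"age": 52, "hours_per_week": 30.0},
    # ...more instances: a representative sample of your dataset
]

totals = defaultdict(float)
num_examples = 0
for start in range(0, len(instances), 100):  # request explanations in batches
    response = endpoint.explain(instances=instances[start:start + 100])
    for explanation in response.explanations:
        # Each explanation carries one attribution set per explained output.
        feature_attrs = dict(explanation.attributions[0].feature_attributions)
        for feature, value in feature_attrs.items():
            totals[feature] += abs(value)
        num_examples += 1

# The mean absolute attribution per feature across the sample approximates
# how much the model relies on each feature overall.
mean_abs = {f: total / num_examples for f, total in totals.items()}
for feature, score in sorted(mean_abs.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {score:.4f}")
```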
Improving feature attributions
When you are working with custom-trained models, you can configure specific
parameters to improve your explanations. This section does not apply
to AutoML models.
The following factors have the highest impact on feature attributions:
The attribution methods approximate the Shapley value. You can increase the
precision of the approximation by:
Increasing the number of integral steps for the integrated gradients or
XRAI methods.
Increasing the number of integral paths for the sampled Shapley method.
As a result, the attributions could change dramatically. The first sketch
after this list shows where these settings live.
The attributions only express how much the feature affected the change in
prediction value, relative to the baseline value. Be sure to choose a
meaningful baseline, relevant to the question you're asking of the model.
Attribution values and their interpretation might change significantly as you
switch baselines.
For integrated gradients and XRAI, using two baselines can improve your
results. For example, you can specify baselines that represent an entirely
black image and an entirely white image (see the second sketch after this
list).
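For illustration, a minimal sketch of where the step and path counts are set when configuring explanations with the Vertex AI SDK for Python; the specific counts are placeholders to tune for your model.

```python
from google.cloud import aiplatform

# Integrated gradients or XRAI: a higher step_count gives a more precise
# approximation, at the cost of slower explanations.
ig_params = aiplatform.explain.ExplanationParameters(
    integrated_gradients_attribution={"step_count": 100}
)

# Sampled Shapley: raise path_count instead.
shapley_params = aiplatform.explain.ExplanationParameters(
    sampled_shapley_attribution={"path_count": 25}
)
```

You pass one of these objects, together with your ExplanationMetadata, as explanation_parameters when you upload the custom-trained model.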
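And a sketch of supplying two image baselines through ExplanationMetadata; the tensor names, input shape, and pixel scale are placeholders that must match your own model.

```python
from google.cloud import aiplatform

# Placeholder 224x224x3 baselines on a 0.0-1.0 pixel scale.
black_image = [[[0.0] * 3 for _ in range(224)] for _ in range(224)]
white_image = [[[1.0] * 3 for _ in range(224)] for _ in range(224)]

metadata = aiplatform.explain.ExplanationMetadata(
    inputs={
        "image": aiplatform.explain.ExplanationMetadata.InputMetadata(
            input_tensor_name="input_layer",  # placeholder tensor name
            modality="image",
            input_baselines=[black_image, white_image],
        )
    },
    outputs={
        "scores": aiplatform.explain.ExplanationMetadata.OutputMetadata(
            output_tensor_name="softmax"  # placeholder tensor name
        )
    },
)
```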
Read more about improving feature attributions.
Limitations for image data
The two attribution methods that support image data are integrated gradients
and XRAI.
Integrated gradients is a pixel-based attribution method that highlights
important areas in the image regardless of contrast, making this method ideal
for non-natural images such as X-rays. However, the granular output can make it
difficult to assess the relative importance of areas. The default output
highlights areas in the image that have high positive attributions by drawing
outlines, but these outlines are not ranked and may span across objects.
XRAI works best on natural, higher-contrast images containing multiple objects.
Because it produces region-based attributions, XRAI generates a smoother,
more human-readable heatmap of the regions that are most salient for a given
image classification.
XRAI does not work well on the following types of image input:
Low-contrast images that are all one shade, such as X-rays.
Very tall or very wide images, such as panoramas.
Very large images, which may slow down overall runtime.
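Given these trade-offs, a small sketch of picking one method or the other when configuring explanations with the Vertex AI SDK for Python; the helper and its arguments are hypothetical.

```python
from google.cloud import aiplatform


def image_attribution_parameters(natural_image: bool, step_count: int = 50):
    """Hypothetical helper: prefer XRAI for natural, higher-contrast photos,
    and integrated gradients for low-contrast or non-natural images such as
    X-rays."""
    if natural_image:
        return aiplatform.explain.ExplanationParameters(
            xrai_attribution={"step_count": step_count}
        )
    return aiplatform.explain.ExplanationParameters(
        integrated_gradients_attribution={"step_count": step_count}
    )
```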
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-25 UTC."],[],[],null,["# Limitations of Vertex Explainable AI\n\nAs you consider the explanations returned from the service, you should keep in\nmind these high-level limitations. For an in-depth explanation, refer to the\n[AI Explanations Whitepaper](https://storage.googleapis.com/cloud-ai-whitepapers/AI%20Explainability%20Whitepaper.pdf).\n\nMeaning and scope of feature attributions\n-----------------------------------------\n\nConsider the following when analyzing feature attributions provided by\nVertex Explainable AI:\n\n- Each attribution only shows how much the feature affected the prediction for that particular example. A single attribution might not reflect the overall behavior of the model. To understand approximate model behavior on an entire dataset, aggregate attributions over the entire dataset.\n- The attributions depend entirely on the model and data used to train the model. They can only reveal the patterns the model found in the data, and can't detect any fundamental relationships in the data. The presence or absence of a strong attribution to a certain feature doesn't mean there is or is not a relationship between that feature and the target. The attribution merely shows that the model is or is not using the feature in its predictions.\n- Attributions alone can't tell if your model is fair, unbiased, or of sound quality. Carefully evaluate your training data and evaluation metrics in addition to the attributions.\n\nImproving feature attributions\n------------------------------\n\nWhen you are working with custom-trained models, you can configure specific\nparameters to improve your explanations. This section does not apply\nto AutoML models.\n\nThe following factors have the highest impact on feature attributions:\n\n- The attribution methods approximate the Shapley value. You can increase the\n precision of the approximation by:\n\n - Increasing the number of integral steps for the integrated gradients or XRAI methods.\n - Increasing the number of integral paths for the sampled Shapley method.\n\n As a result, the attributions could change dramatically.\n- The attributions only express how much the feature affected the change in\n prediction value, relative to the baseline value. Be sure to choose a\n meaningful baseline, relevant to the question you're asking of the model.\n Attribution values and their interpretation might change significantly as you\n switch baselines.\n\n- For integrated gradients and XRAI, using two baselines can improve your\n results. For example, you can specify baselines that represent an entirely\n black image and an entirely white image.\n\nRead more about [improving feature\nattributions](/vertex-ai/docs/explainable-ai/improving-explanations).\n\nLimitations for image data\n--------------------------\n\nThe two attribution methods that support image data are integrated gradients\nand XRAI.\n\nIntegrated gradients is a pixel-based attribution method that highlights\nimportant areas in the image regardless of contrast, making this method ideal\nfor non-natural images such as X-rays. 
However, the granular output can make it\ndifficult to assess the relative importance of areas. The default output\nhighlights areas in the image that have high positive attributions by drawing\noutlines, but these outlines are not ranked and may span across objects.\n\nXRAI works best on natural, higher-contrast images containing multiple objects.\nBecause this method produces region-based attributions, it produces a smoother,\nmore human-readable heatmap of regions that are most salient for a given\nimage classification.\n\nXRAI does *not* work well on the following types of image input:\n\n- Low-contrast images that are all one shade, such as X-rays.\n- Very tall or very wide images, such as panoramas.\n- Very large images, which may slow down overall runtime."]]