Prebuilt containers for inference and explanation
Vertex AI provides Docker container images that you run as prebuilt
containers for serving inferences and explanations from trained model
artifacts. These containers, which are organized by machine learning (ML)
framework and framework version, provide HTTP inference
servers that you can use to
serve inferences with minimal configuration. In many cases, using a prebuilt
container is simpler than creating your own custom container for
inference.
This document lists the prebuilt containers for inferences and explanations,
and it describes how to use them with model artifacts that you created using
Vertex AI's custom training
functionality or model artifacts that you
created outside of Vertex AI.
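For example, once a model served by one of these containers is deployed to an endpoint, you can request inferences through the Vertex AI SDK for Python. The following is a minimal sketch, assuming the google-cloud-aiplatform package and placeholder project and endpoint values:

```python
# A minimal sketch of requesting inferences from a model that is already
# deployed with a prebuilt serving container. YOUR_PROJECT_ID and ENDPOINT_ID
# are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="YOUR_PROJECT_ID", location="us-central1")

endpoint = aiplatform.Endpoint("ENDPOINT_ID")

# The prebuilt container's HTTP inference server accepts requests in the
# standard Vertex AI format: a JSON body with an "instances" list.
response = endpoint.predict(instances=[[1.0, 2.0, 3.0, 4.0]])
print(response.predictions)
```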
Support policy and schedule
Vertex AI supports each framework version based on a schedule to
minimize security vulnerabilities. Review the
Support policy schedule to understand the implications of
the end-of-support and end-of-availability dates.
Available container images
Each of the following container images is available in several
Artifact Registry repositories, which store data in various
locations. You can use any of
the URIs for an image when you perform custom training; each provides the same
container image. If you use the Google Cloud console to create a
Model resource,
the Google Cloud console selects the URI that best matches the location where
you are using Vertex AI to reduce latency.

Note: Using image names without the `latest` tag isn't supported. You must use an image with the `latest` tag.
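For illustration, the same prebuilt serving image is published under several regional repository hosts, so only the host portion of the URI differs. The framework version shown in this sketch is an assumption; check the tables below for currently supported versions:

```python
# Illustrative, equivalent URIs for one prebuilt TensorFlow CPU serving
# image; only the repository host differs. The framework version
# (tf2-cpu.2-12) is an assumption; consult the tables in this document for
# versions that are currently supported.
TF_SERVING_IMAGE_URIS = [
    "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",
    "europe-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",
    "asia-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",
]
```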
TensorFlow

Available TensorFlow container images
[Table: ML framework version | Supported accelerators (and CUDA version, if applicable)]

Optimized TensorFlow runtime

The following container images use the optimized TensorFlow runtime. For more information, see Use the optimized TensorFlow runtime.

Available optimized TensorFlow runtime container images
[Table of container image URIs]

PyTorch

Available PyTorch container images
[Table of container image URIs]

scikit-learn

Available scikit-learn container images
[Table of container image URIs]

XGBoost

Available XGBoost container images
[Table of container image URIs]

Use a prebuilt container

You can specify a prebuilt container for inference when you create a custom TrainingPipeline resource that uploads a Model or when you import model artifacts as a Model, as shown in the sketch below.
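A minimal sketch of the import path, uploading existing model artifacts as a Model that uses a prebuilt serving container through the Vertex AI SDK for Python; the project, bucket, and image version are placeholders:

```python
# A minimal sketch of importing model artifacts as a Model that uses a
# prebuilt serving container. All values are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="YOUR_PROJECT_ID", location="us-central1")

model = aiplatform.Model.upload(
    display_name="my-sklearn-model",
    # Cloud Storage directory that contains the exported model artifacts.
    artifact_uri="gs://YOUR_BUCKET/model/",
    # A prebuilt serving container URI from the tables above; the
    # scikit-learn version shown is illustrative.
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)
```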
To use one of these prebuilt containers, you must save your model as one or
more model artifacts that comply with the requirements of the prebuilt
container. For more information, see
Export model artifacts for inference.
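As one example, the prebuilt scikit-learn container expects the serialized model to be a file named model.joblib (or model.pkl) at the root of the artifact directory; a minimal sketch with a placeholder bucket path:

```python
# A minimal sketch of exporting a scikit-learn model in the artifact format
# the prebuilt scikit-learn container expects: a file named model.joblib
# (or model.pkl) at the root of the Cloud Storage artifact directory.
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)

joblib.dump(model, "model.joblib")
# Then copy the artifact to the bucket referenced by artifact_uri, e.g.:
#   gsutil cp model.joblib gs://YOUR_BUCKET/model/
```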
The following notebooks demonstrate how to use a prebuilt container to serve
inferences.
Notebooks

- Train and serve a TensorFlow model using a prebuilt container
- Serving PyTorch image models with prebuilt containers on Vertex AI: run the notebook in Colab (https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/prediction/pytorch_image_classification_with_prebuilt_serving_containers.ipynb) or view it on GitHub (https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/prediction/pytorch_image_classification_with_prebuilt_serving_containers.ipynb)

What's next

Learn how to deploy a model to an endpoint to serve inferences.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-25 UTC."],[],[],null,["# Prebuilt containers for inference and explanation\n\nVertex AI provides Docker container images that you run as *prebuilt\ncontainers* for serving inferences and [explanations](/vertex-ai/docs/explainable-ai/overview) from trained model\nartifacts. These containers, which are organized by machine learning (ML)\nframework and framework version, provide [HTTP inference\nservers](/vertex-ai/docs/predictions/custom-container-requirements#server) that you can use to\nserve inferences with minimal configuration. In many cases, using a prebuilt\ncontainer is simpler than [creating your own custom container for\ninference](/vertex-ai/docs/predictions/use-custom-container).\n\nThis document lists the prebuilt containers for inferences and explanations,\nand it describes how to use them with model artifacts that you [created using\nVertex AI's custom training\nfunctionality](/vertex-ai/docs/training/code-requirements) or model artifacts that you\ncreated outside of Vertex AI.\n\nSupport policy and schedule\n---------------------------\n\nVertex AI supports each framework version based on a schedule to\nminimize security vulnerabilities. Review the\n[Support policy schedule](/vertex-ai/docs/framework-support-policy#support_policy_schedule) to understand the implications of\nthe end-of-support and end-of-availability dates.\n\nAvailable container images\n--------------------------\n\nEach of the following container images is available in several\nArtifact Registry repositories, which [store data in various\nlocations](/artifact-registry/docs/repo-locations). You can use any of\nthe URIs for an image when you perform custom training; each provides the same\ncontainer image. If you use the Google Cloud console to create a\n[`Model`](/vertex-ai/docs/reference/rest/v1/projects.locations.models) resource,\nthe Google Cloud console selects the URI that best matches the [location where\nyou are using Vertex AI](/vertex-ai/docs/general/locations) in order to reduce\nlatency.\n| **Note:** Using image names without the `latest` tag isn't supported. You must use an image with the `latest` tag.\n\n### TensorFlow\n\n#### Available TensorFlow container images (Click to expand)\n\n### Optimized TensorFlow runtime\n\nThe following container images use the optimized TensorFlow runtime. For\nmore information, see [Use the optimized TensorFlow runtime](/vertex-ai/docs/predictions/optimized-tensorflow-runtime). 
\n\n#### Available optimized TensorFlow runtime container images (Click to expand)\n\n\u003cbr /\u003e\n\n### PyTorch\n\n#### Available PyTorch container images (Click to expand)\n\n### scikit-learn\n\n#### Available scikit-learn container images (Click to expand)\n\n### XGBoost\n\n#### Available XGBoost container images (Click to expand)\n\nUse a prebuilt container\n------------------------\n\nYou can specify a prebuilt container for inference when you\n[create a custom `TrainingPipeline` resource that uploads a `Model`](/vertex-ai/docs/training/create-training-pipeline#custom-job-model-upload) or when\nyou [import model artifacts as a `Model`](/vertex-ai/docs/model-registry/import-model).\n\nTo use one of these prebuilt containers, you must save your model as one or\nmore *model artifacts* that comply with the requirements of the prebuilt\ncontainer. For more information, see\n[Export model artifacts for inference](/vertex-ai/docs/training/exporting-model-artifacts).\n\nThe following notebooks demonstrate how to use a prebuilt container to serve\ninferences.\n\nNotebooks\n---------\n\n| To learn more,\n| run the \"Serving PyTorch image models with prebuilt containers on Vertex AI\" notebook in one of the following\n| environments:\n|\n| [Open in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/prediction/pytorch_image_classification_with_prebuilt_serving_containers.ipynb)\n|\n|\n| \\|\n|\n| [Open in Colab Enterprise](https://console.cloud.google.com/vertex-ai/colab/import/https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fprediction%2Fpytorch_image_classification_with_prebuilt_serving_containers.ipynb)\n|\n|\n| \\|\n|\n| [Open\n| in Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmain%2Fnotebooks%2Fofficial%2Fprediction%2Fpytorch_image_classification_with_prebuilt_serving_containers.ipynb)\n|\n|\n| \\|\n|\n| [View on GitHub](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/prediction/pytorch_image_classification_with_prebuilt_serving_containers.ipynb)\n\nWhat's next\n-----------\n\n- Learn how to [deploy a model to an endpoint to serve\n inferences](/vertex-ai/docs/predictions/deploy-model-api)."]]