Tabular Workflow for End-to-End AutoML is a complete AutoML
pipeline for classification and regression tasks. It is similar to the
AutoML API,
but allows you to choose what to control and what to automate. Instead of having
controls for the whole pipeline, you have controls for every step in the
pipeline. These pipeline controls include:
Data splitting
Feature engineering
Architecture search
Model training
Model ensembling
Model distillation
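When you run the workflow, these controls surface as pipeline parameters. The following is a minimal sketch of submitting a run with the Vertex AI SDK for Python, assuming you have a compiled pipeline template; the template path and every key in parameter_values are illustrative placeholders rather than the exact parameters of this workflow.

```python
# Minimal sketch: submit the Tabular Workflow as a Vertex AI pipeline job.
# The template path and parameter names are illustrative placeholders only.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-bucket",
)

job = aiplatform.PipelineJob(
    display_name="e2e-automl-tabular",
    template_path="gs://my-bucket/templates/automl_tabular_pipeline.json",  # placeholder
    pipeline_root="gs://my-bucket/pipeline_root",
    parameter_values={
        # Hypothetical names standing in for the real per-step controls:
        "target_column": "label",      # model training
        "train_fraction": 0.8,         # data splitting
        "eval_fraction": 0.1,          # data splitting
        "test_fraction": 0.1,          # data splitting
        "run_distillation": True,      # model distillation
        "run_evaluation": True,        # evaluation on the test set
    },
)

job.run()  # or job.submit() to return without waiting for completion
```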
Benefits
The following lists some of the benefits of Tabular Workflow for End-to-End AutoML:
Supports large datasets that are multiple TB in size and have up to 1000 columns.
Allows you to improve stability and lower training time by limiting the search space of architecture types or skipping architecture search.
Allows you to improve training speed by manually selecting the hardware used for training and architecture search.
Allows you to reduce model size and improve latency with distillation or by changing the ensemble size.
Each AutoML component can be inspected in a powerful pipelines graph interface that lets you see the transformed data tables, evaluated model architectures, and many other details.
Each AutoML component offers extended flexibility and transparency: you can customize parameters and hardware, and view process status and logs for every step (see the sketch after this list).
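In addition to the Console graph, you can check the status of each component programmatically. The following is a minimal sketch using the Vertex AI SDK for Python; the pipeline run resource name is a placeholder for a real run.

```python
# Minimal sketch: list the state of each step (task) in a pipeline run,
# for example feature-transform-engine or automl-tabular-ensemble.
# The resource name below is a placeholder.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.PipelineJob.get(
    "projects/my-project/locations/us-central1/pipelineJobs/my-run-id"
)

for task in job.task_details:
    print(task.task_name, task.state)
```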
End-to-End AutoML on Vertex AI Pipelines
Tabular Workflow for End-to-End AutoML
is a managed instance of Vertex AI Pipelines.
Vertex AI Pipelines is a serverless
service that runs Kubeflow pipelines. You can use pipelines to automate
and monitor your machine learning and data preparation tasks. Each step in a
pipeline performs part of the pipeline's workflow. For example,
a pipeline can include steps to split data, transform data types, and train a model. Because steps
are instances of pipeline components, each step has inputs, outputs, and a
container image. Step inputs can be set from the pipeline's inputs, or they can
depend on the output of other steps within the pipeline. These dependencies
define the pipeline's workflow as a directed acyclic graph.
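The following toy example sketches this idea with the Kubeflow Pipelines (KFP v2) SDK. The two components are hypothetical stand-ins rather than the actual AutoML components; they only show how one step's input can consume another step's output, which is what defines the DAG.

```python
# Toy KFP v2 pipeline: train_model consumes split_data's output, so the
# two steps form a two-node directed acyclic graph.
from kfp import dsl


@dsl.component
def split_data(source: str) -> str:
    # Pretend to split the data and return the path of the training split.
    return f"{source}/train"


@dsl.component
def train_model(train_split: str) -> str:
    # Pretend to train a model on the training split.
    return f"model trained on {train_split}"


@dsl.pipeline(name="toy-tabular-pipeline")
def pipeline(source: str = "gs://my-bucket/data"):
    split_task = split_data(source=source)
    # Passing split_task.output makes train_model depend on split_data.
    train_model(train_split=split_task.output)
```

Compiling a pipeline like this (for example, with the KFP compiler) produces the definition that Vertex AI Pipelines executes.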
Overview of pipeline and components
The following diagram shows the modeling pipeline for Tabular Workflow for End-to-End AutoML:
The pipeline components are:
feature-transform-engine: Performs feature engineering. See
Feature Transform Engine for details.
split-materialized-data:
Splits the materialized data into a training set, an evaluation set, and a test set.
Input:
Materialized data materialized_data.
Output:
Materialized training split materialized_train_split.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-25 UTC."],[],[],null,["# Tabular Workflow for End-to-End AutoML\n\nThis document provides an overview of the End-to-End AutoML\n[pipeline and components](#components). To learn how to train a model with End-to-End AutoML,\nsee [Train a model with End-to-End AutoML](/vertex-ai/docs/tabular-data/tabular-workflows/e2e-automl-train).\n\n\nTabular Workflow for End-to-End AutoML is a complete AutoML\npipeline for classification and regression tasks. It is similar to the\n[AutoML API](/vertex-ai/docs/tabular-data/classification-regression/overview),\nbut allows you to choose what to control and what to automate. Instead of having\ncontrols for the *whole* pipeline, you have controls for *every step* in the\npipeline. These pipeline controls include:\n\n- Data splitting\n- Feature engineering\n- Architecture search\n- Model training\n- Model ensembling\n- Model distillation\n\n\u003cbr /\u003e\n\nBenefits\n--------\n\nThe following lists some of the benefits of\nTabular Workflow for End-to-End AutoML\n:\n\n\n- Supports **large datasets** that are multiple TB in size and have up to 1000 columns.\n- Allows you to **improve stability and lower training time** by limiting the search space of architecture types or skipping architecture search.\n- Allows you to **improve training speed** by manually selecting the hardware used for training and architecture search.\n- Allows you to **reduce model size and improve latency** with distillation or by changing the ensemble size.\n- Each AutoML component can be inspected in a powerful pipelines graph interface that lets you see the transformed data tables, evaluated model architectures, and many more details.\n- Each AutoML component gets extended flexibility and transparency, such as being able to customize parameters, hardware, view process status, logs, and more.\n\n\u003cbr /\u003e\n\nEnd-to-End AutoML on Vertex AI Pipelines\n----------------------------------------\n\n\nTabular Workflow for End-to-End AutoML\nis a managed instance of Vertex AI Pipelines.\n\n\n[Vertex AI Pipelines](/vertex-ai/docs/pipelines/introduction) is a serverless\nservice that runs Kubeflow pipelines. You can use pipelines to automate\nand monitor your machine learning and data preparation tasks. Each step in a\npipeline performs part of the pipeline's workflow. For example,\na pipeline can include steps to split data, transform data types, and train a model. Since steps\nare instances of pipeline components, steps have inputs, outputs, and a\ncontainer image. Step inputs can be set from the pipeline's inputs or they can\ndepend on the output of other steps within this pipeline. These dependencies\ndefine the pipeline's workflow as a directed acyclic graph.\n\nOverview of pipeline and components\n-----------------------------------\n\nThe following diagram shows the modeling pipeline for\nTabular Workflow for End-to-End AutoML\n:\n\n\u003cbr /\u003e\n\nThe pipeline components are:\n\n1. **feature-transform-engine** : Performs feature engineering. 
Materialized evaluation split materialized_eval_split.
Materialized test set materialized_test_split.
merge-materialized-splits: Merges the materialized evaluation split and the materialized train split.
automl-tabular-stage-1-tuner: Performs model architecture search and tunes hyperparameters.
An architecture is defined by a set of hyperparameters.
Hyperparameters include the model type and the model parameters.
Model types considered are neural networks and boosted trees.
The system trains a model for each architecture considered.
automl-tabular-cv-trainer: Cross-validates architectures by training models on different folds of the input data.
The architectures considered are those that give the best results in the previous step.
The system selects approximately the ten best architectures. The precise number is defined by the training budget.
automl-tabular-ensemble: Ensembles the best architectures to produce a final model.
The following diagram illustrates K-fold cross-validation with bagging:
condition-is-distill: Optional. Creates a smaller version of the ensemble model.
A smaller model reduces latency and cost for inference.
automl-tabular-infra-validator: Validates that the trained model is a valid model.
model-upload: Uploads the model.
condition-is-evaluation: Optional. Uses the test set to calculate evaluation metrics.
What's next
Train a model using End-to-End AutoML.