Dataflow ML supports batch and streaming pipelines.
The RunInference API is supported in Apache Beam 2.40.0 and later versions.
The MLTransform API is supported in Apache Beam 2.53.0 and later versions.
Model handlers are available for PyTorch, scikit-learn, TensorFlow,
ONNX, and TensorRT.
For unsupported frameworks, you can use a custom model handler.
Data preparation for training
Use the MLTransform feature to prepare your data for training ML models. For
more information, see
Preprocess data with MLTransform.
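For example, here is a minimal sketch of an MLTransform preprocessing step, assuming Apache Beam 2.53.0 or later with the tensorflow-transform dependency installed; the artifact location, column names, and sample values are placeholders:

import apache_beam as beam
from apache_beam.ml.transforms.base import MLTransform
from apache_beam.ml.transforms.tft import ComputeAndApplyVocabulary, ScaleTo01

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | beam.Create([{'color': 'red', 'price': 10.0},
                       {'color': 'blue', 'price': 40.0}])
        # Artifacts such as vocabularies and min/max statistics are written
        # here so the same transforms can be reapplied at inference time.
        | MLTransform(write_artifact_location='gs://my-bucket/artifacts')
            .with_transform(ComputeAndApplyVocabulary(columns=['color']))
            .with_transform(ScaleTo01(columns=['price']))
        | beam.Map(print))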
Prediction and inference pipelines
Dataflow ML combines the power of Dataflow with Apache Beam's RunInference API.
With the RunInference API, you define the model's characteristics and properties
and pass that configuration to the RunInference transform. This feature
lets you run the model within your Dataflow pipelines without needing to know
the model's implementation details. You can choose the framework that best
suits your data, such as TensorFlow or PyTorch.
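For example, the following is a minimal sketch of this pattern that uses the scikit-learn model handler; the model path and the input values are placeholders:

import numpy
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import (
    ModelFileType, SklearnModelHandlerNumpy)

# The model handler describes the model: where the pickled model lives and
# how to load it. RunInference takes care of loading and batching.
model_handler = SklearnModelHandlerNumpy(
    model_uri='gs://my-bucket/model.pkl',
    model_file_type=ModelFileType.PICKLE)

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | beam.Create([numpy.array([1.0, 2.0]), numpy.array([3.0, 4.0])])
        # Each output element is a PredictionResult that pairs the input
        # example with the model's inference.
        | RunInference(model_handler=model_handler)
        | beam.Map(print))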
Run multiple models in a pipeline
Use the RunInference transform to add multiple inference models to
your Dataflow pipeline. For more information, including code details,
see Multi-model pipelines
in the Apache Beam documentation.
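As a brief illustration, the following sketch branches one input PCollection into two separately configured RunInference transforms; the model paths are placeholders, and the Apache Beam documentation also covers patterns such as per-key model handlers and chained inference:

import numpy
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import (
    ModelFileType, SklearnModelHandlerNumpy)

handler_a = SklearnModelHandlerNumpy(
    model_uri='gs://my-bucket/model_a.pkl',
    model_file_type=ModelFileType.PICKLE)
handler_b = SklearnModelHandlerNumpy(
    model_uri='gs://my-bucket/model_b.pkl',
    model_file_type=ModelFileType.PICKLE)

with beam.Pipeline() as pipeline:
    inputs = pipeline | beam.Create([numpy.array([1.0, 2.0])])
    # The same elements flow into both models; the two result PCollections
    # can be post-processed or joined downstream.
    predictions_a = inputs | 'RunInferenceA' >> RunInference(handler_a)
    predictions_b = inputs | 'RunInferenceB' >> RunInference(handler_b)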
Build a cross-language pipeline
To use RunInference with a Java pipeline,
create a cross-language Python transform. The pipeline calls the
transform, which does the preprocessing, postprocessing, and inference.
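As a sketch of the Python side of such a pipeline, the following composite transform bundles preprocessing, inference, and postprocessing into one transform that a Java pipeline can call across languages; the model path, the preprocessing logic, and the postprocessing logic are placeholders, and the cross-language registration and expansion-service details are described in the Apache Beam documentation:

import numpy
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import (
    ModelFileType, SklearnModelHandlerNumpy)


class InferenceTransform(beam.PTransform):
    """Preprocessing, RunInference, and postprocessing as one transform."""

    def __init__(self, model_uri):
        self._model_handler = SklearnModelHandlerNumpy(
            model_uri=model_uri, model_file_type=ModelFileType.PICKLE)

    def expand(self, pcoll):
        return (
            pcoll
            # Preprocessing: convert raw rows into the arrays the model expects.
            | 'Preprocess' >> beam.Map(lambda row: numpy.array(row, dtype=float))
            | 'Inference' >> RunInference(self._model_handler)
            # Postprocessing: keep only the predicted values.
            | 'Postprocess' >> beam.Map(lambda result: result.inference))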
Use GPUs with Dataflow
For batch or streaming pipelines that require the use of accelerators, you can
run Dataflow pipelines on NVIDIA GPU devices. For more information, see
Run a Dataflow pipeline with GPUs.
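For example, the following sketch shows one way to request GPU workers when launching a pipeline from Python; the project, bucket, accelerator type, and container image are placeholders, and the exact container requirements depend on the libraries your pipeline needs:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-project',
    '--region=us-central1',
    '--temp_location=gs://my-bucket/tmp',
    # Request one NVIDIA T4 GPU per worker and install the driver.
    '--dataflow_service_options='
    'worker_accelerator=type:nvidia-tesla-t4;count:1;install-nvidia-driver',
    # GPU pipelines typically also need a custom container image with the
    # GPU-enabled libraries preinstalled.
    '--sdk_container_image=us-docker.pkg.dev/my-project/my-repo/beam-gpu:latest',
])

with beam.Pipeline(options=options) as pipeline:
    _ = pipeline | beam.Create([1, 2, 3]) | beam.Map(print)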
Troubleshoot Dataflow ML
This section provides troubleshooting strategies and links that you might find
helpful when using Dataflow ML.
Stack expects each tensor to be equal size
If you provide images of different sizes or word embeddings of different lengths
when using the RunInference API, the following error might occur:
File "/beam/sdks/python/apache_beam/ml/inference/pytorch_inference.py", line 232, in run_inference batched_tensors = torch.stack(key_to_tensor_list[key]) RuntimeError: stack expects each tensor to be equal size, but got [12] at entry 0 and [10] at entry 1 [while running 'PyTorchRunInference/ParDo(_RunInferenceDoFn)']
This error occurs because the RunInference API can't batch tensor elements of
different sizes. For workarounds, see
Unable to batch tensor elements
in the Apache Beam documentation.
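For example, one workaround is to disable batching so that tensors of different sizes are never stacked together; the following minimal sketch assumes a PyTorch tensor model handler and trades some throughput for compatibility:

from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor


class UnbatchedModelHandler(PytorchModelHandlerTensor):
    # Returning max_batch_size=1 makes RunInference pass one element at a
    # time, so torch.stack never sees tensors with mismatched shapes.
    def batch_elements_kwargs(self):
        return {'max_batch_size': 1}

# Use UnbatchedModelHandler in place of PytorchModelHandlerTensor, with the
# same constructor arguments (state_dict_path, model_class, model_params).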
Avoid out-of-memory errors with large models
When you load a medium or large ML model, your machine might run out of memory.
Dataflow provides tools to help avoid out-of-memory (OOM) errors
when loading ML models. Use the following table to determine the appropriate
approach for your scenario.
Scenario: The models are small enough to fit in memory.
Solution: Use the RunInference transform without any additional
configurations. The RunInference transform shares the models across
threads. If you can fit one model per CPU core on your machine, then
your pipeline can use the default configuration.

Scenario: Multiple differently-trained models are performing the same task.

For more information about memory management with Dataflow, see
Troubleshoot Dataflow out of memory errors.
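For models that are too large to keep one copy per process, recent Apache Beam model handlers accept a large_model argument that shares a single copy of the model across the processes on each worker; the following minimal sketch assumes a PyTorch model, and the state dict path and model class are placeholders:

from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor
from my_models import MyModel  # placeholder: your torch.nn.Module class

model_handler = PytorchModelHandlerTensor(
    state_dict_path='gs://my-bucket/model.pth',
    model_class=MyModel,
    model_params={},
    # Load one shared copy of the model per worker instead of one per process.
    large_model=True)

inference = RunInference(model_handler=model_handler)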
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-26 UTC."],[[["\u003cp\u003eDataflow ML facilitates both prediction and inference pipelines, as well as data preparation for training ML models.\u003c/p\u003e\n"],["\u003cp\u003eDataflow ML supports both batch and streaming data pipelines, utilizing the \u003ccode\u003eRunInference\u003c/code\u003e API (from Apache Beam 2.40.0) and \u003ccode\u003eMLTransform\u003c/code\u003e API (from Apache Beam 2.53.0).\u003c/p\u003e\n"],["\u003cp\u003eThe system is compatible with model handlers for popular frameworks like PyTorch, scikit-learn, TensorFlow, ONNX, and TensorRT, with options for custom handlers for other frameworks.\u003c/p\u003e\n"],["\u003cp\u003eDataflow ML enables the use of multiple inference models within a single pipeline via the \u003ccode\u003eRunInference\u003c/code\u003e transform and supports the use of GPUs for pipelines that need them.\u003c/p\u003e\n"],["\u003cp\u003eDataflow ML also provides troubleshooting guidance for common issues, including tensor size mismatch errors and out-of-memory errors when dealing with large models.\u003c/p\u003e\n"]]],[],null,["# About Dataflow ML\n\nYou can use Dataflow ML's scale data processing abilities for\n[prediction and inference pipelines](#prediction) and for\n[data preparation for training](#data-prep).\n\n**Figure 1.** The complete Dataflow ML workflow.\n\nRequirements and limitations\n----------------------------\n\n- Dataflow ML supports batch and streaming pipelines.\n- The `RunInference` API is supported in Apache Beam 2.40.0 and later versions.\n- The `MLTransform` API is supported in Apache Beam 2.53.0 and later versions.\n- Model handlers are available for PyTorch, scikit-learn, TensorFlow, ONNX, and TensorRT. For unsupported frameworks, you can use a custom model handler.\n\nData preparation for training\n-----------------------------\n\n- Use the `MLTransform` feature to prepare your data for training ML models. For\n more information, see\n [Preprocess data with `MLTransform`](/dataflow/docs/machine-learning/ml-preprocess-data).\n\n- Use Dataflow with ML-OPS frameworks, such as\n [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/v1/introduction/)\n (KFP) or [TensorFlow Extended](https://www.tensorflow.org/tfx) (TFX).\n To learn more, see [Dataflow ML in ML workflows](/dataflow/docs/machine-learning/ml-data).\n\nPrediction and inference pipelines\n----------------------------------\n\nDataflow ML combines the power of Dataflow with\nApache Beam's\n[`RunInference` API](https://beam.apache.org/documentation/ml/about-ml/).\nWith the `RunInference` API, you define the model's characteristics and properties\nand pass that configuration to the `RunInference` transform. This feature\nallows users to run the model within their\nDataflow pipelines without needing to know\nthe model's implementation details. You can choose the framework that best\nsuits your data, such as TensorFlow and PyTorch.\n\nRun multiple models in a pipeline\n---------------------------------\n\nUse the `RunInference` transform to add multiple inference models to\nyour Dataflow pipeline. 
For more information, including code details,\nsee [Multi-model pipelines](https://beam.apache.org/documentation/ml/about-ml/#multi-model-pipelines)\nin the Apache Beam documentation.\n\nBuild a cross-language pipeline\n-------------------------------\n\nTo use RunInference with a Java pipeline,\n[create a cross-language Python transform](https://beam.apache.org/documentation/programming-guide/#1312-creating-cross-language-python-transforms). The pipeline calls the\ntransform, which does the preprocessing, postprocessing, and inference.\n\nFor detailed instructions and a sample pipeline, see\n[Using RunInference from the Java SDK](https://beam.apache.org/documentation/ml/multi-language-inference/).\n\nUse GPUs with Dataflow\n----------------------\n\nFor batch or streaming pipelines that require the use of accelerators, you can\nrun Dataflow pipelines on NVIDIA GPU devices. For more information, see\n[Run a Dataflow pipeline with GPUs](/dataflow/docs/gpu/use-gpus).\n\nTroubleshoot Dataflow ML\n------------------------\n\nThis section provides troubleshooting strategies and links that you might find\nhelpful when using Dataflow ML.\n\n### Stack expects each tensor to be equal size\n\nIf you provide images of different sizes or word embeddings of different lengths\nwhen using the `RunInference` API, the following error might occur: \n\n File \"/beam/sdks/python/apache_beam/ml/inference/pytorch_inference.py\", line 232, in run_inference batched_tensors = torch.stack(key_to_tensor_list[key]) RuntimeError: stack expects each tensor to be equal size, but got [12] at entry 0 and [10] at entry 1 [while running 'PyTorchRunInference/ParDo(_RunInferenceDoFn)']\n\nThis error occurs because the `RunInference` API can't batch tensor elements of\ndifferent sizes. For workarounds, see\n[Unable to batch tensor elements](https://beam.apache.org/documentation/ml/about-ml/#unable-to-batch-tensor-elements)\nin the Apache Beam documentation.\n\n### Avoid out-of-memory errors with large models\n\nWhen you load a medium or large ML model, your machine might run out of memory.\nDataflow provides tools to help avoid out-of-memory (OOM) errors\nwhen loading ML models. Use the following table to determine the appropriate\napproach for your scenario.\n\nFor more information about memory management with Dataflow, see\n[Troubleshoot Dataflow out of memory errors](/dataflow/docs/guides/troubleshoot-oom).\n\nWhat's next\n-----------\n\n- Explore the [Dataflow ML notebooks](https://github.com/apache/beam/tree/master/examples/notebooks/beam-ml) in GitHub.\n- Get in-depth information about using ML with Apache Beam in Apache Beam's [AI/ML pipelines](https://beam.apache.org/documentation/ml/overview/) documentation.\n- Learn more about the [`RunInference` API](https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.html#apache_beam.ml.inference.RunInference).\n- Learn about the [metrics](https://beam.apache.org/documentation/ml/runinference-metrics/) that you can use to monitor your `RunInference` transform."]]