# Get batch text embeddings predictions
Batch predictions are a good option for large volumes of non-latency-sensitive embeddings requests. Key features of batch predictions include:

- **Large volume:** Process a large number of requests in a single batch job instead of one at a time.
- **Asynchronous processing:** Similar to batch prediction for [tabular data in Vertex AI](/vertex-ai/docs/tabular-data/classification-regression/get-batch-predictions), you specify an output location for your results, and the job populates it asynchronously.
Text embeddings models that support batch predictions
------------------------------------------------------

All stable versions of text embedding models support batch predictions. Stable versions are versions that are no longer in preview and are fully supported for production environments. To see the full list of supported embedding models, see [Embedding model and versions](/vertex-ai/generative-ai/docs/learn/model-versioning#embedding_models_and_versions).
Choose an input source
----------------------
Before you prepare your inputs, decide whether to use JSONL files in Cloud Storage or a BigQuery table. The following table provides a comparison to help you choose the best option for your use case.
| Input source | Description | Use case |
|---|---|---|
| JSONL file in Cloud Storage | A text file where each line is a separate JSON object that contains a prompt. | Use this option when your source data is in files or if you prefer a file-based data pipeline. |
| BigQuery table | A structured table in BigQuery with a column that contains the prompts. | Use this option when your prompts are stored in BigQuery or are part of a larger structured dataset. |
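For example, a Cloud Storage input is referenced by a URI such as `gs://my-bucket/embeddings/input.jsonl`, while a BigQuery input uses the `bq://` form, such as `bq://my-project.my_dataset.prompts` (the bucket, project, and dataset names here are hypothetical).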
Prepare your inputs
-------------------
The input for batch requests is a list of prompts stored in either a BigQuery table or a [JSON Lines (JSONL)](https://jsonlines.org/) file in Cloud Storage. Each batch request can include up to 30,000 prompts.
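Because of the 30,000-prompt ceiling, larger workloads must be split across multiple batch jobs. The following is a minimal sketch, assuming a hypothetical `prompts` list:

```python
# Split a prompt list into chunks that fit the 30,000-prompt batch limit.
MAX_PROMPTS_PER_JOB = 30_000

def chunk_prompts(prompts: list[str], size: int = MAX_PROMPTS_PER_JOB):
    """Yield successive chunks of at most `size` prompts."""
    for start in range(0, len(prompts), size):
        yield prompts[start:start + size]

# Each chunk then becomes the input of its own batch prediction job.
```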
### JSONL format

#### Input example

Each line in the input file must be a valid JSON object with a `content` field that contains the prompt.

```
{"content":"Give a short description of a machine learning model:"}
{"content":"Best recipe for banana bread:"}
```
#### Output example

The output is written to a JSONL file where each line contains the instance, the corresponding prediction, and a status.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-27 UTC."],[],[],null,["# Get batch text embeddings predictions\n\nGetting responses in a batch is a way to efficiently send large numbers of non-latency\nsensitive embeddings requests. Different from getting online responses,\nwhere you are limited to one input request at a time, you can send a large number\nof LLM requests in a single batch request. Similar to how batch prediction is done\nfor [tabular data in Vertex AI](/vertex-ai/docs/tabular-data/classification-regression/get-batch-predictions),\nyou determine your output location, add your input, and your responses asynchronously\npopulate into your output location.\n\nText embeddings models that support batch predictions\n-----------------------------------------------------\n\nAll stable versions of text embedding models support batch predictions. Stable\nversions are versions which are no longer in preview and are fully supported for\nproduction environments. To see the full list of supported embedding models, see\n[Embedding model and versions](/vertex-ai/generative-ai/docs/learn/model-versioning#embedding_models_and_versions).\n\nPrepare your inputs\n-------------------\n\nThe input for batch requests are a list of prompts that can either be stored in\na BigQuery table or as a\n[JSON Lines (JSONL)](https://jsonlines.org/) file in\nCloud Storage. Each request can include up to 30,000 prompts.\n\n### JSONL example\n\nThis section shows examples of how to format JSONL input and output.\n\n#### JSONL input example\n\n {\"content\":\"Give a short description of a machine learning model:\"}\n {\"content\":\"Best recipe for banana bread:\"}\n\n#### JSONL output example\n\n {\"instance\":{\"content\":\"Give...\"},\"predictions\": [{\"embeddings\":{\"statistics\":{\"token_count\":8,\"truncated\":false},\"values\":[0.2,....]}}],\"status\":\"\"}\n {\"instance\":{\"content\":\"Best...\"},\"predictions\": [{\"embeddings\":{\"statistics\":{\"token_count\":3,\"truncated\":false},\"values\":[0.1,....]}}],\"status\":\"\"}\n\n### BigQuery example\n\nThis section shows examples of how to format BigQuery input and output.\n\n#### BigQuery input example\n\nThis example shows a single column BigQuery table.\n\n#### BigQuery output example\n\nRequest a batch response\n------------------------\n\nDepending on the number of input items that you've submitted, a\nbatch generation task can take some time to complete. \n\n### REST\n\nTo test a text prompt by using the Vertex AI API, send a POST request to the\npublisher model endpoint.\n\n\nBefore using any of the request data,\nmake the following replacements:\n\n- \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e: The ID of your Google Cloud project.\n- \u003cvar translate=\"no\"\u003eBP_JOB_NAME\u003c/var\u003e: The job name.\n- \u003cvar translate=\"no\"\u003eINPUT_URI\u003c/var\u003e: The input source URI. 
### BigQuery format

#### Input example

The input is a BigQuery table with a single column that contains the prompts, one per row.

#### Output example

The output table contains the input content along with the corresponding prediction and a status for each row, mirroring the JSONL output format.

Request a batch response
------------------------

Depending on the number of input items that you've submitted, a batch generation task can take some time to complete.

### REST

To test a text prompt by using the Vertex AI API, send a POST request to the publisher model endpoint.

Before using any of the request data, make the following replacements:

- `PROJECT_ID`: The ID of your Google Cloud project.
- `BP_JOB_NAME`: The job name.
- `INPUT_URI`: The input source URI. This is either a BigQuery table URI or a JSONL file URI in Cloud Storage.
- `OUTPUT_URI`: The output target URI.

HTTP method and URL:

```
POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs
```

Request JSON body:

```
{
  "name": "BP_JOB_NAME",
  "displayName": "BP_JOB_NAME",
  "model": "publishers/google/models/textembedding-gecko",
  "inputConfig": {
    "instancesFormat": "bigquery",
    "bigquerySource": {
      "inputUri": "INPUT_URI"
    }
  },
  "outputConfig": {
    "predictionsFormat": "bigquery",
    "bigqueryDestination": {
      "outputUri": "OUTPUT_URI"
    }
  }
}
```
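This request body reads from and writes to BigQuery. If your prompts are in a JSONL file in Cloud Storage, the two config blocks use the `jsonl` format and Cloud Storage URIs instead. The following is a sketch of just those blocks, with hypothetical bucket paths (the surrounding fields are unchanged):

```json
{
  "inputConfig": {
    "instancesFormat": "jsonl",
    "gcsSource": {
      "uris": ["gs://my-bucket/embeddings/input.jsonl"]
    }
  },
  "outputConfig": {
    "predictionsFormat": "jsonl",
    "gcsDestination": {
      "outputUriPrefix": "gs://my-bucket/embeddings/output/"
    }
  }
}
```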
To send your request, choose one of these options:

#### curl

> **Note:** The following command assumes that you have logged in to the `gcloud` CLI with your user account by running [`gcloud init`](/sdk/gcloud/reference/init) or [`gcloud auth login`](/sdk/gcloud/reference/auth/login), or by using [Cloud Shell](/shell/docs), which automatically logs you into the `gcloud` CLI. You can check the currently active account by running [`gcloud auth list`](/sdk/gcloud/reference/auth/list).

Save the request body in a file named `request.json`, and execute the following command:

```
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs"
```

#### PowerShell

> **Note:** The following command assumes that you have logged in to the `gcloud` CLI with your user account by running [`gcloud init`](/sdk/gcloud/reference/init) or [`gcloud auth login`](/sdk/gcloud/reference/auth/login). You can check the currently active account by running [`gcloud auth list`](/sdk/gcloud/reference/auth/list).

Save the request body in a file named `request.json`, and execute the following command:

```
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs" | Select-Object -Expand Content
```

You should receive a JSON response similar to the following:

```
{
  "name": "projects/123456789012/locations/us-central1/batchPredictionJobs/1234567890123456789",
  "displayName": "BP_sample_publisher_BQ_20230712_134650",
  "model": "projects/{PROJECT_ID}/locations/us-central1/models/textembedding-gecko",
  "inputConfig": {
    "instancesFormat": "bigquery",
    "bigquerySource": {
      "inputUri": "bq://project_name.dataset_name.text_input"
    }
  },
  "modelParameters": {},
  "outputConfig": {
    "predictionsFormat": "bigquery",
    "bigqueryDestination": {
      "outputUri": "bq://project_name.llm_dataset.embedding_out_BP_sample_publisher_BQ_20230712_134650"
    }
  },
  "state": "JOB_STATE_PENDING",
  "createTime": "2023-07-12T20:46:52.148717Z",
  "updateTime": "2023-07-12T20:46:52.148717Z",
  "labels": {
    "owner": "sample_owner",
    "product": "llm"
  },
  "modelVersionId": "1",
  "modelMonitoringStatus": {}
}
```

The response includes a unique identifier for the batch job. You can poll for the status of the batch job using the `BATCH_JOB_ID` until the job `state` is `JOB_STATE_SUCCEEDED`. For example:

```bash
curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/batchPredictionJobs/BATCH_JOB_ID"
```

> **Note:** You can run only one batch response job at a time. Custom service accounts, live progress, CMEK, and VPC Service Controls (VPC-SC) aren't supported at this time.
### Python

#### Install

```
pip install --upgrade google-genai
```

To learn more, see the [SDK reference documentation](https://googleapis.github.io/python-genai/).

Set environment variables to use the Gen AI SDK with Vertex AI:

```bash
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=us-central1
export GOOGLE_GENAI_USE_VERTEXAI=True
```

```python
import time

from google import genai
from google.genai.types import CreateBatchJobConfig, JobState, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
# TODO(developer): Update and un-comment below line
# output_uri = "gs://your-bucket/your-prefix"

# See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.batches.Batches.create
job = client.batches.create(
    model="text-embedding-005",
    # Source link: https://storage.cloud.google.com/cloud-samples-data/generative-ai/embeddings/embeddings_input.jsonl
    src="gs://cloud-samples-data/generative-ai/embeddings/embeddings_input.jsonl",
    config=CreateBatchJobConfig(dest=output_uri),
)
print(f"Job name: {job.name}")
print(f"Job state: {job.state}")
# Example response:
# Job name: projects/%PROJECT_ID%/locations/us-central1/batchPredictionJobs/9876453210000000000
# Job state: JOB_STATE_PENDING

# See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.types.BatchJob
completed_states = {
    JobState.JOB_STATE_SUCCEEDED,
    JobState.JOB_STATE_FAILED,
    JobState.JOB_STATE_CANCELLED,
    JobState.JOB_STATE_PAUSED,
}

while job.state not in completed_states:
    time.sleep(30)
    job = client.batches.get(name=job.name)
    print(f"Job state: {job.state}")
    if job.state == JobState.JOB_STATE_FAILED:
        print(f"Error: {job.error}")
        break

# Example response:
# Job state: JOB_STATE_PENDING
# Job state: JOB_STATE_RUNNING
# Job state: JOB_STATE_RUNNING
# ...
# Job state: JOB_STATE_SUCCEEDED
```

Retrieve batch output
---------------------

When a batch prediction task is complete, the output is stored in the Cloud Storage bucket or BigQuery table that you specified in your request.

What's next
-----------

- Learn how to [get text embeddings](/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings).