LLM models.
Classes
Claude3TextGenerator
Claude3TextGenerator(
    *,
    model_name: typing.Optional[
        typing.Literal[
            "claude-3-sonnet", "claude-3-haiku", "claude-3-5-sonnet", "claude-3-opus"
        ]
    ] = None,
    session: typing.Optional[bigframes.session.Session] = None,
    connection_name: typing.Optional[str] = None
)

Claude3 text generator LLM model.
Go to the Google Cloud Console -> Vertex AI -> Model Garden page to enable the models before use. You must have the Consumer Procurement Entitlement Manager Identity and Access Management (IAM) role to enable the models; see https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-partner-models#grant-permissions.
The models are only available in specific regions; check https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude#regions for details.
Parameters

| Name | Description |
|---|---|
| model_name | str, default "claude-3-sonnet". The model for natural language tasks. Possible values are "claude-3-sonnet", "claude-3-haiku", "claude-3-5-sonnet", and "claude-3-opus". "claude-3-sonnet" (deprecated) is Anthropic's dependable combination of skills and speed, engineered to be reliable for scaled AI deployments across a variety of use cases. "claude-3-haiku" is Anthropic's fastest, most compact vision and text model for near-instant responses to simple queries, meant for seamless AI experiences mimicking human interactions. "claude-3-5-sonnet" is Anthropic's most powerful AI model and maintains the speed and cost of Claude 3 Sonnet, a mid-tier model. "claude-3-opus" is Anthropic's second-most powerful AI model, with strong performance on highly complex tasks. See https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude#available-claude-models. If no value is provided, "claude-3-sonnet" is used by default and a warning is issued. |
| session | bigframes.Session or None. BQ session to create the model. If None, the global default session is used. |
| connection_name | str or None. Connection to connect with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>.<LOCATION>.<CONNECTION_ID>. |
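The sketch below shows the typical create-and-predict flow for this class. It is a minimal illustration, not the canonical usage: the project, location, and connection ID are placeholders, and the assumption that prompts are passed in a column named "prompt" should be verified against the installed bigframes version.

```python
import bigframes.pandas as bpd
from bigframes.ml import llm

# Placeholder connection; it must have the required Vertex AI permissions
# and the Claude model must already be enabled in Model Garden.
claude = llm.Claude3TextGenerator(
    model_name="claude-3-5-sonnet",
    connection_name="my-project.us.my-connection",
)

# Prompts are passed as a DataFrame column (assumed to be named "prompt").
prompts = bpd.DataFrame(
    {"prompt": ["Summarize BigQuery DataFrames in one sentence."]}
)

# predict() runs the model in BigQuery and returns a DataFrame that includes
# the generated text alongside the input columns.
result = claude.predict(prompts)
result.peek()
```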
GeminiTextGenerator
GeminiTextGenerator(
    *,
    model_name: typing.Optional[
        typing.Literal[
            "gemini-1.5-pro-preview-0514",
            "gemini-1.5-flash-preview-0514",
            "gemini-1.5-pro-001",
            "gemini-1.5-pro-002",
            "gemini-1.5-flash-001",
            "gemini-1.5-flash-002",
            "gemini-2.0-flash-exp",
            "gemini-2.0-flash-001",
            "gemini-2.0-flash-lite-001",
        ]
    ] = None,
    session: typing.Optional[bigframes.session.Session] = None,
    connection_name: typing.Optional[str] = None,
    max_iterations: int = 300
)

Gemini text generator LLM model.
Parameters

| Name | Description |
|---|---|
| model_name | str, default "gemini-2.0-flash-001". The model for natural language tasks. Accepted values are "gemini-1.5-pro-preview-0514", "gemini-1.5-flash-preview-0514", "gemini-1.5-pro-001", "gemini-1.5-pro-002", "gemini-1.5-flash-001", "gemini-1.5-flash-002", "gemini-2.0-flash-exp", "gemini-2.0-flash-lite-001", and "gemini-2.0-flash-001". If no value is provided, "gemini-2.0-flash-001" is used by default and a warning is issued. |
| session | bigframes.Session or None. BQ session to create the model. If None, the global default session is used. |
| connection_name | str or None. Connection to connect with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>.<LOCATION>.<CONNECTION_ID>. |
| max_iterations | Optional[int], default 300. The number of steps to run when performing supervised tuning. |
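As a hedged sketch (placeholder data; the "prompt" input column and per-model support for supervised tuning are assumptions to confirm in the bigframes release you use), generation and tuning look roughly like this, with max_iterations only mattering for the fit step:

```python
import bigframes.pandas as bpd
from bigframes.ml import llm

# Omitting model_name falls back to "gemini-2.0-flash-001" with a warning,
# so it is stated explicitly here.
gemini = llm.GeminiTextGenerator(model_name="gemini-2.0-flash-001")

prompts = bpd.DataFrame({"prompt": ["Explain BigQuery slots in one paragraph."]})
generated = gemini.predict(prompts)
generated.peek()

# Supervised tuning sketch (only for model versions that support tuning);
# max_iterations caps the number of tuning steps. The data below is illustrative.
# train_X = bpd.DataFrame({"prompt": ["question 1", "question 2"]})
# train_y = bpd.DataFrame({"label": ["answer 1", "answer 2"]})
# tuned = llm.GeminiTextGenerator(max_iterations=300).fit(train_X, train_y)
```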
MultimodalEmbeddingGenerator
MultimodalEmbeddingGenerator(
    *,
    model_name: typing.Optional[typing.Literal["multimodalembedding@001"]] = None,
    session: typing.Optional[bigframes.session.Session] = None,
    connection_name: typing.Optional[str] = None
)

Multimodal embedding generator LLM model.
Parameters

| Name | Description |
|---|---|
| model_name | str, default "multimodalembedding@001". The model for multimodal embedding. Can be set to "multimodalembedding@001". Multimodal embedding models return embeddings for text, image, and video inputs. If no value is provided, "multimodalembedding@001" is used by default and a warning is issued. |
| session | bigframes.Session or None. BQ session to create the model. If None, the global default session is used. |
| connection_name | str or None. Connection to connect with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>.<LOCATION>.<CONNECTION_ID>. |
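A minimal, text-only sketch follows; it assumes the "content" input column convention and a placeholder connection. Image and video inputs additionally require BigFrames multimodal (blob) columns, whose ingestion helpers are version-dependent and are therefore not shown here.

```python
import bigframes.pandas as bpd
from bigframes.ml import llm

# Placeholder connection with access to the multimodal embedding endpoint.
embedder = llm.MultimodalEmbeddingGenerator(
    model_name="multimodalembedding@001",
    connection_name="my-project.us.my-connection",
)

# Text rows embedded into the same vector space the model uses for images
# and video; the "content" column name is an assumption.
texts = bpd.DataFrame(
    {"content": ["a red bicycle leaning on a wall", "mountain lake at dawn"]}
)
embeddings = embedder.predict(texts)
embeddings.peek()
```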
TextEmbeddingGenerator
TextEmbeddingGenerator(
    *,
    model_name: typing.Optional[
        typing.Literal[
            "text-embedding-005",
            "text-embedding-004",
            "text-multilingual-embedding-002",
        ]
    ] = None,
    session: typing.Optional[bigframes.session.Session] = None,
    connection_name: typing.Optional[str] = None
)

Text embedding generator LLM model.
Parameters

| Name | Description |
|---|---|
| model_name | str, default "text-embedding-004". The model for text embedding. Possible values are "text-embedding-005", "text-embedding-004", and "text-multilingual-embedding-002". text-embedding models return embeddings for text inputs. text-multilingual-embedding models return embeddings for text inputs and support over 100 languages. If no value is provided, "text-embedding-004" is used by default and a warning is issued. |
| session | bigframes.Session or None. BQ session to create the model. If None, the global default session is used. |
| connection_name | str or None. Connection to connect with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>.<LOCATION>.<CONNECTION_ID>. |
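To round out the pattern, here is a minimal embedding sketch (placeholder documents; the "content" input column and the exact name of the output embedding column are assumptions to check against your bigframes version):

```python
import bigframes.pandas as bpd
from bigframes.ml import llm

embedder = llm.TextEmbeddingGenerator(model_name="text-embedding-005")

docs = bpd.DataFrame(
    {
        "content": [
            "BigQuery is a serverless data warehouse.",
            "BigFrames brings a pandas-like API to BigQuery.",
        ]
    }
)

# Each row gets an embedding vector; the result column is typically named
# along the lines of "ml_generate_embedding_result" (verify on your version).
embeddings = embedder.predict(docs)
embeddings.peek()
```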