Model(
    model_name: str,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
)

Retrieves the model resource and instantiates its representation.

Parameters
| Name | Type | Description |
| model_name | str | Required. A fully-qualified model resource name or model ID. Example: "projects/123/locations/us-central1/models/456" or "456" when project and location are initialized or passed. |
| project | str | Optional. Project to retrieve the model from. If not set, the project set in aiplatform.init will be used. |
| location | str | Optional. Location to retrieve the model from. If not set, the location set in aiplatform.init will be used. |
| credentials | auth_credentials.Credentials | Optional. Custom credentials to use to retrieve the model. Overrides credentials set in aiplatform.init. |
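For instance, a minimal sketch of instantiating the class (the project, location, and model IDs below are placeholders):

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# By model ID; project and location come from aiplatform.init.
my_model = aiplatform.Model("456")

# Or by fully-qualified resource name.
my_model = aiplatform.Model("projects/123/locations/us-central1/models/456")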
Inheritance
builtins.object > google.cloud.aiplatform.base.VertexAiResourceNoun > google.cloud.aiplatform.base.VertexAiResourceNounWithFutureManager > Model
builtins.object > google.cloud.aiplatform.base.FutureManager > google.cloud.aiplatform.base.VertexAiResourceNounWithFutureManager > Model

Properties
container_spec
The specification of the container that is to be used when deploying this Model. Not present for AutoML Models.
description
Description of the model.
predict_schemata
The schemata that describe formats of the Model's predictions and explanations, if available.
supported_deployment_resources_types
List of deployment resource types accepted for this Model.
When this Model is deployed, its prediction resources are described by
the prediction_resources field of the objects returned by
Endpoint.list_models(). Because not all Models support all resource
configuration types, the configuration types this Model supports are
listed here.
If no configuration types are listed, the Model cannot be
deployed to an Endpoint and does not support online predictions
(Endpoint.predict() or Endpoint.explain()). Such a Model can serve
predictions by using a BatchPredictionJob, if it has at least one entry
each in Model.supported_input_storage_formats and
Model.supported_output_storage_formats.
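As a sketch of how these properties combine (reusing my_model from above), one might gate the serving path on what the model supports:

# Online serving requires at least one deployment resources type; batch
# serving requires both input and output storage formats.
if my_model.supported_deployment_resources_types:
    print("supports online prediction via an Endpoint")
elif (my_model.supported_input_storage_formats
      and my_model.supported_output_storage_formats):
    print("supports batch prediction via a BatchPredictionJob")
else:
    print("cannot serve predictions")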
supported_export_formats
The formats and content types in which this Model may be exported. If empty, this Model is not available for export.
For example, if this model can be exported as a TensorFlow SavedModel and have its artifacts written to Cloud Storage, the expected value would be:
{'tf-saved-model': [<ExportableContent.ARTIFACT: 1>]}
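A short sketch of guarding an export on this property (the bucket path is a placeholder; export_model is documented below):

if "tf-saved-model" in my_model.supported_export_formats:
    my_model.export_model(
        export_format_id="tf-saved-model",
        artifact_destination="gs://my-bucket/models/",
    )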
supported_input_storage_formats
The formats this Model supports in the input_config field of a
BatchPredictionJob. If Model.predict_schemata.instance_schema_uri
exists, the instances should be given as per that schema.
Read the docs for more on batch prediction formats.
If this Model doesn't support any of these formats it means it cannot be
used with a BatchPredictionJob. However, if it has
supported_deployment_resources_types, it could serve online predictions
by using Endpoint.predict() or Endpoint.explain().
supported_output_storage_formats
The formats this Model supports in the output_config field of a
BatchPredictionJob.
If both Model.predict_schemata.instance_schema_uri and
Model.predict_schemata.prediction_schema_uri exist, the predictions
are returned together with their instances. In other words, the
prediction has the original instance data first, followed by the actual
prediction content (as per the schema).
Read the docs for more on batch prediction formats.
If this Model doesn't support any of these formats it means it cannot be
used with a BatchPredictionJob. However, if it has
supported_deployment_resources_types, it could serve online predictions
by using Endpoint.predict() or Endpoint.explain().
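For example, a hedged sketch that picks the batch input format from this property before launching a job (paths and job name are placeholders):

# Prefer CSV input when the model accepts it, otherwise fall back to JSONL.
instances_format = "csv" if "csv" in my_model.supported_input_storage_formats else "jsonl"
batch_prediction_job = my_model.batch_predict(
    job_display_name="prediction-123",
    gcs_source="gs://example-bucket/instances." + instances_format,
    instances_format=instances_format,
    gcs_destination_prefix="gs://example-bucket/predictions/",
)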
training_job
The TrainingJob that uploaded this Model, if any.
Raises
| Type | Description |
| api_core.exceptions.NotFound | If the Model's training job resource cannot be found on the Vertex service. | 
uri
Path to the directory containing the Model artifact and any of its supporting files. Not present for AutoML Models.
Methods
batch_predict
batch_predict(
    job_display_name: Optional[str] = None,
    gcs_source: Optional[Union[str, Sequence[str]]] = None,
    bigquery_source: Optional[str] = None,
    instances_format: str = "jsonl",
    gcs_destination_prefix: Optional[str] = None,
    bigquery_destination_prefix: Optional[str] = None,
    predictions_format: str = "jsonl",
    model_parameters: Optional[Dict] = None,
    machine_type: Optional[str] = None,
    accelerator_type: Optional[str] = None,
    accelerator_count: Optional[int] = None,
    starting_replica_count: Optional[int] = None,
    max_replica_count: Optional[int] = None,
    generate_explanation: Optional[bool] = False,
    explanation_metadata: Optional[
        google.cloud.aiplatform_v1.types.explanation_metadata.ExplanationMetadata
    ] = None,
    explanation_parameters: Optional[
        google.cloud.aiplatform_v1.types.explanation.ExplanationParameters
    ] = None,
    labels: Optional[Dict[str, str]] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
    encryption_spec_key_name: Optional[str] = None,
    sync: bool = True,
    create_request_timeout: Optional[float] = None,
    batch_size: Optional[int] = None,
)

Creates a batch prediction job using this Model and outputs
prediction results to the provided destination prefix in the specified
predictions_format. One source and one destination prefix are
required.

Example usage:

my_model.batch_predict(
    job_display_name="prediction-123",
    gcs_source="gs://example-bucket/instances.csv",
    instances_format="csv",
    bigquery_destination_prefix="projectId.bqDatasetId.bqTableId",
)
Parameters
| Name | Type | Description |
| job_display_name | str | Optional. The user-defined name of the BatchPredictionJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| generate_explanation | bool | Optional. Generate explanations along with the batch prediction results. This will cause the batch prediction output to include explanations based on the predictions_format. |
| explanation_metadata | explain.ExplanationMetadata | Optional. Explanation metadata configuration for this BatchPredictionJob. Can be specified only if generate_explanation is set to True. |
| explanation_parameters | explain.ExplanationParameters | Optional. Parameters to configure explaining for the Model's predictions. Can be specified only if generate_explanation is set to True. |
| encryption_spec_key_name | Optional[str] | Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect the model. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. |
| create_request_timeout | float | Optional. The timeout for the create request in seconds. |
| batch_size | int | Optional. The number of records (e.g. instances) sent to a machine replica in each batch of the operation. The machine type and the size of a single record should be considered when setting this parameter: a higher value speeds up the batch operation's execution, but a value that is too high can make a whole batch not fit in a machine's memory, causing the operation to fail. The default value is 64. |
Returns
| Type | Description |
| (jobs.BatchPredictionJob) | Instantiated representation of the created batch prediction job. | 
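Because batch_predict accepts sync=False, here is a sketch of the non-blocking pattern (wait() comes from the FutureManager base class shown under Inheritance; paths and the job name are placeholders):

batch_prediction_job = my_model.batch_predict(
    job_display_name="prediction-123",
    gcs_source="gs://example-bucket/instances.jsonl",
    gcs_destination_prefix="gs://example-bucket/predictions/",
    sync=False,
)
# Do other work here, then block until the job resource is ready.
batch_prediction_job.wait()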
deploy
deploy(
    endpoint: Optional[google.cloud.aiplatform.models.Endpoint] = None,
    deployed_model_display_name: Optional[str] = None,
    traffic_percentage: Optional[int] = 0,
    traffic_split: Optional[Dict[str, int]] = None,
    machine_type: Optional[str] = None,
    min_replica_count: int = 1,
    max_replica_count: int = 1,
    accelerator_type: Optional[str] = None,
    accelerator_count: Optional[int] = None,
    service_account: Optional[str] = None,
    explanation_metadata: Optional[
        google.cloud.aiplatform_v1.types.explanation_metadata.ExplanationMetadata
    ] = None,
    explanation_parameters: Optional[
        google.cloud.aiplatform_v1.types.explanation.ExplanationParameters
    ] = None,
    metadata: Optional[Sequence[Tuple[str, str]]] = (),
    encryption_spec_key_name: Optional[str] = None,
    sync=True,
    deploy_request_timeout: Optional[float] = None,
    autoscaling_target_cpu_utilization: Optional[int] = None,
    autoscaling_target_accelerator_duty_cycle: Optional[int] = None,
)

Deploys the model to an endpoint. An endpoint will be created if unspecified.

Parameters
| Name | Type | Description |
| endpoint | Endpoint | Optional. Endpoint to deploy the model to. If not specified, the endpoint display name will be the model display name + '_endpoint'. |
| deployed_model_display_name | str | Optional. The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used. |
| traffic_percentage | int | Optional. Desired traffic to the newly deployed model. Defaults to 0 if there are pre-existing deployed models. Defaults to 100 if there are no pre-existing deployed models. Negative values are not allowed. Traffic of previously deployed models at the endpoint will be scaled down to accommodate the new deployed model's traffic. Should not be provided if traffic_split is provided. |
| traffic_split | Dict[str, int] | Optional. A map from a DeployedModel's ID to the percentage of this Endpoint's traffic that should be forwarded to that DeployedModel. If a DeployedModel's ID is not listed in this map, then it receives no traffic. The traffic percentage values must add up to 100, or the map must be empty if the Endpoint is to not accept any traffic at the moment. The key for the model being deployed is "0". Should not be provided if traffic_percentage is provided. |
| machine_type | str | Optional. The type of machine. Not specifying a machine type will result in the model being deployed with automatic resources. |
| min_replica_count | int | Optional. The minimum number of machine replicas this deployed model will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. |
| max_replica_count | int | Optional. The maximum number of replicas this deployed model may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the deployed model increases beyond what its replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, the larger of min_replica_count or 1 will be used. |
| accelerator_type | str | Optional. Hardware accelerator type. Must also set accelerator_count if used. One of ACCELERATOR_TYPE_UNSPECIFIED, NVIDIA_TESLA_K80, NVIDIA_TESLA_P100, NVIDIA_TESLA_V100, NVIDIA_TESLA_P4, NVIDIA_TESLA_T4. |
| accelerator_count | int | Optional. The number of accelerators to attach to a worker replica. |
| service_account | str | The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account. |
| explanation_metadata | explain.ExplanationMetadata | Optional. Metadata describing the Model's input and output for explanation. Both explanation_metadata and explanation_parameters must be passed together when used. |
| explanation_parameters | explain.ExplanationParameters | Optional. Parameters to configure explaining for the Model's predictions. |
| metadata | Sequence[Tuple[str, str]] | Optional. Strings which should be sent along with the request as metadata. |
| encryption_spec_key_name | Optional[str] | Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect the model. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. |
| deploy_request_timeout | float | Optional. The timeout for the deploy request in seconds. |
| autoscaling_target_cpu_utilization | int | Optional. Target CPU utilization to use for autoscaling replicas. A default value of 60 will be used if not specified. |
| autoscaling_target_accelerator_duty_cycle | int | Optional. Target accelerator duty cycle. Must also set accelerator_type and accelerator_count if specified. A default value of 60 will be used if not specified. |
| sync | bool | Whether to execute this method synchronously. If False, this method will be executed in a concurrent Future and any downstream object will be immediately returned and synced when the Future has completed. |
Returns
| Type | Description |
| endpoint ("Endpoint") | Endpoint with the deployed model. | 
export_model
export_model(
    export_format_id: str,
    artifact_destination: Optional[str] = None,
    image_destination: Optional[str] = None,
    sync: bool = True,
)

Exports a trained, exportable Model to a location specified by the user.
A Model is considered exportable if it has at least one entry in supported_export_formats.
Either artifact_destination or image_destination must be provided.
Usage:

my_model.export_model(
    export_format_id='tf-saved-model',
    artifact_destination='gs://my-bucket/models/',
)

or

my_model.export_model(
    export_format_id='custom-model',
    image_destination='us-central1-docker.pkg.dev/projectId/repo/image',
)
Parameters
| Name | Type | Description |
| export_format_id | str | Required. The ID of the format in which the Model must be exported. The list of export formats that this Model supports can be found by reading the supported_export_formats property. |
| artifact_destination | str | The Cloud Storage location where the Model artifact is to be written. Under the given destination directory a new one named "model-export-<model-display-name>-<timestamp-of-export-call>", where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format, will be created. Inside, the Model and any of its supporting files will be written. |
| image_destination | str | The Google Container Registry or Artifact Registry URI where the Model container image will be copied to. Accepted forms: a Google Container Registry path (for example: us.gcr.io/projectId/imageName:tag) or an Artifact Registry path (for example: us-central1-docker.pkg.dev/projectId/repoName/imageName:tag). |
| sync | bool | Whether to execute this export synchronously. If False, this method will be executed in a concurrent Future and any downstream object will be immediately returned and synced when the Future has completed. |
Raises
| Type | Description |
| ValueError | If model does not support exporting. | 
| ValueError | If invalid arguments or export formats are provided. | 
Returns
| Type | Description |
| output_info (Dict[str, str]) | Details of the completed export with output destination paths to the artifacts or container image. | 
get_model_evaluation
get_model_evaluation(evaluation_id: Optional[str] = None)

Returns a ModelEvaluation resource and instantiates its representation. If no evaluation_id is passed, it will return the first evaluation associated with this model.
Example usage:
my_model = Model(
    model_name="projects/123/locations/us-central1/models/456"
)
my_evaluation = my_model.get_model_evaluation(
    evaluation_id="789"
)
# If no arguments are passed, this returns the first evaluation for the model
my_evaluation = my_model.get_model_evaluation()
Parameters
| Name | Type | Description |
| evaluation_id | str | Optional. The ID of the model evaluation to retrieve. |
Returns
| Type | Description |
| model_evaluation.ModelEvaluation | Instantiated representation of the ModelEvaluation resource. | 
list
list(
    filter: Optional[str] = None,
    order_by: Optional[str] = None,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
)

List all Model resource instances.

Example usage:

aiplatform.Model.list(
    filter='labels.my_label="my_label_value" AND display_name="my_model"',
)
Parameters
| Name | Type | Description |
| filter | str | Optional. An expression for filtering the results of the request. For field names both snake_case and camelCase are supported. |
| order_by | str | Optional. A comma-separated list of fields to order by, sorted in ascending order. Use "desc" after a field name for descending. Supported fields: display_name, create_time, update_time. |
| project | str | Optional. Project to retrieve list from. If not set, the project set in aiplatform.init will be used. |
| location | str | Optional. Location to retrieve list from. If not set, the location set in aiplatform.init will be used. |
| credentials | auth_credentials.Credentials | Optional. Custom credentials to use to retrieve list. Overrides credentials set in aiplatform.init. |
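For instance, a sketch combining a label filter with a descending sort (the label key and value are placeholders):

models = aiplatform.Model.list(
    filter='labels.team="fraud-detection"',
    order_by="create_time desc",
)
for model in models:
    print(model.display_name, model.resource_name)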
list_model_evaluations
list_model_evaluations()

List all Model Evaluation resources associated with this model.
Example Usage:
my_model = Model(
    model_name="projects/123/locations/us-central1/models/456"
)
my_evaluations = my_model.list_model_evaluations()
Returns
| Type | Description |
| List[model_evaluation.ModelEvaluation] | List of ModelEvaluation resources for the model. | 
update
update(
    display_name: Optional[str] = None,
    description: Optional[str] = None,
    labels: Optional[Dict[str, str]] = None,
)

Updates a model.

Example usage:

my_model = my_model.update(
    display_name='my-model',
    description='my description',
    labels={'key': 'value'},
)
Parameters
| Name | Type | Description |
| display_name | str | The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| description | str | The description of the model. |
| labels | Dict[str, str] | Optional. The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
Raises
| Type | Description |
| ValueError | If `labels` is not the correct format. | 
Returns
| Type | Description |
| model | Updated model resource. | 
upload
upload(
    serving_container_image_uri: str,
    *,
    artifact_uri: Optional[str] = None,
    serving_container_predict_route: Optional[str] = None,
    serving_container_health_route: Optional[str] = None,
    description: Optional[str] = None,
    serving_container_command: Optional[Sequence[str]] = None,
    serving_container_args: Optional[Sequence[str]] = None,
    serving_container_environment_variables: Optional[Dict[str, str]] = None,
    serving_container_ports: Optional[Sequence[int]] = None,
    instance_schema_uri: Optional[str] = None,
    parameters_schema_uri: Optional[str] = None,
    prediction_schema_uri: Optional[str] = None,
    explanation_metadata: Optional[
        google.cloud.aiplatform_v1.types.explanation_metadata.ExplanationMetadata
    ] = None,
    explanation_parameters: Optional[
        google.cloud.aiplatform_v1.types.explanation.ExplanationParameters
    ] = None,
    display_name: Optional[str] = None,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
    labels: Optional[Dict[str, str]] = None,
    encryption_spec_key_name: Optional[str] = None,
    staging_bucket: Optional[str] = None,
    sync=True,
    upload_request_timeout: Optional[float] = None
)

Uploads a model and returns a Model representing the uploaded Model resource.

Example usage:

my_model = Model.upload(
    display_name='my-model',
    artifact_uri='gs://my-model/saved-model',
    serving_container_image_uri='tensorflow/serving',
)
Parameters
| Name | Type | Description |
| display_name | str | Optional. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| serving_container_image_uri | str | Required. The URI of the Model serving container. |
| artifact_uri | str | Optional. The path to the directory containing the Model artifact and any of its supporting files. Leave blank for custom container prediction. Not present for AutoML Models. |
| serving_container_predict_route | str | Optional. An HTTP path to send prediction requests to the container, and which must be supported by it. If not specified, a default HTTP path will be used by Vertex AI. |
| serving_container_health_route | str | Optional. An HTTP path to send health check requests to the container, and which must be supported by it. If not specified, a standard HTTP path will be used by Vertex AI. |
| description | str | The description of the model. |
| instance_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the format of a single instance, which is used in PredictRequest.instances, ExplainRequest.instances and BatchPredictionJob.input_config. |
| parameters_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the parameters of prediction and explanation via PredictRequest.parameters, ExplainRequest.parameters and BatchPredictionJob.model_parameters. |
| prediction_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the format of a single prediction produced by this Model, which is returned via PredictResponse.predictions, ExplainResponse.explanations and BatchPredictionJob.output_config. |
| explanation_metadata | explain.ExplanationMetadata | Optional. Metadata describing the Model's input and output for explanation. Both explanation_metadata and explanation_parameters must be passed together when used. |
| explanation_parameters | explain.ExplanationParameters | Optional. Parameters to configure explaining for the Model's predictions. |
| labels | Dict[str, str] | Optional. The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
| encryption_spec_key_name | Optional[str] | Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect the model. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. |
| staging_bucket | str | Optional. Bucket to stage local model artifacts. Overrides staging_bucket set in aiplatform.init. |
| upload_request_timeout | float | Optional. The timeout for the upload request in seconds. |
Raises
| Type | Description |
| ValueError | If only `explanation_metadata` or `explanation_parameters` is specified. Also if model directory does not contain a supported model file. | 
Returns
| Type | Description |
| model | Instantiated representation of the uploaded model resource. | 
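As a complement to the example above, a sketch of uploading a model served from a custom container (the image URI, routes, and port are assumptions; artifact_uri is left blank for custom container prediction, per the table):

my_model = Model.upload(
    display_name='my-custom-model',
    serving_container_image_uri='us-central1-docker.pkg.dev/projectId/repo/serve:latest',
    serving_container_predict_route='/predict',
    serving_container_health_route='/health',
    serving_container_ports=[8080],
)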
upload_scikit_learn_model_file
upload_scikit_learn_model_file(
    model_file_path: str,
    sklearn_version: Optional[str] = None,
    display_name: Optional[str] = None,
    description: Optional[str] = None,
    instance_schema_uri: Optional[str] = None,
    parameters_schema_uri: Optional[str] = None,
    prediction_schema_uri: Optional[str] = None,
    explanation_metadata: Optional[
        google.cloud.aiplatform_v1.types.explanation_metadata.ExplanationMetadata
    ] = None,
    explanation_parameters: Optional[
        google.cloud.aiplatform_v1.types.explanation.ExplanationParameters
    ] = None,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
    labels: Optional[Dict[str, str]] = None,
    encryption_spec_key_name: Optional[str] = None,
    staging_bucket: Optional[str] = None,
    sync=True,
    upload_request_timeout: Optional[float] = None,
)

Uploads a model and returns a Model representing the uploaded Model resource.
Note: This function is experimental and can be changed in the future.
Example usage:
my_model = Model.upload_scikit_learn_model_file(
    model_file_path="iris.sklearn_model.joblib"
)
Parameters
| Name | Type | Description |
| model_file_path | str | Required. Local file path of the model. |
| sklearn_version | str | Optional. The version of the scikit-learn serving container. Supported versions: ["0.20", "0.22", "0.23", "0.24", "1.0"]. If the version is not specified, the latest version is used. |
| display_name | str | Optional. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| description | str | The description of the model. |
| instance_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the format of a single instance, which is used in PredictRequest.instances, ExplainRequest.instances and BatchPredictionJob.input_config. |
| parameters_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the parameters of prediction and explanation via PredictRequest.parameters, ExplainRequest.parameters and BatchPredictionJob.model_parameters. |
| prediction_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the format of a single prediction produced by this Model, which is returned via PredictResponse.predictions, ExplainResponse.explanations and BatchPredictionJob.output_config. |
| explanation_metadata | explain.ExplanationMetadata | Optional. Metadata describing the Model's input and output for explanation. Both explanation_metadata and explanation_parameters must be passed together when used. |
| explanation_parameters | explain.ExplanationParameters | Optional. Parameters to configure explaining for the Model's predictions. |
| labels | Dict[str, str] | Optional. The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
| encryption_spec_key_name | Optional[str] | Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect the model. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. |
| staging_bucket | str | Optional. Bucket to stage local model artifacts. Overrides staging_bucket set in aiplatform.init. |
| upload_request_timeout | float | Optional. The timeout for the upload request in seconds. |
Raises
| Type | Description |
| ValueError | If only `explanation_metadata` or `explanation_parameters` is specified. Also if model directory does not contain a supported model file. | 
Returns
| Type | Description |
| model | Instantiated representation of the uploaded model resource. | 
upload_tensorflow_saved_model
upload_tensorflow_saved_model(
    saved_model_dir: str,
    tensorflow_version: Optional[str] = None,
    use_gpu: bool = False,
    display_name: Optional[str] = None,
    description: Optional[str] = None,
    instance_schema_uri: Optional[str] = None,
    parameters_schema_uri: Optional[str] = None,
    prediction_schema_uri: Optional[str] = None,
    explanation_metadata: Optional[
        google.cloud.aiplatform_v1.types.explanation_metadata.ExplanationMetadata
    ] = None,
    explanation_parameters: Optional[
        google.cloud.aiplatform_v1.types.explanation.ExplanationParameters
    ] = None,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
    labels: Optional[Dict[str, str]] = None,
    encryption_spec_key_name: Optional[str] = None,
    staging_bucket: Optional[str] = None,
    sync=True,
    upload_request_timeout: Optional[float] = None,
)

Uploads a model and returns a Model representing the uploaded Model resource.
Note: This function is experimental and can be changed in the future.
Example usage:
my_model = Model.upload_tensorflow_saved_model(
    saved_model_dir="iris.tensorflow_model.SavedModel"
)
Parameters
| Name | Type | Description |
| saved_model_dir | str | Required. Local directory of the TensorFlow SavedModel. |
| tensorflow_version | str | Optional. The version of the TensorFlow serving container. Supported versions: ["1.15", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7"]. If the version is not specified, the latest version is used. |
| use_gpu | bool | Whether to use a GPU for model serving. |
| display_name | str | Optional. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| description | str | The description of the model. |
| instance_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the format of a single instance, which is used in PredictRequest.instances, ExplainRequest.instances and BatchPredictionJob.input_config. |
| parameters_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the parameters of prediction and explanation via PredictRequest.parameters, ExplainRequest.parameters and BatchPredictionJob.model_parameters. |
| prediction_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the format of a single prediction produced by this Model, which is returned via PredictResponse.predictions, ExplainResponse.explanations and BatchPredictionJob.output_config. |
| explanation_metadata | explain.ExplanationMetadata | Optional. Metadata describing the Model's input and output for explanation. Both explanation_metadata and explanation_parameters must be passed together when used. |
| explanation_parameters | explain.ExplanationParameters | Optional. Parameters to configure explaining for the Model's predictions. |
| labels | Dict[str, str] | Optional. The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
| encryption_spec_key_name | Optional[str] | Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect the model. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. |
| staging_bucket | str | Optional. Bucket to stage local model artifacts. Overrides staging_bucket set in aiplatform.init. |
| upload_request_timeout | float | Optional. The timeout for the upload request in seconds. |
Raises
| Type | Description |
| ValueError | If only `explanation_metadata` or `explanation_parameters` is specified. Also if model directory does not contain a supported model file. | 
Returns
| Type | Description |
| model | Instantiated representation of the uploaded model resource. | 
upload_xgboost_model_file
upload_xgboost_model_file(
    model_file_path: str,
    xgboost_version: Optional[str] = None,
    display_name: Optional[str] = None,
    description: Optional[str] = None,
    instance_schema_uri: Optional[str] = None,
    parameters_schema_uri: Optional[str] = None,
    prediction_schema_uri: Optional[str] = None,
    explanation_metadata: Optional[
        google.cloud.aiplatform_v1.types.explanation_metadata.ExplanationMetadata
    ] = None,
    explanation_parameters: Optional[
        google.cloud.aiplatform_v1.types.explanation.ExplanationParameters
    ] = None,
    project: Optional[str] = None,
    location: Optional[str] = None,
    credentials: Optional[google.auth.credentials.Credentials] = None,
    labels: Optional[Dict[str, str]] = None,
    encryption_spec_key_name: Optional[str] = None,
    staging_bucket: Optional[str] = None,
    sync=True,
    upload_request_timeout: Optional[float] = None,
)

Uploads a model and returns a Model representing the uploaded Model resource.
Note: This function is experimental and can be changed in the future.
Example usage:
my_model = Model.upload_xgboost_model_file(
    model_file_path="iris.xgboost_model.bst"
)
Parameters
| Name | Type | Description |
| model_file_path | str | Required. Local file path of the model. |
| xgboost_version | str | Optional. The version of the XGBoost serving container. Supported versions: ["0.82", "0.90", "1.1", "1.2", "1.3", "1.4"]. If the version is not specified, the latest version is used. |
| display_name | str | Optional. The display name of the Model. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| description | str | The description of the model. |
| instance_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the format of a single instance, which is used in PredictRequest.instances, ExplainRequest.instances and BatchPredictionJob.input_config. |
| parameters_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the parameters of prediction and explanation via PredictRequest.parameters, ExplainRequest.parameters and BatchPredictionJob.model_parameters. |
| prediction_schema_uri | str | Optional. Points to a YAML file stored on Google Cloud Storage describing the format of a single prediction produced by this Model, which is returned via PredictResponse.predictions, ExplainResponse.explanations and BatchPredictionJob.output_config. |
| explanation_metadata | explain.ExplanationMetadata | Optional. Metadata describing the Model's input and output for explanation. Both explanation_metadata and explanation_parameters must be passed together when used. |
| explanation_parameters | explain.ExplanationParameters | Optional. Parameters to configure explaining for the Model's predictions. |
| labels | Dict[str, str] | Optional. The labels with user-defined metadata to organize your Models. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. |
| encryption_spec_key_name | Optional[str] | Optional. The Cloud KMS resource identifier of the customer-managed encryption key used to protect the model. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. |
| staging_bucket | str | Optional. Bucket to stage local model artifacts. Overrides staging_bucket set in aiplatform.init. |
| upload_request_timeout | float | Optional. The timeout for the upload request in seconds. |
Raises
| Type | Description |
| ValueError | If only `explanation_metadata` or `explanation_parameters` is specified. Also if model directory does not contain a supported model file. | 
Returns
| Type | Description |
| model | Instantiated representation of the uploaded model resource. |