Get trained models usage info Generally available

GET /_ml/trained_models/{model_id}/_stats

All methods and paths for this operation:

GET /_ml/trained_models/_stats

GET /_ml/trained_models/{model_id}/_stats

You can get usage information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.
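As a sketch of how the {model_id} path segment is formed, the helper below (a hypothetical illustration, not part of any client library) joins multiple IDs into a comma-separated expression or passes a wildcard through unchanged:

```python
# Sketch: building the _stats request path for one model, several
# models, or a wildcard expression. trained_models_stats_path is a
# hypothetical helper, not an Elasticsearch client function.

def trained_models_stats_path(model_ids=None):
    """Return the API path for the given model IDs.

    model_ids may be None (all models), a single string (one ID or a
    wildcard expression), or a list of strings that is joined into a
    comma-separated expression.
    """
    if model_ids is None:
        return "/_ml/trained_models/_stats"
    if isinstance(model_ids, (list, tuple)):
        model_ids = ",".join(model_ids)
    return f"/_ml/trained_models/{model_ids}/_stats"

print(trained_models_stats_path(["model-a", "model-b"]))
# /_ml/trained_models/model-a,model-b/_stats
print(trained_models_stats_path("flight-delay-*"))
# /_ml/trained_models/flight-delay-*/_stats
```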

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • model_id string | array[string] Required

    The unique identifier of the trained model or a model alias. It can be a comma-separated list or a wildcard expression.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no models that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, it returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of models.

  • size number

    Specifies the maximum number of models to obtain.
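The from and size parameters can be combined to page through a large set of models. The generator below is a minimal sketch of that paging arithmetic; it only computes the (from, size) pairs, with the actual API call left out so the logic stands alone:

```python
# Sketch: computing from/size pairs to page through `total` trained
# models, `size` at a time. Pair each yielded tuple with a
# GET _ml/trained_models/_stats?from=<offset>&size=<size> request.

def page_params(total, size):
    """Yield (from, size) pairs that cover `total` models."""
    offset = 0
    while offset < total:
        yield offset, min(size, total - offset)
        offset += size

print(list(page_params(7, 3)))
# [(0, 3), (3, 3), (6, 1)]
```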

Responses

  • 200 application/json
    • count number Required

      The total number of trained model statistics that matched the requested ID patterns. Could be higher than the number of items in the trained_model_stats array as the size of the array is restricted by the supplied size parameter.

    • trained_model_stats array[object] Required

      An array of trained model statistics, which are sorted by the model_id value in ascending order.

      • deployment_stats object

        A collection of deployment stats, which is present when the models are deployed.

        • adaptive_allocations object
        • allocation_status object

          The detailed allocation status for the deployment.

        • cache_size
        • deployment_id string Required

          The unique identifier for the trained model deployment.

        • error_count number

          The sum of error_count for all nodes in the deployment.

        • inference_count number

          The sum of inference_count for all nodes in the deployment.

        • model_id string Required

          The unique identifier for the trained model.

        • nodes array[object] Required

          The deployment stats for each node that currently has the model allocated. In serverless, stats are reported for a single unnamed virtual node.

        • number_of_allocations number

          The number of allocations requested.

        • peak_throughput_per_minute number Required
        • priority string Required

          Values are normal or low.

        • queue_capacity number

          The number of inference requests that can be queued before new requests are rejected.

        • rejected_execution_count number

          The sum of rejected_execution_count for all nodes in the deployment. Individual nodes reject an inference request if the inference queue is full. The queue size is controlled by the queue_capacity setting in the start trained model deployment API.

        • reason string

          The reason for the current deployment state. Usually only populated when the model is not deployed to a node.

        • state string

          The overall state of the deployment.

          Supported values include:

          • started: The deployment is usable; at least one node has the model allocated.
          • starting: The deployment has recently started but is not yet usable; the model is not allocated on any nodes.
          • stopping: The deployment is preparing to stop and deallocate the model from the relevant nodes.
          • failed: The deployment is in a failed state and must be re-deployed.

          Values are started, starting, stopping, or failed.

        • threads_per_allocation number

          The number of threads used by each allocation during inference.

        • timeout_count number

          The sum of timeout_count for all nodes in the deployment.

      • inference_stats object

        A collection of inference stats fields.

        • cache_miss_count number Required

          The number of times the model was loaded for inference and was not retrieved from the cache. If this number is close to the inference_count, the cache is not being appropriately used. This can be solved by increasing the cache size or its time-to-live (TTL). Refer to general machine learning settings for the appropriate settings.

        • failure_count number Required

          The number of failures when using the model for inference.

        • inference_count number Required

          The total number of times the model has been called for inference. This is across all inference contexts, including all pipelines.

        • missing_all_fields_count number Required

          The number of inference calls where all the training features for the model were missing.

      • ingest object

        A collection of ingest stats for the model across all nodes. The values are summations of the individual node statistics. The format matches the ingest section in the nodes stats API.

        • * object Additional properties
      • model_id string Required

        The unique identifier of the trained model.

      • model_size_stats object Required

        A collection of model size stats.

        • model_size_bytes
        • required_native_memory_bytes
      • pipeline_count number Required

        The number of ingest pipelines that currently refer to the model.
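As described above, a cache_miss_count close to the inference_count indicates the cache is not being used effectively, and rejected_execution_count reveals queue pressure. The sketch below pulls both signals out of one trained_model_stats entry; the sample data is illustrative, not real API output:

```python
# Sketch: interpreting one entry of the trained_model_stats array.
# Field names follow the response schema above; the sample values
# are made up for illustration.

def summarize(stats):
    """Return (cache_hit_ratio, rejected_count) for one stats entry."""
    inf = stats.get("inference_stats", {})
    count = inf.get("inference_count", 0)
    misses = inf.get("cache_miss_count", 0)
    hit_ratio = (count - misses) / count if count else None
    dep = stats.get("deployment_stats", {})
    return hit_ratio, dep.get("rejected_execution_count", 0)

sample = {
    "model_id": "my-model",
    "inference_stats": {"inference_count": 100, "cache_miss_count": 90,
                        "failure_count": 0, "missing_all_fields_count": 0},
    "deployment_stats": {"rejected_execution_count": 4},
}
ratio, rejected = summarize(sample)
print(ratio, rejected)  # 0.1 4
# A hit ratio this low suggests increasing the cache size or its TTL;
# a non-zero rejected count suggests raising queue_capacity.
```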

Examples

Console:
GET _ml/trained_models/_stats

Python:
resp = client.ml.get_trained_models_stats()

JavaScript:
const response = await client.ml.getTrainedModelsStats();

Ruby:
response = client.ml.get_trained_models_stats

PHP:
$resp = $client->ml()->getTrainedModelsStats();

curl:
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/trained_models/_stats"

Java:
client.ml().getTrainedModelsStats(g -> g);