Get anomaly detection jobs configuration info Generally available; Added in 5.5.0

GET /_ml/anomaly_detectors/{job_id}

All methods and paths for this operation:

GET /_ml/anomaly_detectors

GET /_ml/anomaly_detectors/{job_id}

You can get information for multiple anomaly detection jobs in a single API request by using a group name, a comma-separated list of jobs, or a wildcard expression. You can get information for all anomaly detection jobs by using _all, by specifying * as the <job_id>, or by omitting the <job_id>.

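As a rough sketch using the Python client shown in the examples at the end of this page, the different identifier forms map onto the job_id argument as follows (the job and group names other than high_sum_total_sales are hypothetical):

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder connection details

# A single job by its identifier
single = client.ml.get_jobs(job_id="high_sum_total_sales")

# A comma-separated list, a group name, or a wildcard expression
several = client.ml.get_jobs(job_id="job-1,job-2")        # hypothetical job identifiers
by_group = client.ml.get_jobs(job_id="sales-group")       # hypothetical group name
by_wildcard = client.ml.get_jobs(job_id="high_sum_*")

# All jobs: use _all, *, or omit job_id entirely
all_jobs = client.ml.get_jobs()
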
Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • job_id string | array[string] Required

    Identifier for the anomaly detection job. It can be a job identifier, a group name, or a wildcard expression. If you do not specify one of these options, the API returns information for all anomaly detection jobs.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    1. Contains wildcard expressions and there are no jobs that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    The default value is true, which returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

  • exclude_generated boolean

    Indicates if certain fields should be removed from the configuration on retrieval. This makes the retrieved configuration suitable for adding to another cluster. Both query parameters are illustrated in the sketch after this list.

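The two query parameters above map directly onto keyword arguments in the Python client. A minimal sketch, assuming the client object from the previous example and a hypothetical wildcard pattern:

from elasticsearch import NotFoundError

try:
    # allow_no_match=False turns "no matches" into a 404 rather than an empty jobs array
    resp = client.ml.get_jobs(job_id="nonexistent-*", allow_no_match=False)
except NotFoundError:
    print("No anomaly detection jobs matched the expression")

# exclude_generated=True strips generated fields so the returned configuration
# can be used to re-create the job on another cluster
portable = client.ml.get_jobs(job_id="high_sum_total_sales", exclude_generated=True)
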
Responses

  • 200 application/json
    • count number Required
    • jobs array[object] Required
      • allow_lazy_open boolean Required

        Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node.

      • analysis_config object Required

        The analysis configuration, which specifies how to analyze the data. After you create a job, you cannot change the analysis configuration; all the properties are informational.

        • bucket_span string

          The size of the interval that the analysis is aggregated into, typically between 5m and 1h. This value should be either a whole number of days or equate to a whole number of buckets in one day. If the anomaly detection job uses a datafeed with aggregations, this value must also be divisible by the interval of the date histogram aggregation.

        • categorization_analyzer
        • categorization_field_name string

          If this property is specified, the values of the specified field will be categorized. The resulting categories must be used in a detector by setting by_field_name, over_field_name, or partition_field_name to the keyword mlcategory.

        • categorization_filters array[string]

          If categorization_field_name is specified, you can also define optional filters. This property expects an array of regular expressions. The expressions are used to filter out matching sequences from the categorization field values. You can use this functionality to fine tune the categorization by excluding sequences from consideration when categories are defined. For example, you can exclude SQL statements that appear in your log files. This property cannot be used at the same time as categorization_analyzer. If you only want to define simple regular expression filters that are applied prior to tokenization, setting this property is the easiest method. If you also want to customize the tokenizer or post-tokenization filtering, use the categorization_analyzer property instead and include the filters as pattern_replace character filters. The effect is exactly the same.

        • detectors array[object] Required

          Detector configuration objects specify which data fields a job analyzes. They also specify which analytical functions are used. You can specify multiple detectors for a job. If the detectors array does not contain at least one detector, no analysis can occur and an error is returned.

        • influencers array[string]

          A comma-separated list of influencer field names. Typically these can be the by, over, or partition fields that are used in the detector configuration. You might also want to use a field name that is not specifically named in a detector, but is available as part of the input data. When you use multiple detectors, the use of influencers is recommended as it aggregates results for each influencer entity.

        • latency string

          The size of the window in which to expect data that is out of time order. If you specify a non-zero value, it must be greater than or equal to one second. NOTE: Latency is applicable only when you send data by using the post data API.

        • model_prune_window string

          Advanced configuration option. Affects the pruning of models that have not been updated for the given time duration. The value must be set to a multiple of the bucket_span. If set too low, important information may be removed from the model. For jobs created in 8.1 and later, the default value is the greater of 30d or 20 times bucket_span.

        • multivariate_by_fields boolean

          This functionality is reserved for internal use. It is not supported for use in customer environments and is not subject to the support SLA of official GA features. If set to true, the analysis will automatically find correlations between metrics for a given by field value and report anomalies when those correlations cease to hold. For example, suppose CPU and memory usage on host A is usually highly correlated with the same metrics on host B. Perhaps this correlation occurs because they are running a load-balanced application. If you enable this property, anomalies will be reported when, for example, CPU usage on host A is high and the value of CPU usage on host B is low. That is to say, you’ll see an anomaly when the CPU of host A is unusual given the CPU of host B. To use the multivariate_by_fields property, you must also specify by_field_name in your detector.

        • per_partition_categorization object

          Settings related to how categorization interacts with partition fields.

        • summary_count_field_name string

          If this property is specified, the data that is fed to the job is expected to be pre-summarized. This property value is the name of the field that contains the count of raw data points that have been summarized. The same summary_count_field_name applies to all detectors in the job. NOTE: The summary_count_field_name property cannot be used with the metric function.

      • analysis_limits object

        Limits can be applied for the resources required to hold the mathematical models in memory. These limits are approximate and can be set per job. They do not control the memory used by other processes, for example the Elasticsearch Java processes.

        • categorization_examples_limit number

          The maximum number of examples stored per category in memory and in the results data store. If you increase this value, more examples are available, however it requires that you have more storage available. If you set this value to 0, no examples are stored. NOTE: The categorization_examples_limit applies only to analysis that uses categorization.

          Default value is 4.

        • model_memory_limit
      • background_persist_interval string

        Advanced configuration option. The time between each periodic persistence of the model. The default value is randomized between 3 and 4 hours, which avoids all jobs persisting at exactly the same time. The smallest allowed value is 1 hour.

      • blocked object
        • reason string Required

          Values are delete, reset, or revert.

        • task_id string
      • create_time string | number

        The time the job was created, as a date-time string or as epoch time in milliseconds.

      • custom_settings object

        Advanced configuration option. Contains custom metadata about the job.

      • daily_model_snapshot_retention_after_days number

        Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job. Valid values range from 0 to model_snapshot_retention_days.

        Default value is 1.

      • data_description object Required

        The data description defines the format of the input data when you send data to the job by using the post data API. Note that when configuring a datafeed, these properties are automatically set. When data is received via the post data API, it is not stored in Elasticsearch. Only the results for anomaly detection are retained.

        • format string

          Only JSON format is supported at this time.

        • time_field string

          The name of the field that contains the timestamp.

        • time_format string

          The time format, which can be epoch, epoch_ms, or a custom pattern. The value epoch refers to UNIX or Epoch time (the number of seconds since 1 Jan 1970). The value epoch_ms indicates that time is measured in milliseconds since the epoch. The epoch and epoch_ms time formats accept either integer or real values. Custom patterns must conform to the Java DateTimeFormatter class. When you use date-time formatting patterns, it is recommended that you provide the full date, time and time zone. For example: yyyy-MM-dd'T'HH:mm:ssX. If the pattern that you specify is not sufficient to produce a complete timestamp, job creation fails.

          Default value is epoch.

        • field_delimiter string
      • datafeed_config object

        The datafeed, which retrieves data from Elasticsearch for analysis by the job. You can associate only one datafeed with each anomaly detection job.

        • aggregations object
        • authorization object

          The security privileges that the datafeed uses to run its queries. If Elastic Stack security features were disabled at the time of the most recent update to the datafeed, this property is omitted.

        • chunking_config object
        • datafeed_id string Required
        • frequency string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • indices array[string] Required
        • indexes array[string]
        • job_id string Required
        • max_empty_searches number
        • query_delay string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • script_fields object
          • * object Additional properties
        • scroll_size number
        • delayed_data_check_config object Required
        • runtime_mappings object
        • indices_options object

          Controls how to deal with unavailable concrete indices (closed or missing), how wildcard expressions are expanded to actual indices (all, closed or open indices) and how to deal with wildcard expressions that resolve to no indices.

        • query object Required

          The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {"boost": 1}}.

          For details, refer to the Query DSL documentation.
      • deleting boolean

        Indicates that the process of deleting the job is in progress but not yet completed. It is only reported when true.

      • description string

        A description of the job.

      • finished_time string | number

        If the job closed or failed, this is the time the job finished, otherwise it is null; the value is a date-time string or epoch time in milliseconds. This property is informational; you cannot change its value.

      • groups array[string]

        A list of job groups. A job can belong to no groups or many.

      • job_id string Required

        Identifier for the anomaly detection job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

      • job_type string

        Reserved for future use, currently set to anomaly_detector.

      • job_version string

        The machine learning configuration version number at which the job was created.

      • model_plot_config object

        This advanced configuration option stores model information along with the results. It provides a more detailed view into anomaly detection. Model plot provides a simplified and indicative view of the model and its bounds.

        • annotations_enabled boolean Generally available; Added in 7.9.0

          If true, enables calculation and storage of the model change annotations for each entity that is being analyzed.

          Default value is true.

        • enabled boolean

          If true, enables calculation and storage of the model bounds for each entity that is being analyzed.

          Default value is false.

        • terms string

          Limits data collection to this comma-separated list of partition or by field values. If terms are not specified or the list is empty, no filtering is applied. Wildcards are not supported. Only the specified terms can be viewed when using the Single Metric Viewer.

      • model_snapshot_id string
      • model_snapshot_retention_days number Required

        Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. By default, snapshots ten days older than the newest snapshot are deleted.

      • renormalization_window_days number

        Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. The default value is the longer of 30 days or 100 bucket_spans.

      • results_index_name string Required

        A text string that affects the name of the machine learning results index. The default value is shared, which generates an index named .ml-anomalies-shared.

      • results_retention_days number

        Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. Annotations generated by the system also count as results for retention purposes; they are deleted after the same number of days as results. Annotations added by users are retained forever.

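To tie the response attributes above together, here is a minimal sketch (again using the Python client and the client object from the earlier sketches) that reads a few of the documented fields from the response; it only touches fields listed above:

resp = client.ml.get_jobs(job_id="high_sum_total_sales")
print("count:", resp["count"])
for job in resp["jobs"]:
    cfg = job["analysis_config"]
    print(job["job_id"], "created with version", job.get("job_version"))
    print("  bucket_span:", cfg.get("bucket_span"))
    for detector in cfg["detectors"]:
        print("  detector:", detector)
    print("  influencers:", cfg.get("influencers", []))
    if "datafeed_config" in job:
        print("  datafeed:", job["datafeed_config"]["datafeed_id"])
    print("  results index suffix:", job["results_index_name"])
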
GET /_ml/anomaly_detectors/{job_id}
GET _ml/anomaly_detectors/high_sum_total_sales
resp = client.ml.get_jobs(
    job_id="high_sum_total_sales",
)
const response = await client.ml.getJobs({
  job_id: "high_sum_total_sales",
});
response = client.ml.get_jobs(
  job_id: "high_sum_total_sales"
)
$resp = $client->ml()->getJobs([
    "job_id" => "high_sum_total_sales",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/high_sum_total_sales"
client.ml().getJobs(g -> g
    .jobId("high_sum_total_sales")
);