Elastic Cloud Managed OTLP Endpoint

The Elastic Cloud Managed OTLP Endpoint lets you send OpenTelemetry data directly to Elastic Cloud using the OTLP protocol, with Elastic handling scaling, data processing, and storage. The managed endpoint can act as a Gateway Collector, so you can point your OpenTelemetry SDKs or Collectors directly at it.

This guide explains how to find your Elastic Cloud Managed OTLP Endpoint, create an API key for authentication, and configure different environments.
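
For example, an OpenTelemetry Collector that forwards data to the managed endpoint only needs an OTLP exporter pointing at it, authenticated with an API key in the Authorization header. The following is a minimal sketch; the endpoint URL and API key are placeholders that you replace with the values from your own deployment:

exporters:
  otlp:
    # Placeholders: use the endpoint and API key from your own deployment.
    endpoint: https://<your-motlp-endpoint>
    headers:
      Authorization: ApiKey <your-api-key>

OpenTelemetry SDKs can be pointed at the endpoint in a similar way through the standard OTLP exporter environment variables (depending on the SDK, the space in the header value may need to be percent-encoded):

export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-motlp-endpoint>"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=ApiKey <your-api-key>"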

Important

The Elastic Cloud Managed OTLP Endpoint is available on Elastic Cloud Serverless and will soon be supported on Elastic Cloud Hosted. It is not available for self-managed deployments.

This diagram shows data ingest using Elastic Distribution of OpenTelemetry and the Elastic Cloud Managed OTLP Endpoint:

mOTLP Reference architecture

For a detailed comparison of how EDOT data streams differ from classic Elastic APM data streams, refer to EDOT data streams compared to classic APM.

Telemetry is stored in Elastic in OTLP format, preserving resource attributes and original semantic conventions. If no specific dataset or namespace is provided, the data streams are: traces-generic.otel-default, metrics-generic.otel-default, and logs-generic.otel-default.

You don't need to use APM Server when ingesting data through the Managed OTLP Endpoint. The APM integration (.apm endpoint) is a legacy ingest path that only supports traces and translates OTLP telemetry to ECS, whereas the Elastic Cloud Managed OTLP Endpoint natively ingests OTLP data for logs, metrics, and traces.

To send data to Elastic through the Elastic Cloud Managed OTLP Endpoint, follow the Send data to the Elastic Cloud Managed OTLP Endpoint quickstart.

You can route logs to dedicated datasets by setting the data_stream.dataset attribute on the log record. The endpoint uses this attribute to route each log to the corresponding dataset.

For example, to route EDOT Cloud Forwarder logs to custom datasets, you can set the attribute on matching log records with the Collector's transform processor:

processors:
  transform:
    log_statements:
      - set(log.attributes["data_stream.dataset"], "aws.cloudtrail") where log.attributes["aws.cloudtrail.event_id"] != nil
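
For the statement above to take effect, the transform processor also has to be part of the logs pipeline in the Collector configuration. A minimal sketch, assuming an OTLP receiver and an OTLP exporter are already defined:

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [transform]
      exporters: [otlp]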

You can also use the OTEL_RESOURCE_ATTRIBUTES environment variable to set the data_stream.dataset attribute for all logs. For example:

export OTEL_RESOURCE_ATTRIBUTES="data_stream.dataset=app.orders"

The Elastic Cloud Managed OTLP Endpoint is designed to be highly available and resilient. However, there are scenarios in which data might be lost or not ingested completely. The Failure store is a mechanism that lets you recover from these scenarios.

The Failure store is always enabled for Elastic Cloud Managed OTLP Endpoint data streams. It prevents data loss from ingest pipeline exceptions and conflicts with data stream mappings: failed documents are stored in a separate index instead of being dropped. You can view the failed documents from the Data Set Quality page. Refer to Data set quality.

The following limitations apply when using the Elastic Cloud Managed OTLP Endpoint:

  • Tail-based sampling (TBS) is not available.
  • Universal Profiling is not available.
  • Only supports histograms with delta temporality. Cumulative histograms are dropped. (See the temporality example after this list.)
  • Latency distributions based on histogram values have limited precision due to the fixed boundaries of explicit bucket histograms.
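
Regarding the histogram limitation, OpenTelemetry SDKs that export over OTLP can usually be switched to delta temporality through the standard temporality preference environment variable; support varies by language implementation, so treat this as a sketch:

export OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=delta

When metrics pass through a Collector, the cumulativetodelta processor from the contrib distribution can perform a similar conversion before export.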

For more information on billing, refer to Elastic Cloud pricing.

Requests to the Elastic Cloud Managed OTLP Endpoint are subject to rate limiting. If you send data at a rate that exceeds the defined limits, your requests will be temporarily rejected.

The rate limit is currently set to 15 MB per second, with a burst limit of 30 MB per second. As long as your data ingestion rate stays at or below this average, your requests are accepted.

If you send data at a rate that exceeds the available limit, the Elastic Cloud Managed OTLP Endpoint responds with an HTTP 429 Too Many Requests status code. A log message similar to the following appears in the OpenTelemetry Collector's output:

{
  "code": 8,
  "message": "error exporting items, request to <ingest endpoint> responded with HTTP Status Code 429"
}

Once your sending rate drops back within the allowed limit, the system will automatically begin accepting requests again.
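
The Collector's OTLP exporter can smooth over short bursts above the limit with its standard retry and queue settings, so temporarily rejected batches are retried rather than dropped. The following is a sketch with explicit values that you should tune to your own throughput; the endpoint and API key are placeholders:

exporters:
  otlp:
    endpoint: https://<your-motlp-endpoint>
    headers:
      Authorization: ApiKey <your-api-key>
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
    sending_queue:
      enabled: true
      queue_size: 1000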

Note

If you need to increase the rate limit, reach out to Elastic Support.