
otelcol.exporter.kafka

otelcol.exporter.kafka accepts logs, metrics, and traces telemetry data from other otelcol components and sends it to Kafka.

Use otelcol.exporter.kafka together with otelcol.processor.batch to make sure otelcol.exporter.kafka doesn't slow down from sending Kafka a large number of small payloads.

Note

otelcol.exporter.kafka is a wrapper over the upstream OpenTelemetry Collector kafka exporter from the otelcol-contrib distribution. Bug reports or feature requests will be redirected to the upstream repository, if necessary.

Multiple otelcol.exporter.kafka components can be specified by giving them different labels.

Usage

```alloy
otelcol.exporter.kafka "LABEL" {
  protocol_version = "PROTOCOL_VERSION"
}
```

Arguments

You can use the following arguments with otelcol.exporter.kafka:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `protocol_version` | `string` | Kafka protocol version to use. | | yes |
| `brokers` | `list(string)` | Kafka brokers to connect to. | `["localhost:9092"]` | no |
| `client_id` | `string` | Consumer client ID to use. The ID is used for all produce requests. | `"sarama"` | no |
| `encoding` | `string` | (Deprecated) Encoding of payload read from Kafka. | `"otlp_proto"` | no |
| `partition_metrics_by_resource_attributes` | `bool` | Whether to include the hash of sorted resource attributes as the message partitioning key in metric messages sent to Kafka. | `false` | no |
| `partition_traces_by_id` | `bool` | Whether to include the trace ID as the message key in trace messages sent to Kafka. | `false` | no |
| `resolve_canonical_bootstrap_servers_only` | `bool` | Whether to resolve then reverse-lookup broker IPs during startup. | `false` | no |
| `timeout` | `duration` | The timeout for every attempt to send data to the backend. | `"5s"` | no |
| `topic_from_attribute` | `string` | A resource attribute whose value should be used as the message's topic. | `""` | no |
| `topic` | `string` | (Deprecated) Kafka topic to send to. | See below | no |

Warning

The topic and encoding arguments are deprecated in favor of the logs, metrics, and traces blocks.

When topic_from_attribute is set, it takes precedence over the topic argument in the logs, metrics, and traces blocks.

partition_traces_by_id doesn’t have any effect on Jaeger encoding exporters since Jaeger exporters include trace ID as the message key by default.
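For example, the following sketch routes messages to the topic named by a resource attribute and keys trace messages by trace ID. The attribute name `tenant.topic` is a hypothetical placeholder:

```alloy
otelcol.exporter.kafka "routed" {
  protocol_version = "2.0.0"

  // Hypothetical resource attribute; when present, it takes precedence
  // over the per-signal topic arguments.
  topic_from_attribute   = "tenant.topic"
  partition_traces_by_id = true
}
```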

Blocks

You can use the following blocks with otelcol.exporter.kafka:

| Block | Description | Required |
| ----- | ----------- | -------- |
| authentication | Configures authentication for connecting to Kafka brokers. | no |
| authentication > kerberos | Authenticates against Kafka brokers with Kerberos. | no |
| authentication > plaintext | Authenticates against Kafka brokers with plaintext. | no |
| authentication > sasl | Authenticates against Kafka brokers with SASL. | no |
| authentication > sasl > aws_msk | Additional SASL parameters when using AWS_MSK_IAM. | no |
| authentication > tls | Configures TLS for connecting to the Kafka brokers. | no |
| debug_metrics | Configures the metrics which this component generates to monitor its state. | no |
| logs | Configures how to send logs to Kafka brokers. | no |
| metadata | Configures how to retrieve metadata from Kafka brokers. | no |
| metadata > retry | Configures how to retry metadata retrieval. | no |
| metrics | Configures how to send metrics to Kafka brokers. | no |
| producer | Configures the Kafka producer. | no |
| retry_on_failure | Configures the retry mechanism for failed requests. | no |
| sending_queue | Configures batching of data before sending. | no |
| tls | Configures TLS for connecting to the Kafka brokers. | no |
| traces | Configures how to send traces to Kafka brokers. | no |

The > symbol indicates deeper levels of nesting. For example, authentication > tls refers to a tls block defined inside an authentication block.

logs

The logs block configures how to send logs to Kafka brokers.

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `encoding` | `string` | The encoding for logs. Refer to Supported encodings. | `"otlp_proto"` | no |
| `topic` | `string` | The name of the Kafka topic to which logs are exported. | `"otlp_logs"` | no |

metrics

The metrics block configures how to send metrics to Kafka brokers.

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `encoding` | `string` | The encoding for metrics. Refer to Supported encodings. | `"otlp_proto"` | no |
| `topic` | `string` | The name of the Kafka topic to which metrics are exported. | `"otlp_metrics"` | no |

traces

The traces block configures how to send traces to Kafka brokers.

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `encoding` | `string` | The encoding for traces. Refer to Supported encodings. | `"otlp_proto"` | no |
| `topic` | `string` | The name of the Kafka topic to which traces are exported. | `"otlp_spans"` | no |
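As a sketch of how the per-signal blocks fit together, the following configuration sends each signal to its own topic. The topic names are placeholders:

```alloy
otelcol.exporter.kafka "per_signal" {
  protocol_version = "2.0.0"

  logs {
    topic    = "telemetry-logs"  // placeholder topic name
    encoding = "otlp_json"
  }

  metrics {
    topic = "telemetry-metrics"  // keeps the default otlp_proto encoding
  }

  traces {
    topic    = "telemetry-traces"
    encoding = "jaeger_proto"    // traces-only encoding, keyed by trace ID
  }
}
```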

authentication

The authentication block holds the definition of different authentication mechanisms to use when connecting to Kafka brokers. It doesn’t support any arguments and is configured fully through inner blocks.

kerberos

The kerberos block configures Kerberos authentication against the Kafka broker.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `config_file` | `string` | Path to the Kerberos configuration, for example, `/etc/krb5.conf`. | | no |
| `disable_fast_negotiation` | `bool` | Disable PA-FX-FAST negotiation. | `false` | no |
| `keytab_file` | `string` | Path to the keytab file, for example, `/etc/security/kafka.keytab`. | | no |
| `password` | `secret` | Kerberos password to authenticate with. | | no |
| `realm` | `string` | Kerberos realm. | | no |
| `service_name` | `string` | Kerberos service name. | | no |
| `use_keytab` | `bool` | Enables using a keytab instead of a password. | | no |
| `username` | `string` | Kerberos username to authenticate as. | | yes |

When use_keytab is false, the password argument is required. When use_keytab is true, the file pointed to by the keytab_file argument is used for authentication instead. At most one of password or keytab_file must be provided.

disable_fast_negotiation is useful for Kerberos implementations which don’t support PA-FX-FAST (Pre-Authentication Framework - Fast) negotiation.
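For example, a minimal keytab-based Kerberos sketch, assuming placeholder principal, realm, and file paths:

```alloy
otelcol.exporter.kafka "kerberized" {
  protocol_version = "2.0.0"

  authentication {
    kerberos {
      username     = "alloy"        // placeholder principal
      realm        = "EXAMPLE.COM"  // placeholder realm
      service_name = "kafka"
      use_keytab   = true           // authenticate with the keytab instead of a password
      keytab_file  = "/etc/security/kafka.keytab"
      config_file  = "/etc/krb5.conf"
    }
  }
}
```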

plaintext

Caution

The plaintext block has been deprecated. Use sasl with mechanism set to PLAIN instead.

The plaintext block configures plain text authentication against Kafka brokers.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `password` | `secret` | Password to use for plain text authentication. | | yes |
| `username` | `string` | Username to use for plain text authentication. | | yes |
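A sketch of migrating from the deprecated plaintext block to the equivalent sasl configuration, with placeholder credentials:

```alloy
otelcol.exporter.kafka "plain_auth" {
  protocol_version = "2.0.0"

  authentication {
    // Replaces the deprecated plaintext block.
    sasl {
      mechanism = "PLAIN"
      username  = "user"    // placeholder
      password  = "secret"  // placeholder
    }
  }
}
```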

sasl

The sasl block configures SASL authentication against Kafka brokers.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `mechanism` | `string` | SASL mechanism to use when authenticating. | | yes |
| `password` | `secret` | Password to use for SASL authentication. | | yes |
| `username` | `string` | Username to use for SASL authentication. | | yes |
| `version` | `number` | Version of the SASL Protocol to use when authenticating. | `0` | no |

You can set the mechanism argument to one of the following strings:

  • "PLAIN"
  • "AWS_MSK_IAM"
  • "SCRAM-SHA-256"
  • "SCRAM-SHA-512"
  • "AWS_MSK_IAM_OAUTHBEARER"

When mechanism is set to "AWS_MSK_IAM", the aws_msk child block must also be provided.

You can set the version argument to either 0 or 1.
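For example, a SCRAM-based sketch with placeholder credentials:

```alloy
otelcol.exporter.kafka "scram" {
  protocol_version = "2.0.0"

  authentication {
    sasl {
      mechanism = "SCRAM-SHA-512"
      username  = "alloy"   // placeholder
      password  = "secret"  // placeholder
      version   = 1
    }
  }
}
```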

aws_msk

The aws_msk block configures extra parameters for SASL authentication when using the AWS_MSK_IAM or AWS_MSK_IAM_OAUTHBEARER mechanisms.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `broker_addr` | `string` | MSK address to connect to for authentication. | | yes |
| `region` | `string` | AWS region the MSK cluster is based in. | | yes |
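A sketch for AWS MSK IAM authentication. The broker address and region are placeholders, and because the sasl table above marks username and password as required, placeholder values are included for them as well:

```alloy
otelcol.exporter.kafka "msk" {
  protocol_version = "2.0.0"
  brokers          = ["b-1.example.kafka.us-east-1.amazonaws.com:9098"] // placeholder

  authentication {
    sasl {
      mechanism = "AWS_MSK_IAM"
      username  = "unused"  // placeholder; the sasl block marks these as required
      password  = "unused"

      aws_msk {
        region      = "us-east-1"
        broker_addr = "b-1.example.kafka.us-east-1.amazonaws.com:9098"
      }
    }
  }
}
```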

tls

The tls block configures TLS settings used for connecting to the Kafka brokers. If the tls block isn’t provided, TLS won’t be used for communication.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `ca_file` | `string` | Path to the CA file. | | no |
| `ca_pem` | `string` | CA PEM-encoded text to validate the server with. | | no |
| `cert_file` | `string` | Path to the TLS certificate. | | no |
| `cert_pem` | `string` | Certificate PEM-encoded text for client authentication. | | no |
| `cipher_suites` | `list(string)` | A list of TLS cipher suites that the TLS transport can use. | `[]` | no |
| `curve_preferences` | `list(string)` | Set of elliptic curves to use in a handshake. | `[]` | no |
| `include_system_ca_certs_pool` | `boolean` | Whether to load the system certificate authorities pool alongside the certificate authority. | `false` | no |
| `insecure_skip_verify` | `boolean` | Ignores insecure server TLS certificates. | | no |
| `insecure` | `boolean` | Disables TLS when connecting to the configured server. | | no |
| `key_file` | `string` | Path to the TLS certificate key. | | no |
| `key_pem` | `secret` | Key PEM-encoded text for client authentication. | | no |
| `max_version` | `string` | Maximum acceptable TLS version for connections. | `"TLS 1.3"` | no |
| `min_version` | `string` | Minimum acceptable TLS version for connections. | `"TLS 1.2"` | no |
| `reload_interval` | `duration` | The duration after which the certificate is reloaded. | `"0s"` | no |
| `server_name` | `string` | Verifies the hostname of server certificates when set. | | no |

If the server doesn't support TLS, you must set the insecure argument to true, which disables TLS for connections to the server.

If you set reload_interval to "0s", the certificate is never reloaded.

The following pairs of arguments are mutually exclusive and can’t both be set simultaneously:

  • ca_pem and ca_file
  • cert_pem and cert_file
  • key_pem and key_file

If cipher_suites is left blank, a safe default list is used. Refer to the Go TLS documentation for a list of supported cipher suites.

The curve_preferences argument determines the set of elliptic curves to prefer during a handshake in preference order. If not provided, a default list is used. The set of elliptic curves available are X25519, P521, P256, and P384.
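A mutual TLS sketch using file-based certificates. The broker address and paths are placeholders:

```alloy
otelcol.exporter.kafka "secure" {
  protocol_version = "2.0.0"
  brokers          = ["kafka-1.example.com:9093"]  // placeholder broker

  tls {
    ca_file     = "/etc/alloy/certs/ca.pem"          // placeholder paths
    cert_file   = "/etc/alloy/certs/client.pem"
    key_file    = "/etc/alloy/certs/client-key.pem"
    min_version = "TLS 1.2"
  }
}
```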

debug_metrics

The debug_metrics block configures the metrics that this component generates to monitor its state.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `disable_high_cardinality_metrics` | `boolean` | Whether to disable certain high cardinality metrics. | `true` | no |

disable_high_cardinality_metrics is the Alloy equivalent to the telemetry.disableHighCardinalityMetrics feature gate in the OpenTelemetry Collector. It removes attributes that could cause high cardinality metrics. For example, attributes with IP addresses and port numbers in metrics about HTTP and gRPC connections are removed.

Note

If configured, disable_high_cardinality_metrics only applies to otelcol.exporter.* and otelcol.receiver.* components.

metadata

The metadata block configures how to retrieve and store metadata from the Kafka broker.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `full` | `bool` | Whether to maintain a full set of metadata. | `true` | no |
| `refresh_interval` | `duration` | The frequency at which cluster metadata is refreshed. | `"10m"` | no |

When full is set to false, the client doesn't make the initial metadata request to the broker at startup.

Retrieving metadata may fail if the Kafka broker is starting up at the same time as the Alloy component. The retry child block can be provided to customize retry behavior.

retry

The retry block configures how to retry retrieving metadata when retrieval fails.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `backoff` | `duration` | Time to wait between retries. | `"250ms"` | no |
| `max_retries` | `number` | How many times to reattempt retrieving metadata. | `3` | no |
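For example, a sketch that tolerates a slow-starting broker by retrying metadata retrieval more patiently. The values are illustrative:

```alloy
otelcol.exporter.kafka "patient" {
  protocol_version = "2.0.0"

  metadata {
    full             = false  // skip the initial full metadata request at startup
    refresh_interval = "5m"

    retry {
      max_retries = 10
      backoff     = "2s"
    }
  }
}
```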

producer

The producer block configures the Kafka producer.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `compression` | `string` | The compression codec to use when sending messages. | `"none"` | no |
| `flush_max_messages` | `number` | The maximum number of messages the producer sends in a single broker request. | `0` | no |
| `max_message_bytes` | `number` | The maximum permitted size of a message in bytes. | `1000000` | no |
| `required_acks` | `number` | Controls when a message is regarded as transmitted. | `1` | no |

Refer to the Go sarama documentation for more information on required_acks.

You can set compression to one of none, gzip, snappy, lz4, or zstd. Refer to the Go sarama documentation for more information.
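For example, a sketch that trades CPU for bandwidth with zstd compression and waits for stronger delivery guarantees. The required_acks value of -1 follows sarama's convention of waiting for all in-sync replicas, and the message size is a placeholder:

```alloy
otelcol.exporter.kafka "durable" {
  protocol_version = "2.0.0"

  producer {
    compression       = "zstd"
    max_message_bytes = 4000000  // placeholder; must fit the broker's message.max.bytes
    required_acks     = -1       // sarama convention: wait for all in-sync replicas
  }
}
```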

retry_on_failure

The retry_on_failure block configures how failed requests to Kafka are retried.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `enabled` | `boolean` | Enables retrying failed requests. | `true` | no |
| `initial_interval` | `duration` | Initial time to wait before retrying a failed request. | `"5s"` | no |
| `max_elapsed_time` | `duration` | Maximum time to wait before discarding a failed batch. | `"5m"` | no |
| `max_interval` | `duration` | Maximum time to wait between retries. | `"30s"` | no |
| `multiplier` | `number` | Factor to grow wait time before retrying. | `1.5` | no |
| `randomization_factor` | `number` | Factor to randomize wait time before retrying. | `0.5` | no |

When enabled is true, failed batches are retried after a given interval. The initial_interval argument specifies how long to wait before the first retry attempt. If requests continue to fail, the time to wait before retrying increases by the factor specified by the multiplier argument, which must be greater than 1.0. The max_interval argument specifies the upper bound of how long to wait between retries.

The randomization_factor argument is useful for adding jitter between retrying Alloy instances. If randomization_factor is greater than 0, the wait time before retries is multiplied by a random factor in the range [ I - randomization_factor * I, I + randomization_factor * I], where I is the current interval.

If a batch hasn’t been sent successfully, it’s discarded after the time specified by max_elapsed_time elapses. If max_elapsed_time is set to "0s", failed requests are retried forever until they succeed.
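A sketch of a more patient retry policy than the defaults, under the semantics described above. The values are illustrative:

```alloy
otelcol.exporter.kafka "retrying" {
  protocol_version = "2.0.0"

  retry_on_failure {
    enabled          = true
    initial_interval = "10s"  // first retry after 10s
    multiplier       = 2.0    // then ~20s, ~40s, ... capped at max_interval
    max_interval     = "2m"
    max_elapsed_time = "15m"  // discard a batch after 15 minutes of failures
  }
}
```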

sending_queue

The sending_queue block configures an in-memory buffer of batches before data is sent to the Kafka brokers.

The following arguments are supported:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| `block_on_overflow` | `boolean` | The behavior when the component's TotalSize limit is reached. | `false` | no |
| `blocking` | `boolean` | (Deprecated) If true, blocks until the queue has room for a new request. | `false` | no |
| `enabled` | `boolean` | Enables a buffer before sending data to the client. | `true` | no |
| `num_consumers` | `number` | Number of readers to send batches written to the queue in parallel. | `10` | no |
| `queue_size` | `number` | Maximum number of unwritten batches allowed in the queue at the same time. | `1000` | no |
| `sizer` | `string` | How the queue and batching is measured. | `"requests"` | no |
| `storage` | `capsule(otelcol.Handler)` | Handler from an otelcol.storage component to use to enable a persistent queue mechanism. | | no |

The blocking argument is deprecated in favor of the block_on_overflow argument.

When block_on_overflow is true, the component waits for space in the queue. Otherwise, operations immediately return a retryable error.

When enabled is true, data is first written to an in-memory buffer before sending it to the configured server. Batches sent to the component’s input exported field are added to the buffer as long as the number of unsent batches doesn’t exceed the configured queue_size.

queue_size determines how long an endpoint outage is tolerated. Assuming 100 requests/second, the default queue size 1000 provides about 10 seconds of outage tolerance. To calculate the correct value for queue_size, multiply the average number of outgoing requests per second by the time in seconds that outages are tolerated. A very high value can cause Out Of Memory (OOM) kills.

You can set the sizer argument to one of the following:

  • requests: number of incoming batches of metrics, logs, traces (the most performant option).
  • items: number of the smallest parts of each signal (spans, metric data points, log records).
  • bytes: the size of serialized data in bytes (the least performant option).

The num_consumers argument controls how many readers read from the buffer and send data in parallel. Larger values of num_consumers allow data to be sent more quickly at the expense of increased network traffic.

If an otelcol.storage.* component is configured and provided in the queue’s storage argument, the queue uses the provided storage extension to provide a persistent queue and the queue is no longer stored in memory. Any data persisted will be processed on startup if Alloy is killed or restarted. Refer to the exporterhelper documentation in the OpenTelemetry Collector repository for more details.
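Putting the sizing guidance above into a sketch: at roughly 100 requests/second, a queue_size of 3000 tolerates about 30 seconds of outage. The numbers are illustrative:

```alloy
otelcol.exporter.kafka "buffered" {
  protocol_version = "2.0.0"

  sending_queue {
    enabled       = true
    sizer         = "requests"
    queue_size    = 3000  // ~30s of tolerance at ~100 requests/second
    num_consumers = 20    // more parallel senders at the cost of more network traffic
  }
}
```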

Exported fields

The following fields are exported and can be referenced by other components:

| Name | Type | Description |
| ---- | ---- | ----------- |
| `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. |

input accepts otelcol.Consumer data for any telemetry signal (metrics, logs, or traces).

Supported encodings

otelcol.exporter.kafka supports encoding extensions, as well as the following built-in encodings.

Available for all signals:

  • otlp_proto: Data is encoded as OTLP Protobuf.
  • otlp_json: Data is encoded as OTLP JSON.

Available only for traces:

  • jaeger_proto: The payload is serialized to a single Jaeger proto Span, and keyed by TraceID.
  • jaeger_json: The payload is serialized to a single Jaeger JSON Span using jsonpb, and keyed by TraceID.
  • zipkin_proto: The payload is serialized to Zipkin v2 proto Span.
  • zipkin_json: The payload is serialized to Zipkin v2 JSON Span.

Available only for logs:

  • raw: If the log record body is a byte array, it is sent as is. Otherwise, it is serialized to JSON. Resource and record attributes are discarded.

Component health

otelcol.exporter.kafka is only reported as unhealthy if given an invalid configuration.

Debug information

otelcol.exporter.kafka doesn’t expose any component-specific debug information.

Example

This example forwards telemetry data through a batch processor before finally sending it to Kafka:

```alloy
otelcol.receiver.otlp "default" {
  http {}
  grpc {}

  output {
    metrics = [otelcol.processor.batch.default.input]
    logs    = [otelcol.processor.batch.default.input]
    traces  = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.kafka.default.input]
    logs    = [otelcol.exporter.kafka.default.input]
    traces  = [otelcol.exporter.kafka.default.input]
  }
}

otelcol.exporter.kafka "default" {
  brokers          = ["localhost:9092"]
  protocol_version = "2.0.0"
}
```

Compatible components

otelcol.exporter.kafka has exports that can be consumed by the following components:

Note

Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.