Sumo Logic Exporter

Status
Stability: beta (metrics, logs, traces)
Distributions: contrib
Code Owners: @rnishtala-sumo
Emeritus: @aboguszewski-sumo, @kasia-kujawa, @mat-rumian, @sumo-drosiek

Migration to new architecture

This exporter is currently undergoing major changes.

For some time we have been developing the new Sumo Logic exporter, and we are now in the process of moving it into this repository.

The following options are no longer supported:

  • metric_format: {carbon2, graphite}
  • metadata_attributes: [<regex>]
  • graphite_template: <template>
  • source_category: <template>
  • source_name: <template>
  • source_host: <template>

After the new exporter is moved to this repository:

  • carbon2 and graphite are no longer supported; use the prometheus or otlp format instead

  • all resource-level attributes are treated as metadata_attributes, so this option is no longer supported. You can use the Group by Attributes processor to move attributes from record level to resource level. For example:

    # before switch to new collector
    exporters:
      sumologic:
        metadata_attributes:
          - my_attribute
    # after switch to new collector
    processors:
      groupbyattrs:
        keys:
          - my_attribute
  • Source templates (source_category, source_name and source_host) are no longer supported. We recommend using the Transform processor. For example (see also the combined pipeline sketch after this list):

    # before switch to new collector
    exporters:
      sumologic:
        source_category: "%{foo}/constant/%{bar}"
    # after switch to new collector
    processors:
      transform:
        log_statements:
          - context: log
            statements:
              # set default values to unknown
              - set(attributes["foo"], "unknown") where attributes["foo"] == nil
              - set(attributes["bar"], "unknown") where attributes["bar"] == nil
              # set _sourceCategory as "%{foo}/constant/%{bar}"
              - set(resource.attributes["_sourceCategory"], Concat([attributes["foo"], "/constant/", attributes["bar"]], ""))
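
For completeness, here is a minimal sketch of how these two processors can be wired together with the sumologic exporter in a pipeline. The otlp receiver and the attribute keys (my_attribute, foo, bar) are illustrative placeholders, not values from this document:

# minimal pipeline sketch (assumed otlp receiver and placeholder attribute keys)
receivers:
  otlp:
    protocols:
      grpc:

processors:
  groupbyattrs:
    keys:
      - my_attribute
  transform:
    log_statements:
      - context: log
        statements:
          - set(attributes["foo"], "unknown") where attributes["foo"] == nil
          - set(attributes["bar"], "unknown") where attributes["bar"] == nil
          - set(resource.attributes["_sourceCategory"], Concat([attributes["foo"], "/constant/", attributes["bar"]], ""))

exporters:
  sumologic:
    endpoint: <HTTP_Source_URL>

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [groupbyattrs, transform]
      exporters: [sumologic]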

Configuration

This exporter supports sending logs and metrics data to Sumo Logic. Traces are exported using the native otlphttp exporter, as described here.
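
For traces specifically, a minimal sketch of an otlphttp exporter configuration might look as follows; the endpoint placeholder and the otlp receiver are assumptions, not values from this document:

exporters:
  otlphttp:
    # placeholder for your Sumo Logic OTLP/HTTP source URL
    endpoint: <OTLP_HTTP_Source_URL>

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]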

Configuration is specified via YAML in the following structure:

exporters:
  # ...
  sumologic:
    # unique URL generated for your HTTP Source, this is the address to send data to
    endpoint: <HTTP_Source_URL>
    # Compression encoding format, empty string means no compression, default = gzip
    compression: {gzip, deflate, zstd, ""}
    # max HTTP request body size in bytes before compression (if applied),
    # default = 1_048_576 (1MB)
    max_request_body_size: <max_request_body_size>

    # format to use when sending logs to Sumo Logic, default = otlp,
    # see Sumo Logic documentation for details regarding log formats:
    # https://help.sumologic.com/docs/send-data/opentelemetry-collector/data-source-configurations/mapping-records-resources/
    log_format: {otlp, json, text}

    # format to use when sending metrics to Sumo Logic, default = otlp,
    # NOTE: only `otlp` is supported when used with sumologicextension
    metric_format: {otlp, prometheus}

    # Decompose OTLP Histograms into individual metrics, similar to how they're represented in Prometheus format.
    # The Sumo OTLP source currently doesn't support Histograms, and they are quietly dropped. This option produces
    # metrics similar to when metric_format is set to prometheus.
    # default = false
    decompose_otlp_histograms: {true, false}

    # timeout is the timeout for every attempt to send data to the backend,
    # maximum connection timeout is 55s, default = 30s
    timeout: <timeout>

    # defines whether sticky session support is enabled.
    # default=false
    sticky_session_enabled: {true, false}

    # for the queueing and retry configuration described below, please refer to:
    # https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/exporterhelper/README.md#configuration

    retry_on_failure:
      # default = true
      enabled: {true, false}
      # time to wait after the first failure before retrying;
      # ignored if enabled is false, default = 5s
      initial_interval: <initial_interval>
      # is the upper bound on backoff; ignored if enabled is false, default = 30s
      max_interval: <max_interval>
      # is the maximum amount of time spent trying to send a batch;
      # ignored if enabled is false, default = 120s
      max_elapsed_time: <max_elapsed_time>

    sending_queue:
      # default = false
      enabled: {true, false}
      # number of consumers that dequeue batches; ignored if enabled is false,
      # default = 10
      num_consumers: <num_consumers>
      # when set, enables persistence and uses the component specified as a storage extension for the persistent queue
      # make sure to configure and add a `file_storage` extension in `service.extensions`.
      # default = None
      storage: <storage_name>
      # maximum number of batches kept in memory before dropping data;
      # ignored if enabled is false, default = 1000
      #
      # users should calculate this as num_seconds * requests_per_second where:
      # num_seconds is the number of seconds to buffer in case of a backend outage,
      # requests_per_second is the average number of requests per second.
      queue_size: <queue_size>
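
To illustrate how these options fit together, here is a minimal end-to-end sketch with a persistent sending queue. The otlp receiver, the file_storage directory, and the specific option values are illustrative assumptions rather than recommendations:

extensions:
  # required when sending_queue.storage is set to file_storage
  file_storage:
    directory: /var/lib/otelcol/file_storage  # placeholder path

receivers:
  otlp:
    protocols:
      grpc:

exporters:
  sumologic:
    endpoint: <HTTP_Source_URL>
    compression: gzip
    log_format: otlp
    metric_format: otlp
    retry_on_failure:
      enabled: true
    sending_queue:
      enabled: true
      storage: file_storage

service:
  extensions: [file_storage]
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [sumologic]
    metrics:
      receivers: [otlp]
      exporters: [sumologic]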

Source Templates

Source templates are no longer supported. Please follow the Migration to new architecture section above.