Sending check results to influxdb

Howdy!

So I’d like to do the following, but I don’t see a “good way” to do this.

I would like to take the result from every check on every iteration and send the return code (0/1/2) to InfluxDB for graphing and analysis.

This is to allow dashboards that trend the uptime of specific checks over a long duration, in order to produce check uptime reports.

I do not want to use metrics-based collection/sending, as that is already handled via Telegraf; I do not want to double up there.
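For what it's worth, once the 0/1/2 statuses land in a time-series store, the uptime math itself is trivial. A minimal sketch (the sampling interval and outage pattern below are purely illustrative):

```python
# Hedged sketch: turn a series of Sensu check statuses (0=OK, 1=WARNING,
# 2=CRITICAL) into an uptime percentage for a report. The statuses are the
# check exit codes; everything else here is made up for illustration.

def uptime_percent(statuses, ok_values=(0,)):
    """Percentage of samples whose status counts as 'up'."""
    if not statuses:
        return 0.0
    up = sum(1 for s in statuses if s in ok_values)
    return 100.0 * up / len(statuses)

# One day of a check running every 60s, with two 5-minute CRITICAL outages:
samples = [0] * 1440
samples[100:105] = [2] * 5
samples[900:905] = [2] * 5
print(round(uptime_percent(samples), 2))  # 99.31
```

Whether WARNING (1) counts as "up" is a reporting decision; the `ok_values` parameter leaves that choice open.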

There seems to be a lack of documentation for doing something like this.

Does anyone have any pointers or places to start?

Thank you!

Chris

Hi Chris,

Comments inline.

Howdy!

So I'd like to do the following, but I don't see a "good way" to do this.

I would like to take the result from every check on every iteration and send the return code (0/1/2) to InfluxDB for graphing and analysis.

This is to allow dashboards that trend the uptime of specific checks over a long duration, in order to produce check uptime reports.

I do _not_ want to use metrics-based collection/sending, as that is already handled via Telegraf; I do not want to double up there.

I too started out with InfluxDB and later moved to Filebeat + Elasticsearch based on inputs from this list.

There seems to be a lack of documentation for doing something like this.

Does anyone have any pointers or places to start?

My setup is as follows:

- Run Filebeat on the sensu-server logs to feed Logstash. Since the sensu-server logs are
already JSON, no additional parsing is required:

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log
  enabled: true
  paths:
    - /var/log/sensu/sensu-server.log
  tags: ["json", "sensu"]
  json.keys_under_root: true
  json.add_error_key: true

- In Logstash, I drop everything other than “processing event” messages, which are sent to Elasticsearch:

filter {

  if "sensu" in [tags] {
    if [message] == "processing event" {
      date {
        match => [ "[event][timestamp]", "UNIX"]
        remove_field => [
          "offset",
          "prospector",
          "source",
          "timestamp",
          "[event][client][message]",
          "[event][client][redact]",
          "[event][client][socket]",
          "[event][client][subscriptions]",
          "[event][timestamp]",
          "[event][client][version]"
        ]
        add_tag => [ "ts_updated" ]
      }

    } else {
      drop { }
    }
  }

}

- Once the events are in ES, write custom Grafana dashboards or use Kibana visualisations. I prefer
Grafana as I can use mixed data sources (ELK + Prometheus) for my Sensu analytics dashboard.
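The keep/drop decision in the Logstash filter above can be sanity-checked outside Logstash with a few lines of Python. The sample log lines below are hypothetical, but follow the JSON shape the filter expects (a top-level "message" field plus a nested "event" object):

```python
import json

# Hedged sketch: replicate the Logstash filter's keep/drop decision on
# sensu-server JSON log lines. Only lines whose "message" field is exactly
# "processing event" survive; everything else is dropped.

def keep_processing_events(lines):
    kept = []
    for line in lines:
        try:
            doc = json.loads(line)
        except ValueError:
            continue  # not valid JSON, drop it
        if doc.get("message") == "processing event":
            kept.append(doc)
    return kept

log_lines = [
    '{"message": "processing event", "event": {"check": {"name": "check_disk", "status": 2}}}',
    '{"message": "publishing check request"}',
    'not json at all',
]
events = keep_processing_events(log_lines)
print(len(events))  # 1
```

This mirrors only the drop logic; the `date` filter's timestamp rewriting and field pruning are left to Logstash itself.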

Regards.
@shankerbalan


On 06-Mar-2018, at 05:14, cmeisinger@thecitybase.com wrote:

Chris,

I believe what you are looking for is built into the InfluxDB/Sensu extension. Make sure you’re loading influxdb2 from RubyGems, since there’s a similar package which doesn’t have what you’re looking for. This is the configuration for loading the gem; to get what you want, you’ll need to explicitly include the sensu-extensions-history part.

In InfluxDB, you should see the checks come through as sensu.checks.$check_name. If you want the $check_name as a tag rather than part of the measurement name, check out the README regarding enhanced_history.
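For reference, a point like the one that naming scheme produces can be built by hand with InfluxDB's line protocol. The sensu.checks.$check_name measurement naming follows what's described above, but the "host" tag and "status" field names here are my own guesses, not necessarily the extension's exact schema:

```python
# Hedged sketch: build an InfluxDB line-protocol point for one check result.
# Measurement naming (sensu.checks.<check_name>) follows the convention
# described above; the "host" tag and "status" field names are assumptions.

def check_status_line(check_name, host, status, timestamp_ns):
    measurement = "sensu.checks." + check_name
    # Trailing "i" marks an integer field in line protocol.
    return "{m},host={h} status={s}i {t}".format(
        m=measurement, h=host, s=status, t=timestamp_ns)

line = check_status_line("check_disk", "web01", 2, 1520294400000000000)
print(line)
# sensu.checks.check_disk,host=web01 status=2i 1520294400000000000
```

A string like this is what you would see if you POSTed points yourself to InfluxDB's /write endpoint; the extension does the equivalent for you on every event.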

Hope that works out for you. Let me know if you have any questions,

Cheers


On Mon, Mar 5, 2018 at 6:44 PM, cmeisinger@thecitybase.com wrote:


Steven Viola