Hi Chris,
Comments inline.
Howdy!
So I'd like to do the following, but I don't see a "good way" to do this.
I would like to take the result of every check on every iteration and send the returned status (0/1/2) to InfluxDB for graphing and analysis.
This is to allow dashboards that trend the uptime of specific checks over long time ranges, for check-uptime reporting.
I do _not_ want to do the metrics based collection/sending as that is handled via telegraf already. I do not want to double-up there.
I too started out with InfluxDB and later moved to filebeat+Elasticsearch based on inputs from this list.
There seems to be a lack of documentation for doing something like this.
Does anyone have any pointers or places to start?
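For the status-to-InfluxDB part specifically, one option is a small pipe handler that turns the check status into an InfluxDB line-protocol point. A minimal sketch below; the measurement and tag names ("check_status", "client", "check") are my own invention, not anything Sensu-defined, so rename to fit your schema:

```python
def status_to_line(event):
    """Build an InfluxDB line-protocol point from a Sensu event dict.

    Assumes the 1.x event layout (client/check keys); the measurement
    and tag names here are illustrative only.
    """
    client = event["client"]["name"]
    check = event["check"]["name"]
    status = event["check"]["status"]        # 0=OK, 1=WARNING, 2=CRITICAL
    ts = event["check"]["executed"] * 10**9  # line-protocol timestamps are in ns
    return f"check_status,client={client},check={check} status={status}i {ts}"
```

A Sensu 1.x pipe handler would read the event JSON from stdin, call this, and POST the resulting line to InfluxDB's write endpoint (omitted here).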
My setup is as follows:
- Run Filebeat on the sensu-server logs to feed Logstash. Since the sensu-server logs are
already in JSON, no additional parsing is required:
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- type: log
  enabled: true
  paths:
    - /var/log/sensu/sensu-server.log
  tags: ["json", "sensu"]
  json.keys_under_root: true
  json.add_error_key: true
- In Logstash, I drop everything other than “processing event” entries, which are sent on to Elasticsearch:
filter {
  if "sensu" in [tags] {
    if [message] == "processing event" {
      date {
        match => [ "[event][timestamp]", "UNIX" ]
        remove_field => [
          "offset",
          "prospector",
          "source",
          "timestamp",
          "[event][client][message]",
          "[event][client][redact]",
          "[event][client][socket]",
          "[event][client][subscriptions]",
          "[event][timestamp]",
          "[event][client][version]"
        ]
        add_tag => [ "ts_updated" ]
      }
    } else {
      drop { }
    }
  }
}
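Once the events are indexed, a per-check status breakdown is a nested terms aggregation. A sketch of the query body follows; the field names (`check.name`, `check.status`) are assumed, so match them against what your Logstash pipeline actually emits:

```python
import json

# Count events per check, broken down by status, over the last 30 days.
# Field names are assumptions -- check your index mapping.
uptime_query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-30d"}}},
    "aggs": {
        "per_check": {
            "terms": {"field": "check.name"},
            "aggs": {
                "per_status": {"terms": {"field": "check.status"}}
            },
        }
    },
}
print(json.dumps(uptime_query))
```

The same aggregation can be built directly in a Kibana visualisation or a Grafana Elasticsearch panel instead of raw JSON.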
- Once events are in ES, write custom Grafana dashboards or use Kibana visualisation. I prefer
Grafana as I can use mixed data sources (ELK + Prometheus) for my Sensu Analytics dashboard.
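The uptime figure itself then reduces to the share of samples with status 0. Grafana can do this with a filtered count over a total count; computed client-side it is just:

```python
def uptime_percent(statuses):
    """Percentage of samples whose Sensu status is 0 (OK)."""
    if not statuses:
        return 0.0
    ok = sum(1 for s in statuses if s == 0)
    return 100.0 * ok / len(statuses)

print(uptime_percent([0, 0, 1, 0, 2, 0, 0, 0]))  # 6 of 8 OK -> 75.0
```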
Regards.
@shankerbalan
···
On 06-Mar-2018, at 05:14, cmeisinger@thecitybase.com wrote: