events history

Hi Guys,

I installed Sensu with Uchiwa two weeks ago, for the first time.

I am trying to look at old events (what happened last week) and I don't see any option to get them.

How do you see old events in Sensu?

I found only this API, http://localhost:4567/clients/client_name/history, but there I see only the last execution time, and I am looking for the events that triggered warning or critical thresholds.

Thanks,

Yossi

Sensu doesn't have an internal capability to store long-term data like that; it is more of an event processor.

I'm not aware of an existing Sensu extension to store all of the historical event data somewhere (like a sensudb?). But I sure wish someone would make one! :slight_smile:

The only other thing you could use out of the box is the Sensu logs.


We use a modified logstash handler (https://github.com/sensu-plugins/sensu-plugins-logstash/blob/master/bin/handler-logstash.rb) to write events to our ELK stack, that way we can see event history.

At one point I experimented with writing a small Sinatra app to allow us to embed the logs from elasticsearch in Uchiwa, here’s a screenshot: http://a.disquscdn.com/uploads/mediaembed/images/1439/1041/original.jpg?w=800&h
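For readers unfamiliar with how such a handler works, here is a minimal sketch — not the actual sensu-plugins-logstash code; the Logstash host, port, and field selection are assumptions. Sensu pipes the event to a pipe handler as JSON on STDIN, and the handler forwards a trimmed document to a Logstash `tcp` input with a `json_lines` codec:

```ruby
#!/usr/bin/env ruby
# Sketch of a Sensu -> Logstash pipe handler. The real plugin is
# sensu-plugins-logstash; host, port, and fields here are assumptions.
require 'json'
require 'socket'
require 'time'

# Reduce a Sensu event to the fields worth searching on later.
def format_event(event)
  {
    'client'      => event.dig('client', 'name'),
    'check'       => event.dig('check', 'name'),
    'status'      => event.dig('check', 'status'),
    'output'      => event.dig('check', 'output'),
    'occurrences' => event['occurrences'],
    '@timestamp'  => Time.now.utc.iso8601
  }
end

# Sensu writes the event JSON to the handler's STDIN; ship one JSON
# document per line to a Logstash tcp/json_lines input.
def handle(io = STDIN, host = 'logstash.example.com', port = 5000)
  doc = format_event(JSON.parse(io.read))
  TCPSocket.open(host, port) { |s| s.puts(doc.to_json) }
end

# In a real handler script you would simply call: handle
```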


Thanks, that looks like a good solution for what I was looking for.

Do you have any suggestions for maps, like NagVis (maps for Nagios)?


Does anyone have a new solution for this problem now? I am using InfluxDB + Grafana for metrics and I would like to store events as well. Without event history it would be very difficult to triage anything.


You can have Sensu send events to an Elastic stack (Logstash, Elasticsearch, Kibana, etc.) - that should be possible with the logstash handler, as far as I remember.

However, that way you will only get entries for warnings and alerts, but not when things turn okay again. To get those you'd have to set the type of the check to metric; metric checks always trigger their handlers, even on exit code 0 of the check script.

I wouldn't want to do that, though - without having tested it, I would think this will lead to a huge performance loss at scale, given the amount of extra handling required.

Keep in mind that this advice is theoretical and I do not have data to support my theory.
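To illustrate the suggestion above, a check definition along these lines would trigger its handlers on every result, including exit code 0 (the check name, command, and handler names here are only illustrative; the relevant line is `"type": "metric"`):

```json
{
  "checks": {
    "disk_usage": {
      "command": "check-disk-usage.rb -w 80 -c 90",
      "subscribers": ["base"],
      "interval": 60,
      "type": "metric",
      "handlers": ["logstash"]
    }
  }
}
```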


Comments inline.


I use Elasticsearch with Grafana + Kibana for visualisation. One issue is that I am yet to find a ready-made Sensu dashboard for Grafana or Kibana.

Haven't had the time to work on a fully featured dashboard yet, so for now I am happy with the basic metrics + analytics that I have in place with Kibana/Grafana.

Regards.

@shankerbalan

Thanks for your replies, Alexander and Shanker.

I am using InfluxDB + Grafana for metrics and I am avoiding introducing one more stack (ELK or ELG); that would make my setup more complex.
InfluxDB + Grafana is doing well for metrics, but I want to have event history too, something like pnp4nagios provides for Nagios.

Regards
Anirudh


I collect all Sensu server and client logs using Filebeat. They are parsed with the Logstash json filter and sent to Graylog. The beauty of this setup is that you get all OK and failure events without additional server load. This even works if the Sensu server is offline, because the client still logs everything...
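A minimal Logstash pipeline along those lines might look like the following sketch (the `type` value matches how Filebeat could tag these logs, and the Graylog host is a placeholder assumption):

```
filter {
  if [type] == "sensu" {
    # Every line in /var/log/sensu/*.log is already a JSON document.
    json {
      source => "message"
    }
  }
}

output {
  if [type] == "sensu" {
    # Forward parsed events to Graylog via GELF.
    gelf {
      host => "graylog.example.com"
    }
  }
}
```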

Hi Philipp,


That's indeed a neat idea. I'll give it a shot myself.

Thanks.
@shankerbalan


So I started implementing Filebeat with ELK and I ran into an issue: the Sensu event
timestamp has microsecond precision, while Elasticsearch only(?) seems to handle
millisecond resolution.

shanker@ib-mon1:~$ tail -1 /var/log/sensu/sensu-server.log|jq .timestamp
"2017-04-05T22:07:33.868684+0530"

Replacing @timestamp with the Sensu event timestamp causes updates to fail as a result.

Curious to know how others have handled the situation.

Regards.
@shankerbalan
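For what it's worth, the microsecond timestamp above can be reduced to the millisecond precision Elasticsearch expects. A small sketch with Ruby's stdlib DateTime, using the sample value from the log:

```ruby
require 'date'

# Sensu logs timestamps with microsecond precision, e.g.:
ts = "2017-04-05T22:07:33.868684+0530"

# Parse and re-emit with three fractional digits (milliseconds),
# a resolution Elasticsearch date fields can ingest.
ms_ts = DateTime.parse(ts).iso8601(3)
puts ms_ts  # => "2017-04-05T22:07:33.868+05:30"
```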


I send logs from filebeat to logstash. Logstash has a date filter which can parse the timestamp. I can share my configuration if you have problems with logstash.

Hi Philipp,


Thanks for responding.

I put the below Logstash 5.3 config in place over the weekend and it seems to be working well. It
would be great if you could share your config as well.

filter {
  if [type] == "sensu" {
    ruby {
      init => 'require "date"'
      code => '
        t = event.get("timestamp");
        event.set("ts_iso8601", DateTime.parse(t).iso8601(3) );
      '
    }
    date {
      match => ["ts_iso8601" , "ISO8601" ]
      target => "@timestamp"
      add_tag => [ "ts_iso8601_applied" ]
      remove_field => [ "ts_iso8601", "timestamp" ]
    }
    mutate {
      remove_field => [ "event.client.redact" ]
    }
  }
}

And this is what the event looks like now.

{
         "offset" => 8267507,
          "level" => "info",
    "subscribers" => [
        [0] "admins",
        [1] "os:Ubuntu"
    ],
     "input_type" => "log",
         "source" => "/var/log/sensu/sensu-server.log",
        "message" => "publishing check request",
           "type" => "sensu",
           "tags" => [
        [0] "beats",
        [1] "beats_input_codec_plain_applied",
        [2] "ts_iso8601_applied"
    ],
     "@timestamp" => 2017-04-10T05:25:29.397Z,
        "payload" => {
                "occurrences" => 5,
        "high_flap_threshold" => 60,
                 "standalone" => false,
                    "refresh" => 3600,
                     "handle" => true,
                        "ttl" => 300,
                    "timeout" => 15,
                    "command" => "/usr/lib/nagios/plugins/check_procs -w 3 -c 5 -s Z",
                  "aggregate" => false,
                   "handlers" => [
            [0] "mailer",
            [1] "logstash"
        ],
                       "name" => "zombie_procs",
                     "issued" => 1491801929,
         "low_flap_threshold" => 20
    },
       "@version" => "1",
           "beat" => {
        "hostname" => "sensu-nmv-mon1",
            "name" => "sensu-nmv-mon1",
         "version" => "5.3.0"
    },
           "host" => "sensu-nmv-mon1"
}

Thanks again.
@shankerbalan


Ok this is my logstash sensu filter:

https://gist.github.com/runningman84/d8e7c254b6c7ccf3094a556029ff346e


Hi Philipp,


On 10-Apr-2017, at 1:03 PM, Philipp H <hellmi@gmail.com> wrote:

Ok this is my logstash sensu filter:

https://gist.github.com/runningman84/d8e7c254b6c7ccf3094a556029ff346e

Your filter is awesome and I am learning a lot from it. Thank you very much
for sharing it publicly.

Regards.
@shankerbalan