I’d like to clarify the integration mechanisms available for sensu-go.
From reading documentation and Googling around I’ve found the following.
The pipe handler sends data to a handler process via stdin. The documentation doesn’t make this clear, but I think it’s one-shot: that is, it spawns a new process and hands it one event. Is that correct?
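To make concrete what I mean by one-shot, here's a rough sketch of a pipe handler (Python; the `check.metadata.name` field access is my assumption about the event JSON shape, not something I've verified against the schema):

```python
import io
import json

def handle(stream) -> str:
    # sensu-go (as I understand it) writes exactly one event as JSON to the
    # handler's stdin and then closes the pipe -- the one-shot behaviour.
    event = json.load(stream)
    return "handled check " + event["check"]["metadata"]["name"]

# In a real handler the stream would be sys.stdin; simulate it here:
fake_stdin = io.StringIO(json.dumps({"check": {"metadata": {"name": "cpu"}}}))
print(handle(fake_stdin))  # prints "handled check cpu"
```

If that's the model, then every event costs a fork/exec, which is what prompts my question at the end.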
There is a socket handler; again the documentation isn't explicit, but I think it opens a separate TCP connection for every event.
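Here's a sketch of what I think the receiving side of that looks like, under my assumption of one short-lived TCP connection per event (the event payload here is made up):

```python
import json
import socket
import threading

def serve_one(sock):
    # Accept a single connection and read until the peer closes -- matching
    # my assumption that the socket handler opens one connection per event.
    conn, _ = sock.accept()
    data = b""
    while chunk := conn.recv(4096):
        data += chunk
    conn.close()
    return json.loads(data)

# Simulate sensu-go delivering one event over a fresh connection.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

result = {}
t = threading.Thread(target=lambda: result.update(serve_one(server)))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(json.dumps({"check": {"status": 1}}).encode())
client.close()
t.join()
server.close()
print(result)  # {'check': {'status': 1}}
```

If this is right, the connection-per-event cost is much lower than a process-per-event cost, but I'd still like confirmation of the semantics.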
You can run external mutator commands, which also accept (JSON) input on stdin. Again it’s unclear whether these are one-shot or persistent subprocesses.
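For the mutator case, my mental model is "JSON event in on stdin, mutated JSON event out on stdout", something like this sketch (the `labels` field I'm setting is purely illustrative):

```python
import json

def mutate(event: dict) -> dict:
    # Hypothetical mutation: tag the event before it reaches the handler.
    labels = event.setdefault("check", {}).setdefault("metadata", {}).setdefault("labels", {})
    labels["mutated"] = "true"
    return event

# A real mutator would read sys.stdin and print to stdout, e.g.
#   print(json.dumps(mutate(json.load(sys.stdin))))
# Simulated here with an inline event:
print(json.dumps(mutate({"check": {"metadata": {}}})))
```

The open question is whether sensu-go runs that command once per event or keeps it resident and streams events to it.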
I found a blog post suggesting that you could extend sensu with external filter programs using gRPC (on dzone.com, at /articles/migrating-to-sensu-20-the-good-the-bad-and-the-ugly) … but I can’t find anything in the official documentation about this. Is the blog post wrong?
I also found mention of an extension package called “WizardVan” which opens a persistent socket to graphite or opentsdb. I’m guessing that’s sensu core only, and is replaced by the socket handler in sensu-go.
I note that you can write filter expressions in JavaScript, which can use external libraries; that might potentially be an extension point.
Now, I have a couple of use cases in mind, and I’m trying to work out the best way to integrate.
1. Collecting bulk metrics via sensu and writing them to a time-series database like influxdb.
2. Collecting bulk metrics and events via sensu and writing them out to kafka, from where they'd go to other backends. This potentially supersedes (1), since I could do the metric-to-database flow from a kafka reader.
If I want to avoid forking a process for every event, what's the best way to handle these scenarios today? Would it be to use the socket handler and write a long-running process that accepts events and writes them out?
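To illustrate the kind of long-running receiver I have in mind: a resident process that accepts one short-lived connection per event (my assumption about the socket handler above) and batches events for a downstream write (to influxdb or kafka in the real use case; here the "flush" just collects batches). The batch sizes and event payloads are made up for the sketch:

```python
import json
import socket
import threading

def receiver(server, batches, batch_size=2, max_events=4):
    # Long-running receiver: stays resident across events, so no fork per
    # event. Buffers events and "flushes" them downstream in batches.
    buf = []
    for _ in range(max_events):
        conn, _ = server.accept()
        data = b""
        while chunk := conn.recv(4096):
            data += chunk
        conn.close()
        buf.append(json.loads(data))
        if len(buf) >= batch_size:
            batches.append(buf)  # real code would write to influxdb/kafka here
            buf = []
    if buf:
        batches.append(buf)

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(8)
port = server.getsockname()[1]

batches = []
t = threading.Thread(target=receiver, args=(server, batches))
t.start()

for i in range(4):  # simulate four events, one connection each
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(json.dumps({"metric": i}).encode())
    c.close()
t.join()
server.close()
print(batches)  # two batches of two events each
```

If the socket handler really does work connection-per-event, something like this seems like the cheapest integration point available today, but I'd welcome correction.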
Thanks!