Sensu Client On CoreOS

Hi,

Is anyone running the Sensu client on CoreOS? Since it’s a container-only environment, are there any gotchas I should be aware of?

My plan is to build a Sensu client container on Alpine Linux and burn in the client.json. Since CoreOS is actually being used for running Kubernetes, maybe a DaemonSet is a better option?

Regards.
@shankerbalan

RE: the base container - We built our own, as it seemed simpler than trying to work with the other ones we saw out in the wild. It’s public here - https://github.com/pantheon-systems/docker-sensu - but it is minimal and may not work at all for your environment. It’s based on debian:jessie. I did not try Alpine; I’d expect the usual Alpine gotchas, such as musl vs. GNU libc.

RE: daemonset - For me this would depend on what we were monitoring. If we needed to monitor the state of the CoreOS node itself (i.e. “underneath Kubernetes”), then the DaemonSet seems like the right approach, so that you end up with one pod per node. I imagine you’ll end up having to mount various paths from the host if you intend to monitor host state. Network namespacing may become an issue too, depending on what check scripts you intend to run.
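A minimal DaemonSet along those lines might look roughly like this. All names, the image, and the mounts are hypothetical; adjust them for whatever your check scripts actually need:

```yaml
# Hypothetical sketch: one sensu-client pod per node for host-level checks.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sensu-client
spec:
  selector:
    matchLabels:
      app: sensu-client
  template:
    metadata:
      labels:
        app: sensu-client
    spec:
      hostNetwork: true   # lets check scripts see host interfaces/ports
      hostPID: true       # lets check scripts see host processes
      containers:
      - name: sensu-client
        image: example/sensu-client:latest   # your own image
        volumeMounts:
        - name: host-root
          mountPath: /host
          readOnly: true
          # On newer Kubernetes, mountPropagation: HostToContainer helps
          # with the "new mounts are invisible" issue described in the
          # article quoted later in this thread.
      volumes:
      - name: host-root
        hostPath:
          path: /
```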

There is an interesting note in this article that you may run into if you intend to monitor mounts from sensu in this setup: https://blog.argoproj.io/volume-monitoring-in-kubernetes-with-prometheus-3a185e4c4035

quoted:

> After setting up Prometheus and node-exporters, we are able to see metrics
> for any existing volumes mounted on the various Kubernetes minion nodes.
> However, we quickly found out that any new volumes are not showing up in
> the Prometheus monitoring metrics. After some investigation, we found that
> when a container starts, it inherits a cloned copy of the host’s kernel mount
> namespace, but this copy is not updated as new volumes are mounted.
> Thus, any new volumes mounted after node-exporter starts will not be visible to node-exporter.

But if you just want to provide some sensu services “inside Kubernetes”, such as letting other apps running in Kube POST /results to sensu, I’d suggest a simple Deployment+Service approach. This is what we do at the moment. It provides a couple of services:

  • Exposes an internal clusterIP service for other applications to POST alerts. (http://sensu-client.production, since we have it running in the production namespace)

We also provide a dummy http://sensu-client service in developer’s sandbox namespaces so they can test POSTing sensu events. It simply accepts anything POST’d to its /results URL and logs to stdout. This was easier than standing up a full set of sensu services in each dev namespace. This isn’t public but we could open it up as it’s just a few lines of Go.

This internal sensu-client service works for our uses (so far) but we haven’t done a lot with it yet either.
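As a rough sketch of the Service half of that shape (namespace, labels, and port are assumptions based on the description above, not Pantheon’s actual manifests):

```yaml
# Hypothetical: a clusterIP Service in front of a sensu-client Deployment,
# reachable in-cluster as http://sensu-client.production:3031.
apiVersion: v1
kind: Service
metadata:
  name: sensu-client
  namespace: production
spec:
  selector:
    app: sensu-client
  ports:
  - name: results
    port: 3031
    targetPort: 3031
```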

···

On Wed, Dec 20, 2017 at 4:12 AM, mail@shankerbalan.net wrote:


Hi Joe,

Thanks for taking the time to reply. Really appreciate it.

My comments inline…

RE: the base container - We built our own as it seemed simpler than trying to work with the other ones we saw out in the wild. It’s public here - https://github.com/pantheon-systems/docker-sensu - but it is minimal and may not work at all for your environment. It’s based on debian:jessie. I did not try alpine. I’d expect the usual alpine gotchas such as musl -vs- gnu libc

I went through the Dockerfile; this should work well for my use case as well.

RE: daemonset - For me this would depend on what we were monitoring. If we needed to monitor the state of the CoreOS node itself (ie: “underneath Kubernetes”) then the DaemonSet seems like the right approach so that you end up with one pod per node. I imagine you’ll end up having to mount various paths from the hosts if you intend to monitor host state. Network namespacing may become an issue depending on what check scripts you intend to run too.

My current needs are to monitor CoreOS host state only - disk, mem, cpu, CRIT events, etc. I’ll deploy as a DaemonSet and then course-correct as needed.

There is an interesting note in this article that you may run into if you intend to monitor mounts from sensu in this setup: https://blog.argoproj.io/volume-monitoring-in-kubernetes-with-prometheus-3a185e4c4035

quoted:

> After setting up Prometheus and node-exporters, we are able to see metrics
> for any existing volumes mounted on the various Kubernetes minion nodes.
> However, we quickly found out that any new volumes are not showing up in
> the Prometheus monitoring metrics. After some investigation, we found that
> when a container starts, it inherits a cloned copy of the host’s kernel mount
> namespace, but this copy is not updated as new volumes are mounted.
> Thus, any new volumes mounted after node-exporter starts will not be visible to node-exporter.

Interesting problem. I need to check how prometheus-operator handles it; I do have node-exporter running as a DaemonSet.

But if you just want to provide some sensu services “inside Kubernetes” such as to other apps running in Kube to POST /results to sensu, I may suggest a simple Deployment+Service approach. This is what we do at the moment. It provides a couple services:

  • Exposes an internal clusterIP service for other applications to POST alerts. (http://sensu-client.production, since we have it running in the production namespace)

We also provide a dummy http://sensu-client service in developer’s sandbox namespaces so they can test POSTing sensu events. It simply accepts anything POST’d to its /results URL and logs to stdout. This was easier than standing up a full set of sensu services in each dev namespace. This isn’t public but we could open it up as it’s just a few lines of Go.

This internal sensu-client service works for our uses (so far) but we haven’t done a lot with it yet either.

I am in conversation with my engineering team about pushing application exceptions to the sensu endpoint, so I would also need to support the clusterIP service for POST actions.

Your reply has been very helpful. I’ll share updates once I get things running.

Thanks.

@shankerbalan

···

On 21-Dec-2017, at 5:16 AM, Joe Miller joeym@joeym.net wrote:

Awesome! I’d love to hear how your team makes use of Sensu with Kubernetes. We are using GKE to manage the cluster itself, so we don’t have as much need to monitor host state, but we’re always looking for ways to provide sensu services to apps running inside the cluster.

···

On Thu, Dec 21, 2017 at 9:33 PM, mail@shankerbalan.net wrote:


Hi Joe,

Comments inline.

Awesome! I’d love to hear how your team makes use of Sensu with kubernetes. We are using GKE to manage the cluster itself so don’t have as much need to monitor the host state but we’re always looking for ways to provide sensu services to apps running inside the cluster.

We are currently on bare metal/VMs but are making an organisational shift from colo to AWS Mumbai + EKS (EKS has yet to launch in AWS Mumbai). Until then, we need to manage things the hard way.

I currently have a DaemonSet deployed and basic checks are in place. I used your Dockerfile with slight modifications to add monitoring-plugins and sensu-plugins to the sensu-client Docker image.

(1) Templating for client.json - I need to modify subscriptions, FQDN, environment, etc. based on node labels. Kubernetes ConfigMaps do not seem to allow variable substitution, so I guess I need to use an initContainer to generate the config files.

(2) I am yet to figure out how to expose the host’s mounts inside the sensu-client pods.

(3) I have exposed the HTTP socket as a service, so POSTing to http://sensu:3031 works fine.
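For anyone following along, those POSTs look something like this. The payload follows the Sensu 1.x check-result format (name, output, status); the source and check names here are made up, and the service name sensu:3031 is the one described above:

```shell
# Sketch: push a check result into Sensu via the client HTTP socket.
# status: 0 = OK, 1 = WARNING, 2 = CRITICAL.
PAYLOAD='{"source":"my-app","name":"app_exception","output":"unhandled exception in worker","status":2}'

# Inside the cluster this would be POSTed to the service, e.g.:
#   curl -s -X POST -H 'Content-Type: application/json' \
#     -d "$PAYLOAD" http://sensu:3031/results
echo "$PAYLOAD"
```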

So far so good. Thanks Joe.

Regards.

@shankerbalan

···

On 22-Dec-2017, at 9:08 PM, Joe Miller joeym@joeym.net wrote:


Awesome!

(Sorry for the delay, I’m traveling for the holidays)

RE: the labels – an initContainer may work. I admittedly haven’t made much use of these yet. In most cases I have actually been able to use a bit of shell to wrap the startup of the binary. Here is a simple example wrapping the official hashi vault container to set the vault cluster name to the namespace name. In this example the vault.hcl is actually a configmap too, and it is modified at run-time just before exec’ing /bin/vault:

spec:
  containers:
  - name: vault
    image: vault:0.7.2
    imagePullPolicy: "IfNotPresent"
    command:
    - "/bin/sh"
    - "-ec"
    - |
      # inject namespace into vault.hcl config file at run-time to set the cluster-name
      sed -i -e "/^cluster_name.*/c\cluster_name = \"vault-${KUBE_NAMESPACE}\"" /vault/config/vault.hcl
      # start vault
      exec /bin/vault server -config /vault/config/vault.hcl
    env:
    - name: KUBE_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
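For the client.json templating question, an initContainer version of the same trick might look roughly like this. Everything here is a hypothetical sketch (image, template names, placeholder convention), and note that the Downward API exposes the node name but not node labels, so label-based values would still need another source such as the API server:

```yaml
# Hypothetical: render client.json from a ConfigMap template at pod start.
initContainers:
- name: render-config
  image: busybox:1.28
  command:
  - sh
  - -ec
  - |
    # substitute the node name into a client.json template from a ConfigMap
    sed -e "s/__NODE_NAME__/${NODE_NAME}/g" /templates/client.json.tpl \
      > /etc/sensu/conf.d/client.json
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  volumeMounts:
  - name: templates          # ConfigMap holding client.json.tpl
    mountPath: /templates
  - name: sensu-config       # emptyDir shared with the sensu-client container
    mountPath: /etc/sensu/conf.d
```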

···

On Fri, Dec 29, 2017 at 7:10 AM, mail@shankerbalan.net wrote:
