Hi Joe,
Thanks for taking the time to reply. Really appreciate it.
My comments inline…
RE: the base container - We built our own as it seemed simpler than working with the other ones we saw out in the wild. It’s public here - https://github.com/pantheon-systems/docker-sensu - but it is minimal and may not work at all for your environment. It’s based on debian:jessie. I did not try alpine; I’d expect the usual alpine gotchas such as musl vs. glibc.
I went through the Dockerfile; this should work well for my use case as well.
RE: daemonset - For me this would depend on what we were monitoring. If we needed to monitor the state of the CoreOS node itself (i.e., “underneath Kubernetes”), then the DaemonSet seems like the right approach so that you end up with one pod per node. I imagine you’ll end up having to mount various paths from the hosts if you intend to monitor host state. Network namespacing may become an issue too, depending on what check scripts you intend to run.
My current needs are to monitor CoreOS host state only - disk, memory, CPU, CRIT events, etc. I’ll deploy as a DaemonSet and then course-correct as needed.
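To make the mounting concrete, here is a rough sketch of what such a DaemonSet could look like. This is a hypothetical manifest, not a tested one: the names, labels, image, and mount paths are all my assumptions and would need adapting to your cluster.

```yaml
# Hypothetical DaemonSet running a sensu-client pod on every node, with
# host paths mounted read-only so check scripts can see node state.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sensu-client
spec:
  selector:
    matchLabels:
      app: sensu-client
  template:
    metadata:
      labels:
        app: sensu-client
    spec:
      hostNetwork: true          # share the node's network namespace for checks
      containers:
      - name: sensu-client
        image: your-registry/sensu-client:latest  # assumption; use your own build
        volumeMounts:
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: rootfs
          mountPath: /host/root
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: rootfs
        hostPath:
          path: /
```

Check scripts inside the pod would then read `/host/proc` and `/host/root` instead of the container’s own filesystem.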
There is an interesting note in this article that you may run into if you intend to monitor mounts from sensu in this setup: https://blog.argoproj.io/volume-monitoring-in-kubernetes-with-prometheus-3a185e4c4035
quoted:
> After setting up Prometheus and node-exporters, we are able to see metrics
> for any existing volumes mounted on the various Kubernetes minion nodes.
> However, we quickly found out that any new volumes are not showing up in
> the Prometheus monitoring metrics. After some investigation, we found that
> when a container starts, it inherits a cloned copy of the host’s kernel mount
> namespace, but this copy is not updated as new volumes are mounted. Thus,
> any new volumes mounted after node-exporter starts will not be visible to
> node-exporter.
Interesting problem. I need to check how prometheus-operator handles it. I do have node-exporter running as a DaemonSet.
But if you just want to provide some sensu services “inside Kubernetes” - such as letting other apps running in Kube POST /results to sensu - I’d suggest a simple Deployment+Service approach. This is what we do at the moment. It provides a couple of services:
- Exposes an internal clusterIP service for other applications to POST alerts. (http://sensu-client.production, since we have it running in the production namespace)
We also provide a dummy http://sensu-client service in developer’s sandbox namespaces so they can test POSTing sensu events. It simply accepts anything POST’d to its /results URL and logs to stdout. This was easier than standing up a full set of sensu services in each dev namespace. This isn’t public but we could open it up as it’s just a few lines of Go.
This internal sensu-client service works for our uses (so far) but we haven’t done a lot with it yet either.
I am in conversation with my engineering team about sending exceptions to the sensu endpoint, so I would need to support POSTing to the clusterIP service as well.
Your reply has been very helpful. I’ll share updates once I get things running.
Thanks.
@shankerbalan
···
On 21-Dec-2017, at 5:16 AM, Joe Miller joeym@joeym.net wrote: