Help: how can I configure Sensu for Hadoop?

My Hadoop cluster has multiple DataNodes, and they all listen on a TCP port.

When I used Nagios, I only needed to set up one Nagios server and one Nagios client, and it used "check_tcp" to check each DataNode (each node had the same check).

But how can I configure Sensu for Hadoop without installing clients on the DataNodes?

Is there any way to define something like:

checks:

"""
check_data_nodes:
  cmd: check_tcp $HOST
check_aggregate:
  cmd: check_aggregate check_data_nodes
"""

and only one client:

"""
HOST:
  192.168.1.1
  192.168.1.2
  192.168.1.3
"""

Thank you very much.

Hi Kay,

Sensu makes the assumption that anything that can run the Sensu client does, as the client provides keepalive/heartbeat functionality as well as a check execution platform.

You could run the Sensu client on each DataNode and use a check definition that targets a Hadoop DataNode subscription and checks the local TCP socket. If you do not wish to run the clients, you could create a check definition for each DataNode, or modify check_tcp to take a list of hosts to check, and run it as a standalone check on a single node (or via a subscription with only one consumer). Rough sketches of both approaches are below.
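As a sketch of the first approach, assuming the clients on the DataNodes subscribe to a "hadoop_datanode" subscription, that the standard Nagios check_tcp plugin is on their PATH, and that the DataNodes listen on the default data-transfer port 50010 (the subscription name, port, and interval here are only illustrative), the check definition could look like:

"""
{
  "checks": {
    "check_datanode_tcp": {
      "command": "check_tcp -H localhost -p 50010",
      "subscribers": ["hadoop_datanode"],
      "interval": 60
    }
  }
}
"""

If you prefer not to install the client on the DataNodes, a standalone check per DataNode, defined on a single monitoring node, might look something like this (again, the names, port, and interval are only illustrative):

"""
{
  "checks": {
    "check_datanode_192_168_1_1": {
      "command": "check_tcp -H 192.168.1.1 -p 50010",
      "standalone": true,
      "interval": 60
    },
    "check_datanode_192_168_1_2": {
      "command": "check_tcp -H 192.168.1.2 -p 50010",
      "standalone": true,
      "interval": 60
    },
    "check_datanode_192_168_1_3": {
      "command": "check_tcp -H 192.168.1.3 -p 50010",
      "standalone": true,
      "interval": 60
    }
  }
}
"""

Note that the node running these standalone checks still needs a Sensu client installed; only the DataNodes themselves stay client-free.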

Sean.


On Sun, Feb 16, 2014 at 7:32 AM, Kay Yan yankay.com@gmail.com wrote:
