Check is flapping when using proxy_requests

Hi,

When I add proxy_requests to a check, the check begins to flap; if I remove proxy_requests, the check does not flap anymore.

My check looks like this:

---
type: CheckConfig
api_version: core/v2
metadata:
  name: clamav-freshclam-process
  namespace: prod
  annotations:
    sensu.io/plugins/slack/config/channel: '#sensu-go-prod'
spec:
  command: check-process.rb -p 'freshclam' -u clamav -w 1 -W 1 -c 1 -C 1
  interval: 60
  publish: true
  proxy_requests:
    entity_attributes:
    - entity.labels.role != 'esdata'
    - entity.labels.role != 'esmaster'
  handlers:
  - slack
  runtime_assets:
  - sensu-plugins-process-check
  - sensu-ruby-runtime
  subscriptions:
  - system
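
Side note: the proxy check examples in the Sensu Go docs also set splay under proxy_requests to spread the per-entity executions over the check interval. A minimal sketch of what that would look like on this check (the splay_coverage value is only illustrative):

spec:
  proxy_requests:
    entity_attributes:
    - entity.labels.role != 'esdata'
    - entity.labels.role != 'esmaster'
    splay: true
    splay_coverage: 90   # spread executions over 90% of the 60s interval (illustrative value)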

In the agent logs we can see a lot of calls when the check is triggered with proxy_requests set; this is not the case otherwise.
Without proxy_requests:

May 25 13:57:52 my-server sensu-agent[98272]: {"component":"agent","level":"info","msg":"scheduling check execution: clamav-freshclam-process","time":"2020-05-25T13:57:52Z"}
May 25 13:57:52 my-server sensu-agent[98272]: {"assets":["sensu-plugins-process-check","sensu-ruby-runtime"],"check":"clamav-freshclam-process","component":"agent","level":"debug","msg":"fetching assets for check","namespace":"prod","time":"2020-05-25T13:57:52Z"}
May 25 13:57:52 my-server sensu-agent[98272]: {"check":"clamav-freshclam-process","component":"agent","entity":"my-server","event_uuid":"f3e93270-b65d-4e87-a979-86b57d8a09c4","level":"info","msg":"sending event to backend","time":"2020-05-25T13:57:52Z"}

With proxy_requests:
logs.json (38.5 KB)

Is this normal behavior?

We are running Sensu backend 5.20.0 on Kubernetes with 3 replicas.
Agents are on Debian Buster, running version 5.20.1-12427.

Thanks

Same issue here, same version (5.20.1 on server and agent), and roughly the same setup (not on containers, but on 3 VMs).

Well, in my case it was different… or maybe it's somehow similar.
My check was using MIB files for SNMP, the check is set up with round robin, and on one of the 3 servers the sensu user was not able to access the MIB files due to wrong permissions.
If you have round robin enabled for this check, you may want to check whether it works on all your containers.
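
For reference, round robin is just the round_robin attribute on the check spec, next to proxy_requests; a minimal sketch reusing the attributes from the first post:

spec:
  round_robin: true   # only one agent in the subscription executes each scheduled request
  proxy_requests:
    entity_attributes:
    - entity.labels.role != 'esdata'
    - entity.labels.role != 'esmaster'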

I was just going to say the same - it sounds like the check is set to round robin but the check is failing on one of the subscribers.

It does not seem to be the same issue.
As described in the post, the check is not set to round robin, and the logs are those of a single agent, but the behavior is the same on each agent that triggers this check.

Also, as soon as I enable proxy_requests on the check, the servers where the agent is running start to load up, and the load drops again when I disable proxy_requests:
grafana_sensu (screenshot)