Is there a possible memory leak in Sensu v0.26.5-2?

In production we run 0.16. Recently, after upgrading to 0.26.5-2, we started seeing abnormal drops in free memory, to the point that a service restart is required.

I’ve attached a free memory graph for one of the Sensu servers in the system. The black line (which I’ve drawn in with very poor skills) marks the period when we were running 0.16 and everything was fine; the red line marks the period after we upgraded to 0.26.5-2. The difference in memory usage is pretty noticeable.

You can tell from the graph that we attempted the upgrade twice. During the first attempt we were using the built-in ‘occurrences’ filter; for the second attempt we created our own custom filters. The memory leak persisted either way, and we had to downgrade again.
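To give a rough idea of what I mean by filters, the definitions involved look something like the sketch below. The filter name and the eval expression here are illustrative only, not our actual configuration:

    {
      "filters": {
        "recurrences": {
          "negate": false,
          "attributes": {
            "occurrences": "eval: value == 1 || value % 60 == 0"
          }
        }
      }
    }

i.e. roughly, handle an event on its first occurrence and then again every 60th occurrence thereafter.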

Hi fhd, thanks for reporting this observation.

As you may imagine, the delta between 0.16 (released circa October 2014) and 0.26.5 (released circa October 2016) is quite large, so it’s difficult to narrow this down based on the information at hand.

The upgrading notes were missing from our documentation site for a while, but we’ve recently restored them. Please see https://sensuapp.org/docs/latest/installation/upgrading.html, where you’ll find that the recommendations for upgrading from < 0.17 include performing a FLUSHALL on the Redis database.
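For reference, that step boils down to something like the following, run while the Sensu services are stopped. The service names and Redis host here are examples; adjust them for your environment:

    # Stop the Sensu components that talk to Redis
    sudo service sensu-server stop
    sudo service sensu-api stop

    # WARNING: FLUSHALL deletes every key in the Redis instance,
    # so only run it against a Redis dedicated to Sensu.
    redis-cli -h your-redis-host -p 6379 FLUSHALL

    # Upgrade the Sensu packages, then start the services again
    sudo service sensu-server start
    sudo service sensu-api start

Note that FLUSHALL discards Sensu’s transient state in Redis (current events, stashes, and so on), which is why it’s only recommended as part of a major upgrade like this one.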

Was this part of your upgrade procedure? If not, will you please try upgrading again with these instructions and keep us apprised of the outcome?

Thanks!

Cameron


Unfortunately, I do not recall flushing Redis, but apart from that our upgrade process was the same as in the doc you linked. While we don’t have plans to upgrade anytime soon, I’ll try to make a case for it now that 0.28 is out. I’ll let you know the outcome if and when we upgrade again.
