CPU usage spikes in RHQ agent

Lukas Krejci lkrejci at redhat.com
Tue Feb 14 14:07:19 UTC 2012


Hi all,

recently I've been thinking about how we could go about changing the way we 
execute different tasks in the plugin container in the agent so that we could 
avoid or at least limit the CPU spikes we are experiencing with larger 
inventories and/or heavy plugins (like the AS5 plugin).

First, let's recap why CPU spikes are bad on the agent:
- they eat the CPU cycles that could be used by the managed resources
- the agent can actually render the machine unusable if a second spike
  in usage kicks in before the first one ends. If they are both scheduled
  (which they usually are in the agent), we never get out of high CPU usage.

The first, and I think the most important, realization is that we do the 
various scans wrongly from the CPU usage perspective. Currently, there is a 
scheduled job for each scan type that kicks off one huge task; that task 
recursively traverses the inventory tree and executes the appropriate scan 
(avail, config, discovery, ...) on one resource at a time.
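
To illustrate, here's a rough sketch of that pattern (this is not the actual 
plugin container code - Resource and checkAvailability() are just made-up 
stand-ins for the real types):

  import java.util.List;
  import java.util.concurrent.Executors;
  import java.util.concurrent.ScheduledExecutorService;
  import java.util.concurrent.TimeUnit;

  class MonolithicAvailScan {
      // hypothetical stand-in for the real resource type
      interface Resource {
          void checkAvailability();
          List<Resource> children();
      }

      private final ScheduledExecutorService scheduler =
              Executors.newSingleThreadScheduledExecutor();

      void schedule(final Resource platform, long periodSeconds) {
          // one big task per period: it walks the whole tree in a single
          // burst, which is exactly what produces the CPU spike
          scheduler.scheduleAtFixedRate(new Runnable() {
              public void run() {
                  scanRecursively(platform);
              }
          }, 0, periodSeconds, TimeUnit.SECONDS);
      }

      private void scanRecursively(Resource resource) {
          resource.checkAvailability();
          for (Resource child : resource.children()) {
              scanRecursively(child);
          }
      }
  }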

Why is this wrong? Let's examine a (fabricated) CPU usage graph over time:

100% <- scan kicks in
100%
100%
0%   <- scan finished
0%
0%
0%
100% <- another scan kicks in
100%
100%

You can notice two things there: 
- we consume as much CPU as possible in as short a time as possible to finish 
the scan.
- after that there is a period of "silence"

We have this notion of availability, configuration, discovery, etc. scan 
periods. These define the periods in which those huge tasks will be executed. 
But from the user's point of view, there isn't much difference between "all 
resources were checked for avail @ 12.00am and then @ 12.05am and then 
12.10am" and "resource A was checked @ 12.00am, resource B @ 12.01am, 
resource C @ 12.04am, resource A @ 12.05am, resource B @ 12.06am, resource C @ 
12.09am". Notice that from the POV of the individual resources, the check 
interval is 5 minutes in both cases, which I think is the crucial part.

The latter approach allows us to "spread" the execution of the individual 
resource checks over the defined period. This will immediately improve the CPU 
usage situation because the spikes (or rather the amount of work) will be 
spread throughout the whole available period; we won't be rushing to get 
everything done as quickly as possible only to "have a nap" afterwards.
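
Sketched with a plain ScheduledExecutorService (the real implementation would 
of course differ - this only shows the offset idea, and SpreadScan is a 
made-up name):

  import java.util.List;
  import java.util.concurrent.Executors;
  import java.util.concurrent.ScheduledExecutorService;
  import java.util.concurrent.TimeUnit;

  class SpreadScan {
      private final ScheduledExecutorService scheduler =
              Executors.newScheduledThreadPool(2);

      void schedule(List<Runnable> checks, long periodMillis) {
          // give each resource check its own offset within the period, so
          // the work is distributed instead of running in one burst
          long gap = periodMillis / Math.max(1, checks.size());
          for (int i = 0; i < checks.size(); i++) {
              // same period for everyone; only the initial delay differs,
              // so check i runs at i * gap, i * gap + period, ...
              scheduler.scheduleAtFixedRate(checks.get(i),
                      i * gap, periodMillis, TimeUnit.MILLISECONDS);
          }
      }
  }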

This of course doesn't solve the problem of overlapping schedules - it only 
makes the ramp-up of CPU usage slower in that case. It also doesn't solve 
the situation where there is simply too much work to be done at any single 
time, generating a constant high CPU load. But there are more tricks we can 
pull to further improve the situation. 

To cap the CPU load, we can try to use the ThreadMXBean available in Java. 
This can (optionally) provide CPU time measurements to the Java code. 
While this might not be implemented by all JVMs, at least the Sun JVM and 
OpenJDK on Linux implement it. Using that, we can insert "gaps" in between 
individual resource checks during the scan to lower the overall CPU usage (of 
course, at the expense of increased scan duration).
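
A rough sketch of what that could look like (the targetCpuFraction knob and 
the gap calculation are made up for illustration, not taken from any existing 
code):

  import java.lang.management.ManagementFactory;
  import java.lang.management.ThreadMXBean;

  class CpuCappedRunner {
      private final ThreadMXBean threadBean =
              ManagementFactory.getThreadMXBean();
      // target share of one core, e.g. 0.25 for ~25%; must be > 0
      private final double targetCpuFraction;

      CpuCappedRunner(double targetCpuFraction) {
          this.targetCpuFraction = targetCpuFraction;
      }

      void run(Runnable check) throws InterruptedException {
          if (!threadBean.isCurrentThreadCpuTimeSupported()) {
              check.run(); // no measurements on this JVM, run unthrottled
              return;
          }
          long before = threadBean.getCurrentThreadCpuTime(); // nanoseconds
          check.run();
          long cpuMillis =
                  (threadBean.getCurrentThreadCpuTime() - before) / 1000000;
          // insert a gap so that the check's CPU time amounts to only
          // targetCpuFraction of the wall-clock time of this slot
          long gapMillis = (long) (cpuMillis / targetCpuFraction) - cpuMillis;
          if (gapMillis > 0) {
              Thread.sleep(gapMillis);
          }
      }
  }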

The final trick we can use is to provide the agent plugins with a way of 
asking "Am I consuming too much CPU? Should I take a break for a while?". This 
could be used during computationally intensive tasks inside the plugin code. 
It is akin to the Thread.isInterrupted() method in Java - in general it is 
not good practice to force some behavior on a thread of execution (e.g. to 
forcibly kill it); threads should rather cooperate.
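
A hypothetical shape for such an API (ThrottleContext and yieldIfNeeded() 
are names invented for this example - the real contract would have to live 
in the plugin container SPI):

  // the plugin container would implement this and hand it to plugins
  interface ThrottleContext {
      // true when the plugin container would like this task to pause
      boolean shouldYield();

      // cooperative pause; returns once the CPU budget has recovered
      void yieldIfNeeded() throws InterruptedException;
  }

  // plugin-side usage, analogous to polling Thread.isInterrupted():
  class ExpensiveDiscovery {
      void discover(Iterable<String> candidates, ThrottleContext throttle)
              throws InterruptedException {
          for (String candidate : candidates) {
              inspect(candidate);        // the computationally heavy part
              throttle.yieldIfNeeded();  // cooperate instead of being killed
          }
      }

      private void inspect(String candidate) { /* ... */ }
  }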

So with all these tricks up our sleeve, what are the consequences for the 
plugin container itself, the plugins and last but not least the agent and its 
behavior and configurability as a whole?

First of all, we have to forget about guaranteeing any kind of regularity in 
the scan periods. All the approaches above are approximate and are not able to 
give any hard guarantees - be it for period duration or CPU usage. On the 
other hand, we couldn't promise anything with our current approach either.

I think the key is to provide more visibility of performance problems through 
metrics of the agent resource and give the user more control over the 
performance tuning of the agent itself. If the user is able to set the target 
CPU usage and periods for different scans, we can easily measure the "health" 
of the system (and maybe even provide calltime metrics to make the performance 
metrics specific to resource types - i.e. the user would be able to see that 
it is indeed the AS5 plugin that eats up most of the discovery time and could 
update the schedules accordingly). At this point my ideas start to overlap 
with the recent discussions in the team about providing per-resource(-type) 
schedules for avail and discovery, and I admit that I don't have an opinion 
on how compatible those ideas are with this text.

On the other hand, I went ahead and actually implemented the above ideas in 
code: https://github.com/metlos/throttling-executors
The code is a bit rough, lacks some tests and has some scheduling issues under 
high concurrency (I tried to implement the scheduling lock-free, but was a bit 
naive at that). But it does show the point. There is a benchmark in the 
tests that runs the different executors in different scenarios and tries to 
establish a) the overhead that the extra logic for task spreading and CPU 
throttling poses over the "normal" ThreadPoolExecutor and b) the behavior of 
the executors in different scenarios.

I would make this already-too-long email unbearably boring if I included the 
discussion of the benchmark results here, so I'll leave that for a standalone 
follow-up email.

So what are your thoughts on the above ideas? How do they blend with the 
plans for the avail and discovery scheduling changes?

Cheers,

Lukas

