I have a scenario I'd like input on. Suppose you have two servers, S1 and S2, each handling 6 requests per minute, and then at T1 you add a third server, S3; the load rebalances so each of the three handles 4 per minute. This is what the data looks like:

   T0    T1
------------------
S1  6    4
S2  6    4
S3  x    4

x represents no data at that time (since the server was not yet alive).

RHQ calculates the per-server average as 6 for T0 and 4 for T1. That is technically correct, but there were no additional requests on the network for the group (the total is 12 requests per minute in both intervals), so you'd expect the group graph to be flat. Instead, the graph actually dips at/before T1.
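
To make the arithmetic concrete, here is a minimal sketch of the current behavior (hypothetical names, not RHQ's actual aggregation code), averaging only over the data points found:

    import java.util.Arrays;

    public class AverageOverPointsFound {
        // Average only the values that are present; null means no data.
        static double average(Double[] points) {
            return Arrays.stream(points)
                         .filter(p -> p != null)
                         .mapToDouble(Double::doubleValue)
                         .average()
                         .orElse(Double.NaN);
        }

        public static void main(String[] args) {
            Double[] t0 = {6.0, 6.0, null}; // S3 not alive yet
            Double[] t1 = {4.0, 4.0, 4.0};
            System.out.println(average(t0)); // 6.0
            System.out.println(average(t1)); // 4.0 -- the dip, even though the total is flat at 12
        }
    }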

I'm wondering whether this should be corrected in some way, so that either the count of data points found at T0 and T1 is reported alongside the average, or the average is calculated based on the total number of schedules rather than the number of data points found.
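
For comparison, here is the same sketch with the second option (again hypothetical code), dividing the sum by the total number of schedules rather than the number of points found:

    import java.util.Arrays;

    public class AverageOverSchedules {
        // Divide by the schedule count, so missing data still counts in the denominator.
        static double average(Double[] points) {
            double sum = Arrays.stream(points)
                               .filter(p -> p != null)
                               .mapToDouble(Double::doubleValue)
                               .sum();
            return sum / points.length;
        }

        public static void main(String[] args) {
            Double[] t0 = {6.0, 6.0, null}; // S3's schedule exists but reported no data
            Double[] t1 = {4.0, 4.0, 4.0};
            System.out.println(average(t0)); // 12.0 / 3 = 4.0
            System.out.println(average(t1)); // 12.0 / 3 = 4.0 -- flat, as expected
        }
    }

That keeps the group line flat at 4, though it under-reports the rate on the servers that were actually alive at T0 (each of them handled 6).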