metrics data loss

John Sanda jsanda at redhat.com
Tue Apr 8 14:48:12 UTC 2014


On Apr 8, 2014, at 9:22 AM, Jay Shaughnessy <jshaughn at redhat.com> wrote:

> 
> On 4/7/2014 4:12 PM, John Sanda wrote:
>> On Apr 7, 2014, at 3:27 PM, Jay Shaughnessy <jshaughn at redhat.com> wrote:
>> 
>>> Option 1 would certainly make the most sense if users confirmed that some metric data loss was acceptable given catastrophic failure of the storage cluster.  Out of curiosity, did we have the potential for data loss with the RDBMS storage?  Or did the Tx failure go back to the comm layer and force a resend?  The whole idea of the transaction-less writes is, I thought, to gain speed at the potential expense of some acceptable data loss.  I could certainly be wrong about that.
>> I am not 100% clear about the old implementation. If the call to PreparedStatement.executeBatch() returned a non-success code, we threw a MeasurementStorageException, which would roll back the transaction and trigger the resend. But if there was some other database error that caused a SQLException, we just logged it, which I believe could then result in data loss.
>> 
>> Transaction-less writes do not necessarily mean data loss. Nor does preventing data loss necessarily imply a loss of speed. The key is making writes idempotent. If we execute a write to store metric data multiple times, the CQL row is simply overwritten each time, assuming the schedule id and timestamp are the same on each write. There is, however, a speed tradeoff with consistency. If we decided to use quorum consistency on raw data writes, it would definitely impact write performance since we would be waiting for more nodes to ack the write.
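
To make the idempotency point concrete, here is a minimal sketch using the DataStax Java driver. The table and column names are simplified placeholders, not our actual schema:

    import java.util.Date;
    import com.datastax.driver.core.*;

    // Re-running this write with the same schedule id and timestamp simply
    // overwrites the same CQL row, so resending after a failure cannot
    // duplicate data.
    void writeRawData(Session session, int scheduleId, Date timestamp, double value) {
        PreparedStatement insert = session.prepare(
            "INSERT INTO raw_metrics (schedule_id, time, value) VALUES (?, ?, ?)");
        BoundStatement write = insert.bind(scheduleId, timestamp, value);
        // The speed/consistency tradeoff: QUORUM waits for more replicas to
        // ack the write than ONE does, at the cost of write latency.
        write.setConsistencyLevel(ConsistencyLevel.QUORUM);
        session.execute(write);
    }
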
>> 
>>> I guess one major question is whether the alerting still takes place on the data despite the storage loss.  If the alerting is guaranteed, then I think data storage loss may be more palatable.  In fact, we'd need to ensure, given option 2, that alerting did not happen twice. In general option 2 does not seem at all attractive.
>> Currently we only update the alert cache for the raw data writes that succeed. Since we alert on the raw data, I think it makes sense to only alert on current data. If an agent sends a report that is 3 hours old, I don't think it makes sense to notify the cache for that data even if all the writes succeed.
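
Conceptually, that looks something like this (the names here are illustrative, not the actual RHQ classes):

    // Illustrative only - feed alerting just the raw data that was actually
    // stored, so we never alert on data that was lost.
    Set<MeasurementDataNumeric> stored = storeRawData(report.getNumericData());
    alertConditionCache.checkConditions(stored);
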
> 
> Hmm, I'm not sure we should be selective.  Perhaps the reason it's 3 hours late has something to do with the data being sent.  I think I'd lean towards not filtering in that fashion, but I'm really not sure.
> 
>> 
>>> What happens exactly when the storage cluster goes down?  How quickly do we stop processing metric data requests?  Is there data loss on a simple server shutdown, when storage actually stays up?
>> If we detect that the cluster is down, we put the server into maintenance mode; however, there is a 30 second window before this happens. On a typical server shutdown I wouldn't expect data loss. Once the server comm layer is shut down, the agent will start spooling data.
>> 
>>> Is there an option 4 where the server spools the unwritten data to local tmp space and then writes it later, not involving the agent at all?
>> This was discussed briefly, but I think it is a bad idea. For a small number of agents it might be fine, but I think it becomes problematic as the number of agents increases.
> 
> I'm not sure I understand.  Are you saying the servers have enough memory to handle all of the agent metric reports coming in, but can't cache on disk partial reports that were in-progress when the storage cluster went away?

My concern is that as the number of agents increases, the server could get bogged down processing failed reports. The data would also be stuck with that server. If we are running multiple servers, the agents can spread the write load, including failed reports, across those servers.

I think it will be simpler to let the client/agent manage the data. The server just notifies the client that it failed to store some data, and then lets the client decide what, if anything, it wants to do with that data.
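
As a rough sketch of what I have in mind (the names here are made up, not an existing RHQ API), the merge call could hand back whatever raw data the server failed to store, and the agent would decide whether to spool and resend it:

    // Hypothetical sketch only - this return type does not exist today.
    public class MergeResult implements java.io.Serializable {
        // Raw data the server failed to store; empty when everything was written.
        private final Set<MeasurementDataNumeric> failedData;

        public MergeResult(Set<MeasurementDataNumeric> failedData) {
            this.failedData = failedData;
        }

        public Set<MeasurementDataNumeric> getFailedData() {
            return failedData;
        }
    }

    // Agent side: spool whatever the server reports as failed and retry later.
    MergeResult result = measurementServerService.mergeMeasurementReport(report);
    if (!result.getFailedData().isEmpty()) {
        spooler.spool(result.getFailedData());   // hypothetical agent-side spooler
    }
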

> 
>> 
>>> One more thought: option 1 may become less attractive when and if we move more types of data to the storage cluster.  Perhaps something more critical than a micro-percentage of metric data.
>> This discussion is definitely specific to metric data. If we were talking about resource configuration data, not only do I think we would want to ensure we avoid data loss but we might also want to use stronger consistency.
>>> 
>>> Perhaps we could use option 1 by default but make it configurable to use option 3?
>> I could see that, although I wonder if option 3 might be the safer default.
>> 
>>> On 4/7/2014 2:10 PM, John Sanda wrote:
>>>> Currently there exists the possibility of numeric data loss when merging measurement reports. If there is an error storing raw data, we log the error but do nothing else. Suppose, for example, that while the server is storing a set of raw data, the storage cluster goes down halfway through. In this scenario it is likely that the latter half of that data is lost. There has been some recent discussion about the potential for data loss, and I want to open it up to the list for additional thoughts, opinions, etc. I will briefly summarize a few options for dealing with data loss.
>>>> 
>>>> * option 1 - do nothing
>>>> The case can be made that loss of metric data may not be as significant as losing inventory or configuration data, for example. If the data loss is limited to a single measurement report or a subset thereof, then it probably is not very significant since we are dealing with the loss of a single data point for some number of schedules. Of course, some dropped metrics here and some dropped metrics there can quickly add up to the point where we are dealing with a substantial amount of data loss, and that would be bad.
>>>> 
>>>> * option 2 - Rely on agent/server comm layer guaranteed delivery
>>>> MeasurementServerService.mergeMeasurementReport(MeasurementReport report) has guaranteed delivery semantics. If the call fails for whatever reason, the agent will retry it. The agent also spools the report to disk so that if it gets disconnected from the server, it can retry after reconnecting. The downside of the guaranteed delivery is that the agent retries continually. If storing raw data failed because the storage cluster is overloaded, this could exacerbate the problem. I have actually experienced this in test environments where I was putting a heavy write load on the server and storage cluster. My server would be down or in maintenance mode for a while, and then when it came back up, all my agents would hammer it with spooled measurement reports.
>>>> 
>>>> There is another aspect to consider in terms of efficiency. Suppose an agent sends 10,000 raw data points to the server and an error occurs after 9,995 of them have been stored. The agent will resend, and the server will store all 10,000 again. This is less than optimal and brings me to option 3.
>>>> 
>>>> * option 3 - Do not overwhelm the server and only retry failed data
>>>> The server can report back to the agent the raw data that it failed to store. The agent can spool that data to disk and resend it at some point in the future. There are a few different approaches the agent could take: it could retry on some fixed interval, or it could use an initial delay with an increasing back off, e.g., 2 minutes, 4 minutes, 8 minutes, and so on. This option requires the most work, but I think it is the most robust.
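
A rough sketch of the kind of back off I am picturing here (purely illustrative, none of this exists today):

    // Resend spooled raw data with an increasing back off:
    // 2 minutes, 4 minutes, 8 minutes, ... capped at one hour.
    void retrySpooledData(Set<MeasurementDataNumeric> spooled) throws InterruptedException {
        long delay = TimeUnit.MINUTES.toMillis(2);
        while (!spooled.isEmpty()) {
            Thread.sleep(delay);
            spooled = resend(spooled);   // hypothetical; returns whatever still failed
            delay = Math.min(delay * 2, TimeUnit.HOURS.toMillis(1));
        }
    }
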
>>>> 
>>>> 
>>>> What do others think? Are there other options that should be considered?
>>>> 
>>>> 
>>>> - John


