Metrics Migration Tool - Cassandra

Thomas Segismont tsegismo at redhat.com
Thu Jan 17 22:41:22 UTC 2013


On 14/01/2013 at 14:52, Stefan Negrea wrote:
> Hello Everybody,
>
> I updated the design wiki [1] with estimates for the amount of data to be migrated. The estimates show that even for relatively small deployments there is a non-trivial amount of data to be moved from the relational database to Cassandra. For example, on a system with 10 agents (a small deployment) the estimates show about 0.5 GB or about 16 million rows of data to be migrated. For a larger deployment with 125 agents, the estimates came to 6GB or 197 million rows of data.
>
> So far the migration process design is:
> 1) Read a batch of data
> 2) Insert data into Cassandra
> 3) Delete data from relational database
>
> Let's consider that the deletion process is optimized and takes a relatively trivial amount of time compared to reading or inserting. That means the amount of data to be processed by the migration is twice the estimate. For example, 0.5 GB read + 0.5 GB inserted in the case of small deployments.
>
> I am almost done with a random data generator that matches these estimates. That will help with migration benchmarks early in the development process to adjust the design/plan if necessary.
>
>
> How do these estimates look? Do these numbers change the perspective on the complexity of the task?
>
>
> [1] https://docs.jboss.org/author/display/RHQ/Metrics+Data+Migration+-+Design
>
> ----- Original Message -----
>> From: "Thomas Segismont" <tsegismo at redhat.com>
>> To: rhq-devel at lists.fedorahosted.org, rhq-users at lists.fedorahosted.org
>> Sent: Thursday, January 10, 2013 11:02:07 AM
>> Subject: Re: Metrics Migration Tool - Cassandra
>>
>> On 09/01/2013 at 20:32, John Sanda wrote:
>>> At this point, all we can do is speculate about how long the
>>> migration will actually take until we do some load testing. If we
>>> find that the migration is taking longer than we would like,
>>> another option could be to explore using the bulk import/export
>>> utilities provided by each of the databases.
>>
>> I think working on bulk export files would be far more efficient. And it
>> shouldn't be too difficult given the measurement tables have a very simple
>> schema (migrating to Cassandra may not be as simple as migrating these
>> tables' data, though).
>>
>> So why not have the two mechanisms:
>> 1. batching with Hibernate, which would support a larger number of
>> deployments (Postgres, Oracle, SQL Server)
>
> The current plan is to primarily support batched operations for data migration. Things will get a tad speedier on the Cassandra side because of async inserts.
>
>> 2. batching with bulk export files for the supported databases
>> (Postgres, Oracle)
>>
>> I know it means double the code, testing and support, but I really doubt
>> #1 can handle large amounts of data in less than a few hours.
>
> I am not sure if that is feasible. With export files the data will be processed 4 times: read from the relational database, write to export file, read from export file, write to Cassandra. For a deployment with about 2 GB of data (which I think will be closer to the average deployment size) that will become 8 GB of data processed.
>
>>
>> And you're right, we cannot speculate on this and I don't believe we
>> could make a release without actually trying the tool on different
>> workloads.
>
> I was hesitant to reply without having numbers. Hopefully with these estimates and a random data generator we will get a better picture of the length and complexity of the process.
>
> Do these estimates change your mind regarding the export files approach?
>
>>
>> Thomas

Stefan,

Thanks for the workload estimates.

Attached you'll find a test class which:
* creates a table with the same structure and indexes as the measurement tables
* generates a data file of N lines
* inserts the lines
* downloads the table contents in batch mode and in bulk mode (rough sketches 
of the two read modes are inlined below)
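
To give an idea of what the batch-mode read does, here is a minimal sketch. 
It assumes PostgreSQL and its JDBC driver, and the table and column names 
(rhq_meas_data_num_test, schedule_id, time_stamp, value) are only 
illustrative; the attached class is the authoritative version.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BatchReadSketch {

    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/rhq", "rhqadmin", "rhqadmin")) {
            // Disable auto-commit so the PostgreSQL driver can use a server-side
            // cursor and stream rows in chunks instead of materializing the whole
            // result set in memory.
            connection.setAutoCommit(false);
            try (PreparedStatement statement = connection.prepareStatement(
                    "SELECT schedule_id, time_stamp, value FROM rhq_meas_data_num_test")) {
                statement.setFetchSize(30000); // rows fetched per round trip
                long count = 0;
                try (ResultSet resultSet = statement.executeQuery()) {
                    while (resultSet.next()) {
                        count++; // the real test writes the row out instead of just counting
                    }
                }
                System.out.println("Read " + count + " rows in batch mode");
            }
        }
    }
}

The appeal of this mode is that it stays within plain JDBC, so it works 
unchanged on Postgres, Oracle and SQL Server; the cost is that every row goes 
through the full query and result-set machinery.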

Here is the test output for 10 million rows (roughly the number of rows per 
measurement table in the case you identified as a large deployment):

Current run: 1358430147258
Generating data file
Generated a 10000000 lines data file (473 MB) in 0 minute(s), 30 second(s), 906 millisecond(s)
Preparing test table
Prepared test table in 0 minute(s), 0 second(s), 118 millisecond(s)
Loading data
Loaded 10000000 lines in 428 minute(s), 38 second(s), 420 millisecond(s)
Downloading data in batch mode
Downloaded 10000000 lines in batch mode in 41 minute(s), 37 second(s), 154 millisecond(s)
Downloading data in bulk mode
Downloaded 10000000 lines in bulk mode in 0 minute(s), 10 second(s), 804 millisecond(s)
Cleaning test table
Cleaned test table in 0 minute(s), 0 second(s), 352 millisecond(s)

As far as I know, the bulk download is much faster than the batch download 
because COPY is optimized for exactly this purpose: rows bypass the regular 
query and result-set machinery and are streamed out as a raw, sequential dump 
of the table contents.
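
The bulk download goes through PostgreSQL's COPY protocol, roughly like this 
(again only a sketch with illustrative names; it relies on the PostgreSQL 
driver's CopyManager API, so it is Postgres-specific):

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class BulkReadSketch {

    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/rhq", "rhqadmin", "rhqadmin");
             BufferedWriter writer = new BufferedWriter(new FileWriter("measurements.csv"))) {
            // getCopyAPI() is PostgreSQL-specific; Oracle would need its own bulk tooling.
            CopyManager copyManager = ((PGConnection) connection).getCopyAPI();
            // COPY ... TO STDOUT streams the table contents in a single pass,
            // without per-row result-set handling on the client side.
            long rows = copyManager.copyOut(
                    "COPY rhq_meas_data_num_test TO STDOUT WITH CSV", writer);
            System.out.println("Downloaded " + rows + " rows in bulk mode");
        }
    }
}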

If I understood the back end design notes correctly 
(https://docs.jboss.org/author/display/RHQ/Cassandra+Back+End+Design+Notes), 
the migration process will involve moving each row of the raw measurement 
tables to the raw_metrics column family (maybe the same for the 1h ... etc. 
tables?). In that case I still think that reading each line of a bulk 
downloaded file would be the more reasonable way to minimize downtime, as 
sketched below.
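
Something along these lines; this is only a sketch, and it assumes a recent 
DataStax Java driver, a keyspace named rhq and a raw_metrics table with 
schedule_id, time and value columns, all of which may differ from the final 
design:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Date;

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class CsvToRawMetricsSketch {

    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try (BufferedReader reader = new BufferedReader(new FileReader("measurements.csv"))) {
            Session session = cluster.connect("rhq"); // keyspace name is an assumption
            PreparedStatement insert = session.prepare(
                    "INSERT INTO raw_metrics (schedule_id, time, value) VALUES (?, ?, ?)");
            String line;
            while ((line = reader.readLine()) != null) {
                // Assumed CSV layout: schedule_id,timestamp_millis,value
                String[] columns = line.split(",");
                BoundStatement statement = insert.bind(
                        Integer.parseInt(columns[0]),
                        new Date(Long.parseLong(columns[1])),
                        Double.parseDouble(columns[2]));
                // Asynchronous inserts keep many writes in flight; a real migrator
                // would throttle and wait on the returned futures before moving on.
                session.executeAsync(statement);
            }
        } finally {
            cluster.close();
        }
    }
}

Reading the exported file sequentially keeps the relational database out of 
the critical path: the expensive SELECT work is done once, up front, and the 
Cassandra inserts can then proceed at their own pace.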

Otherwise ... please forget about my emails :)

Cheers
Thomas

-------------- next part --------------
A non-text attachment was scrubbed...
Name: MeasurementReadPerformanceTest.java
Type: text/x-java
Size: 8167 bytes
Desc: not available
URL: <https://lists.fedorahosted.org/pipermail/rhq-devel/attachments/20130117/45c32776/attachment-0001.bin>

