Why must the data be deleted from the RDB? That seems like it could be
inefficient. Is this the only way to keep track of the work done, or by
"delete" do you mean truncate/drop an entire table, which is something we
will need to do anyway?
On 1/14/2013 8:52 AM, Stefan Negrea wrote:
Hello Everybody,
I updated the design wiki [1] with estimates for the amount of data to be migrated. The
estimates show that even for relatively small deployments there is a non-trivial amount of
data to be moved from the relational database to Cassandra. For example, on a system with
10 agents (a small deployment) the estimates show about 0.5 GB or about 16 million rows of
data to be migrated. For a larger deployment with 125 agents, the estimates came to 6 GB or
197 million rows of data.
So far the migration process design is the following (a rough code sketch follows the list):
1) Read a batch of data
2) Insert data into Cassandra
3) Delete data from relational database
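In code, the loop could look roughly like the sketch below. This is only an
illustration; the DAO interfaces and method names are made up, not the actual
RHQ migration code:

    import java.util.List;

    // Hypothetical DAO interfaces, for illustration only.
    interface RdbDao {
        List<Object[]> readBatch(int maxRows);  // oldest rows first
        void deleteRows(List<Object[]> rows);   // remove rows that were migrated
    }

    interface CassandraDao {
        void insert(List<Object[]> rows);
    }

    class MigrationLoop {
        static void migrate(RdbDao rdb, CassandraDao cassandra, int batchSize) {
            List<Object[]> batch;
            do {
                batch = rdb.readBatch(batchSize);   // 1) read a batch of data
                cassandra.insert(batch);            // 2) insert data into Cassandra
                rdb.deleteRows(batch);              // 3) delete data from the RDB
            } while (batch.size() == batchSize);    // a short batch means we are done
        }
    }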
Let's assume that the deletion process is optimized and takes a relatively trivial
amount of time compared to reading or inserting. That means the amount of data to be
processed by the migration is twice the estimate: for example, 0.5 GB read + 0.5 GB
inserted in the case of small deployments.
I am almost done with a random data generator that matches these estimates. That will
help with migration benchmarks early in the development process, so we can adjust the
design/plan if necessary.
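To give an idea of what the generator produces, here is a minimal sketch; the
(scheduleId, timestamp, value) row layout and the value range are illustrative
only, not the actual tool:

    import java.util.Random;

    // Sketch of a random metric-data generator; not the actual RHQ tool.
    class MetricDataGenerator {
        private final Random random = new Random();

        // One synthetic measurement row spread over the given time window.
        Object[] nextRow(int scheduleCount, long startTime, long endTime) {
            int scheduleId = random.nextInt(scheduleCount);
            long timestamp = startTime + (long) (random.nextDouble() * (endTime - startTime));
            double value = random.nextDouble() * 100.0;  // arbitrary value range
            return new Object[] { scheduleId, timestamp, value };
        }
    }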
How do these estimates look? Do these numbers change the perspective on the complexity of
the task?
[1]
https://docs.jboss.org/author/display/RHQ/Metrics+Data+Migration+-+Design
----- Original Message -----
> From: "Thomas Segismont" <tsegismo(a)redhat.com>
> To: rhq-devel(a)lists.fedorahosted.org, rhq-users(a)lists.fedorahosted.org
> Sent: Thursday, January 10, 2013 11:02:07 AM
> Subject: Re: Metrics Migration Tool - Cassandra
>
> On 09/01/2013 20:32, John Sanda wrote:
>> At this point, all we can do is speculate about how long the
>> migration will actually take until we do some load testing. If we
>> find that the migration is taking longer than we would like,
>> another option could be to explore using the bulk import/export
>> utilities provided by each of the databases.
> I think working on bulk export files would be far more efficient. And it
> shouldn't be too difficult given the measurement tables have a very simple
> schema (migrating to Cassandra may not be as simple as migrating these
> tables' data, though).
>
> So why not have the two mechanisms:
> 1. batching with Hibernate which would support a larger number of
> deployments (Postgres, Oracle, SQLServer)
The current plan is to primarily support batched operations for data migration. Things
will get a tad speedier on the Cassandra side because of async inserts.
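For context, async inserts with the DataStax Java driver look roughly like the
following; this is a sketch only, and the prepared statement and row layout are
illustrative:

    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.ResultSetFuture;
    import com.datastax.driver.core.Session;
    import java.util.ArrayList;
    import java.util.List;

    class AsyncInsertSketch {
        // Fire off every insert without blocking, then wait once for the batch.
        static void insertBatch(Session session, PreparedStatement insertStmt,
                                List<Object[]> batch) {
            List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
            for (Object[] row : batch) {
                futures.add(session.executeAsync(insertStmt.bind(row)));
            }
            for (ResultSetFuture future : futures) {
                future.getUninterruptibly();  // surfaces any write error
            }
        }
    }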
> 2. batching with bulk export files for the supported databases
> (Postgres, Oracle)
>
> I know it's double the code, testing, and support, but I really doubt #1
> can handle large amounts of data in less than a few hours.
I am not sure if that is feasible. With export files the data will be processed 4 times:
read from the relational database, write to the export file, read from the export file,
write to Cassandra. For a deployment with about 2 GB of data (which I think will be closer
to the average deployment size) that will become 8 GB of data processed.
> And you're right, we cannot speculate on this and I don't believe we
> could make a release without actually trying the tool on different
> workloads.
I was hesitant to reply without having numbers. Hopefully with these estimates and a
random data generator we will get a better picture of the length and complexity of the
process.
Do these estimates change your mind regarding the export files approach?
> Thomas
_______________________________________________
rhq-devel mailing list
rhq-devel(a)lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/rhq-devel