On Fri, 25 Nov 2016 13:01:51 +0100 Josef Skladanka jskladan@redhat.com wrote:
So, I have performed the migration on DEV - there were some problems with it running out of memory, so I had to tweak it a bit (please have a look at D1059, that is what I ended up running after hot-fixing it on DEV).
There is still a slight problem, though - the migration on DEV took about 12 hours total, which is a bit unreasonable. Most of the time was spent in `alembic/versions/dbfab576c81_change_schema_to_v2_0_step_2.py`, lines 84-93 in D1059. The code takes about 5 seconds to change 1k results. That would mean at least 15 hours of downtime on PROD, and that, I think, is unreal...
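For what it's worth, the usual answer to this kind of slowness is to replace per-row ORM updates with batched, set-based statements. The sketch below only shows that general shape - the table, the columns and the CRASHED -> ERROR rewrite are made up, and I'm not sure the actual transformation in lines 84-93 can be expressed this way:

```python
# Sketch only: batch the data fix-up into set-based UPDATEs instead of
# touching one result at a time.  The table ('result'), the columns and
# the CRASHED -> ERROR rewrite are placeholders, not the real migration.
import sqlalchemy as sa
from alembic import op

BATCH = 10000

def upgrade():
    conn = op.get_bind()
    result = sa.table(
        'result',
        sa.column('id', sa.Integer),
        sa.column('outcome', sa.String),
    )
    max_id = conn.execute(sa.select([sa.func.max(result.c.id)])).scalar() or 0
    # One UPDATE per id window keeps memory flat and lets the server do
    # the work, instead of one ORM flush per row.
    for low in range(0, max_id + 1, BATCH):
        conn.execute(
            result.update()
            .where(result.c.id.between(low, low + BATCH - 1))
            .where(result.c.outcome == 'CRASHED')
            .values(outcome='ERROR')
        )
```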
And since I don't know how to make it faster (tips are most welcome), I suggest that we archive most of the data in STG/PROD before we go forward with the migration. I'd make a complete backup and then delete all but the data from the last 3 months (or any other reasonable time span).
We can then populate an "archive" database, and migrate it on its own, should we decide it is worth it (I don't think it is).
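To be concrete about the delete step, I mean roughly this, run only after a full pg_dump of the database (the table and column names are illustrative, and the list of dependent tables is almost certainly incomplete):

```python
# Sketch of "keep only the last ~3 months": delete dependent rows first,
# then the old results themselves.  'result', 'result_data' and the
# 'submitted' column are illustrative names, not a verified schema.
from datetime import datetime, timedelta
import sqlalchemy as sa

engine = sa.create_engine('postgresql://resultsdb@localhost/resultsdb')
cutoff = datetime.utcnow() - timedelta(days=90)

# engine.begin() wraps everything in one transaction, so a failure
# rolls the whole prune back.
with engine.begin() as conn:
    conn.execute(
        sa.text("DELETE FROM result_data WHERE result_id IN "
                "(SELECT id FROM result WHERE submitted < :cutoff)"),
        {"cutoff": cutoff})
    conn.execute(
        sa.text("DELETE FROM result WHERE submitted < :cutoff"),
        {"cutoff": cutoff})
```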
What do you think?
While it would be nice not to lose all that old data (in the sense that it would no longer be readily available), 15 hours does seem a bit extreme.
Is there a way we could export the results as a JSON file or something similar? If there is (or if it could be added without too much trouble), we would have multiple options (a rough sketch of the kind of export I mean is below the list):
1. Dump the contents of the current db, do a partial offline migration, and finish it during the upgrade outage by exporting/importing the newest data, deleting the production db, and importing the offline-upgraded db. If that still takes too long, create a second postgres db containing the offline upgrade, switch over to it during the outage, and import only the results submitted since the db was copied.
2. If the import/export process is fast enough, we might be able to use it instead of the in-place migration altogether.
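To be clear about what I mean by the export: something as simple as streaming one JSON object per result out to a file, so the import side can stream it back in. A rough sketch, with a guessed column list rather than the actual resultsdb schema:

```python
# Rough sketch of the export half: one JSON object per line so the
# import side can stream it back in.  The SELECT column list is a
# guess, not the actual resultsdb schema.
import json
import sqlalchemy as sa

engine = sa.create_engine('postgresql://resultsdb@localhost/resultsdb')

with engine.connect() as conn, open('results.jsonl', 'w') as out:
    rows = conn.execution_options(stream_results=True).execute(
        sa.text("SELECT id, testcase_name, outcome, submitted FROM result"))
    for row in rows:
        record = dict(row)
        # datetimes are not JSON-serializable directly
        record['submitted'] = record['submitted'].isoformat()
        out.write(json.dumps(record) + '\n')
```

If pushing the file back in through the API turns out to be too slow, the same data could presumably be loaded into the new schema with COPY instead.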
Thoughts on either of these options?
Tim