On Mon, Dec 5, 2016 at 4:25 PM, Tim Flink <tflink@redhat.com> wrote:
Is there a way we could export the results as a json file or something
similar? If there is (or if it could be added without too much
trouble), we would have multiple options:

Sure, adding some kind of export should be doable
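
Something along these lines, maybe - just a sketch, the table and column names below are my guesses, not the actual schema:

# Rough sketch of a JSON export - table/column names are placeholders,
# not the real resultsdb schema.
import json
import psycopg2

conn = psycopg2.connect(dbname="resultsdb")
cur = conn.cursor()
cur.execute("SELECT id, testcase_name, outcome, submit_time FROM result")
rows = [
    {"id": r[0], "testcase": r[1], "outcome": r[2], "submitted": r[3].isoformat()}
    for r in cur.fetchall()
]
with open("results-export.json", "w") as f:
    json.dump(rows, f)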
 

1. Dump the contents of the current db and do a partial offline migration,
   then finish it during the upgrade outage by exporting/importing the
   newest data, deleting the production db and importing the offline
   upgraded db. If that still takes too long, create a second postgres
   db containing the offline upgrade, switch over during the outage and
   import the new results added since the db was copied.


I slept just two hours, so this is a bit tangled for me. So - my initial idea was that we
 - dump the database
 - delete most of the results
 - run the migration on the small data set

In parallel (or later on), we would
 - create a second database (let's call it 'archive')
 - import the un-migrated dump
 - remove the data that is still in the production db
 - run the lengthy migration

This way, we have minimal downtime, and the data are available in the 'archive' db.
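Roughly, I imagine the steps like this - the db names, the 3-month cutoff and the submit_time column are just placeholders to illustrate the idea, not our real setup:

# Rough sketch only - db names, cutoff and the result/submit_time schema are made up.
import subprocess

# 1) dump the production db before touching anything
subprocess.check_call(["pg_dump", "-Fc", "-f", "resultsdb.dump", "resultsdb"])

# 2) delete most of the results, so the migration runs on a small data set
subprocess.check_call([
    "psql", "resultsdb", "-c",
    "DELETE FROM result WHERE submit_time < now() - interval '3 months'",
])

# 3) in parallel / later on: load the full dump into the 'archive' db,
#    remove the rows that stayed in production, and run the lengthy migration there
subprocess.check_call(["createdb", "resultsdb_archive"])
subprocess.check_call(["pg_restore", "-d", "resultsdb_archive", "resultsdb.dump"])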

With the archive db, we could either
1) dump the data and then import it into the prod db (again, no downtime)
2) just spawn another resultsdb (archives.resultsdb?) instance that would operate on top of the archive db

I'd rather do the second, since it also has the benefit of being able to offload old data
to the 'archive' database (which would/could be 'slow by definition'), while keeping the 'active' dataset
small enough that it could all fit in memory for fast queries.
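
The offloading could then be a simple periodic job that moves old rows over - again, the schema, db names and the cutoff here are just placeholders:

# Placeholder sketch of an offload job - schema, db names and cutoff are made up.
import psycopg2

prod = psycopg2.connect("dbname=resultsdb")
archive = psycopg2.connect("dbname=resultsdb_archive")
cutoff = "6 months"

with prod.cursor() as pc, archive.cursor() as ac:
    # copy old results into the archive db...
    pc.execute("SELECT id, testcase_name, outcome, submit_time FROM result "
               "WHERE submit_time < now() - %s::interval", (cutoff,))
    ac.executemany("INSERT INTO result (id, testcase_name, outcome, submit_time) "
                   "VALUES (%s, %s, %s, %s)", pc.fetchall())
    # ...and drop them from production to keep the 'active' dataset small
    pc.execute("DELETE FROM result WHERE submit_time < now() - %s::interval", (cutoff,))

archive.commit()
prod.commit()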

What do you think? I guess we want to do something pretty similar; I just got a bit lost in what you wrote :)

 
2. If the import/export process is fast enough, we might be able to do
   that instead of the in-place migration

My gut feeling is that it would be pretty slow, but I have no relevant experience.

Joza