On Tue, Aug 07, 2018 at 08:50:25AM +0000, Michael Schroeder wrote:
Yep, that sounds like an excellent idea.
That's a good question. The time window where this can happen is not
that big, because without filelists the loading of metadata is
quicker. But it's still non-zero, so given enough machines and enough
updates, it will be hit occasionally.
"Occasionally" is a pleasant euphemism.
A dirty solution would be to simply error out, but that is not nice.
I think the best solution, if part of the metadata cannot be
downloaded, is to restart the download of metadata in non-lazy mode.
In other words, if the lazy approach fails, repeat the process exactly
as it is done today.
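Roughly like this (untested sketch; load_metadata() and
MetadataMissingError are hypothetical stand-ins for whatever the real
downloader provides):

    class MetadataMissingError(Exception):
        """A lazily deferred part of the metadata vanished upstream."""

    def load_metadata(repo_url, lazy):
        # Placeholder for the real downloader; it would raise
        # MetadataMissingError when a deferred file (e.g. filelists)
        # is no longer available on the mirror.
        raise NotImplementedError

    def load_repo(repo_url):
        try:
            # First attempt: defer filelists, fetch them on demand.
            return load_metadata(repo_url, lazy=True)
        except MetadataMissingError:
            # The repo changed under us; restart in non-lazy mode and
            # download everything up front, exactly as done today.
            return load_metadata(repo_url, lazy=False)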
There is no other solution to conflicting tasks accessing identical paths.
Well, you could put a file with a timestamp (to be able to detect
staleness) in the same directory and then teach downloaders to honor
the lock, or rename a directory into place after updating, but both
are tricky to implement with rsync mirroring.
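The rename variant could look like this on the mirror master (a
sketch, assuming "repodata" is published as a symlink rather than a
real directory; all paths are made up):

    import os
    import time

    def publish_repodata(repo_root, staging_dir):
        stamp = int(time.time())
        generation = os.path.join(repo_root, "repodata.%d" % stamp)
        os.rename(staging_dir, generation)
        # Timestamp file so stale generations can be detected/pruned.
        with open(os.path.join(generation, ".timestamp"), "w") as f:
            f.write(str(stamp))
        # Renaming over a non-empty directory is not allowed, so swap
        # a symlink instead: create it under a temporary name, then
        # rename it over the live one, which is atomic on POSIX.
        tmp_link = os.path.join(repo_root, ".repodata.new")
        os.symlink(os.path.basename(generation), tmp_link)
        os.rename(tmp_link, os.path.join(repo_root, "repodata"))

That keeps each old generation intact while it may still be
referenced, but as said, rsync mirroring preserves none of this
atomicity on the mirrors themselves.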
One thing that mitigates this issue is that we have multiple
mirrors,
and they cannot all be updated at the same time, so some mirrors will
carry "stale" metadata, and dnf should be able to hit some other mirror
that still has the old filelists. Thus, I think it should be OK to start
with the "dirty solution", if implementing the fallback is complicated,
and implement the fallback later.
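The mirror fallback dnf would need is basically just this loop (a
sketch; fetch_filelists() is a hypothetical helper, and the expected
checksum would come from the repomd.xml we already have cached):

    def fetch_filelists(mirror_url, expected_checksum):
        # Placeholder for the real fetch: return the data, or None
        # when the mirror no longer has a filelists file matching
        # the checksum.
        raise NotImplementedError

    def filelists_from_any_mirror(mirrors, expected_checksum):
        for mirror in mirrors:
            data = fetch_filelists(mirror, expected_checksum)
            if data is not None:
                return data
        # Every mirror has moved on; time for the non-lazy fallback.
        raise LookupError("no mirror still carries the old filelists")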
One could also map the filelist lookup onto an HTTP-based service
instead of downloading the entire list, lazily or not, with
compression or not, with client-side cache management or not.
Rsync mirroring of package metadata is very last century.
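To make that concrete, such a lookup service could be as small as
this (a toy sketch; the /filelists endpoint, the query parameter, and
the in-memory index are all invented for illustration):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    # Toy index standing in for a real path -> packages database.
    PATH_INDEX = {"/usr/bin/gcc": ["gcc"]}

    class FilelistLookup(BaseHTTPRequestHandler):
        def do_GET(self):
            url = urlparse(self.path)
            if url.path != "/filelists":
                self.send_error(404)
                return
            path = parse_qs(url.query).get("path", [""])[0]
            body = json.dumps(PATH_INDEX.get(path, [])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), FilelistLookup).serve_forever()

A client then resolves one path with one request instead of pulling
megabytes of filelists, and the usual HTTP caching machinery applies.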
But "occasionally" downloading file lists does not solve the problems stated,
either then, or now.
Till next time the issue is discussed ...
73 de Jeff
> Zbyszek