# dnf upgrade
(dnf nearly exhausts free space downloading all packages before installing any packages)
dnf then reports package xxx needs ##MB on / filesystem and exits without doing any installing
dnf also deletes all downloaded packages
# dnf upgrade (package subset, e.g. dnf* rpm* a* b* c* d* e* f* x* y* z*)
dnf downloads the needed packages previously downloaded but later deleted, then installs the downloaded packages
# dnf upgrade
dnf downloads the needed packages previously downloaded but later deleted, then installs the downloaded packages
Is the above deletion of freshly downloaded packages (wasted time, wasted bandwidth) known or expected?
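A quick way to watch the failure mode as it happens, assuming the default cache location under /var/cache/dnf (the 5-second interval is just an example):

# watch -n5 'du -sh /var/cache/dnf; df -h /'

The cache grows until the disk-space check fails, and then the freshly downloaded packages vanish again.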
On Thu, 2015-07-30 at 03:21 -0400, Felix Miata wrote:
# dnf upgrade
(dnf nearly exhausts free space downloading all packages before installing any packages)
dnf then reports package xxx needs ##MB on / filesystem and exits without doing any installing
dnf also deletes all downloaded packages
# dnf upgrade (package subset, e.g. dnf* rpm* a* b* c* d* e* f* x* y* z*)
dnf downloads the needed packages previously downloaded but later deleted, then installs the downloaded packages
# dnf upgrade
dnf downloads the needed packages previously downloaded but later deleted, then installs the downloaded packages
Is the above deletion of freshly downloaded packages (wasted time, wasted bandwidth) known or expected?
--
"The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation)
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
You should take the words in your own email signature into account before using words like "gross" and "wasted time".
Try to show some understanding toward the DNF maintainers.
Johnny Robeson composed on 2015-07-30 03:25 (UTC-0400):
You should take the words in your own email signature into account before using words like "gross" and "wasted time".
What words would you like better?
s/gross/huge/ s/wasted time/lost time/
Different words, same meaning, exact same problem.
Try to show some understanding toward the DNF maintainers.
Does the time spent by people testing maintainers' work not count for anything? Not everyone has unlimited bandwidth or time for testing.
----- Original Message -----
From: "Felix Miata" mrmazda@earthlink.net To: devel@lists.fedoraproject.org Sent: Thursday, July 30, 2015 9:21:19 AM Subject: gross DNF bandwidth inefficiency if filesystem space limited
Is the above deletion of freshly downloaded packages (wasted time, wasted bandwidth) known or expected?
Known, https://bugzilla.redhat.com/show_bug.cgi?id=1220074. Should be fixed in dnf-1.0.2.
Radek Holy wrote:
Known, https://bugzilla.redhat.com/show_bug.cgi?id=1220074. Should be fixed in dnf-1.0.2.
I still don't understand why we don't just enable keepcache by default. Even after a successful update/install, deleting the cached packages is a major data loss because it prevents downgrading to them later, after a broken new update comes out (which also removes the previous update from the mirrors).
Kevin Kofler
On Fri, Jul 31, 2015 at 02:56:48AM +0200, Kevin Kofler wrote:
I still don't understand why we don't just enable keepcache by default. Even after a successful update/install, deleting the cached packages is a major data loss because it prevents downgrading to them later, after a broken new update comes out (which also removes the previous update from the mirrors).
Most people don't downgrade, or do so only very rarely. It can be said that downgrading is an advanced operation, and you can set keepcache=1 if you need it.
Zbyszek
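For anyone who does want that, a minimal sketch of the opt-in, assuming the stock single-section /etc/dnf/dnf.conf layout:

# echo 'keepcache=1' >> /etc/dnf/dnf.conf

With the cache kept, a broken update can be rolled back even after the previous build has left the mirrors (package name and cache path here are hypothetical):

# rpm -Uvh --oldpackage /var/cache/dnf/*/packages/foo-1.0-1.fc22.x86_64.rpm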
On 31.07.2015 at 05:47, Zbigniew Jędrzejewski-Szmek wrote:
Most people don't downgrade, or do so only very rarely. It can be said that downgrading is an advanced operation, and you can set keepcache=1 if you need it.
and then there are the cases where deps are resolved but file conflicts make the transaction check fail, as happened recently with two broken polkit updates
have fun typing "dnf --skip-broken upgrade" (yes, I know DNF lacks --skip-broken ATM) and downloading the other packages again for *zero reason*
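Until --skip-broken lands, one workaround, when you know which package breaks the transaction (polkit in the example above), is to exclude it and upgrade the rest:

# dnf upgrade --exclude='polkit*'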
On 31.07.2015 at 10:46, Reindl Harald wrote:
and then there are the cases where deps are resolved but file conflicts make the transaction check fail, as happened recently with two broken polkit updates
have fun typing "dnf --skip-broken upgrade" (yes, I know DNF lacks --skip-broken ATM) and downloading the other packages again for *zero reason*
BTW: all the akmods troubles on the rpmfusion list come from the "dnf-makecache.timer" locking the rpmdb at random moments, wasting traffic, and I find it somewhat perverse that packages are not cached by default while, on the other hand, a *completely needless* cache job is running for the same piece of software
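For what it's worth, the timer is an ordinary systemd unit and can be switched off per host; this stops only the background metadata refresh, not dnf itself:

# systemctl stop dnf-makecache.timer
# systemctl disable dnf-makecache.timer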
On Fri, Jul 31, 2015 at 10:46:16AM +0200, Reindl Harald wrote:
and then there are the cases where deps are resolved but file conflicts make the transaction check fail, as happened recently with two broken polkit updates
have fun typing "dnf --skip-broken upgrade" (yes, I know DNF lacks --skip-broken ATM) and downloading the other packages again for *zero reason*
That behaviour changed in dnf 1.0.2. So let's stop flogging this particular horse.
Zbyszek
----- Original Message -----
From: "Kevin Kofler" kevin.kofler@chello.at To: devel@lists.fedoraproject.org Sent: Friday, July 31, 2015 2:56:48 AM Subject: Re: gross DNF bandwidth inefficiency if filesystem space limited
I still don't understand why we don't just enable keepcache by default. Even after a successful update/install, deleting the cached packages is a major data loss because it prevents downgrading to them later, after a broken new update comes out (which also removes the previous update from the mirrors).
One can say that the mirrors should keep the older versions for this purpose, but I don't want to start a flame war.
AFAIK, the "local" plugin is what you are looking for.
On 31 July 2015 at 17:27, Radek Holy rholy@redhat.com wrote:
One can say that the mirrors should keep the older versions
I would completely agree. Because we can't rely on packages referenced in even day-old metadata still being on the mirrors, PackageKit has to download hundreds of megabytes a month more than it needs to.
Richard.
On Fri, Jul 31, 2015 at 3:14 PM, Richard Hughes hughsient@gmail.com wrote:
I would completely agree. Because we can't rely on packages referenced in even day-old metadata still being on the mirrors, PackageKit has to download hundreds of megabytes a month more than it needs to.
In the RHEL world, EPEL has bitten me really hard this way several times, especially when packages are discarded and no longer present in EPEL. So it's worth thinking about in general for RPM based systems.
On Sat, 1 Aug 2015 07:33:39 -0400 Nico Kadel-Garcia nkadel@gmail.com wrote:
In the RHEL world, EPEL has bitten me really hard this way several times, especially when packages are discarded and no longer present in EPEL. So it's worth thinking about in general for RPM based systems.
So, here are the things to consider:
* Keeping 2 versions of every package will double mirror space. This may result in some mirrors dropping things or no longer bothering to mirror Fedora at all.
* repodata will likewise be 2x (or at least increased a great deal), resulting in a bunch more downloading for everyone, not just the folks who might want to downgrade sometimes.
* There could be some nasty issues with keeping known vulnerable/broken packages around. I.e., foo-1.0 has a severe security bug, foo-1.1 fixes it. You now just need to trick someone into downgrading to or directly installing foo-1.0 (which is in the normal repos and signed and completely valid looking).
But it's not clear exactly what you 3 are proposing (or even whether it's the same thing). :) So, perhaps you could clarify what exactly you want to do?
kevin
Kevin Fenzi wrote:
- There could be some nasty issues with keeping known vulnerable/broken packages around. I.e., foo-1.0 has a severe security bug, foo-1.1 fixes it. You now just need to trick someone into downgrading to or directly installing foo-1.0 (which is in the normal repos and signed and completely valid looking).
But there are plenty of even older packages in the GA repository, also signed with the same key.
Kevin Kofler
On Mon, 03 Aug 2015 05:53:15 +0200 Kevin Kofler kevin.kofler@chello.at wrote:
But there are plenty of even older packages in the GA repository, also signed with the same key.
Sure, but this increases the exposure.
kevin
On 2.8.2015 at 18:15, Kevin Fenzi wrote:
- repodata will likewise be 2x (or at least increased a great deal), resulting in a bunch more downloading for everyone, not just the folks who might want to downgrade sometimes.
This is actually not true. The repodata should contain just the latest version, but if I already have a slightly older version of the metadata downloaded, I would probably be fine with installing the slightly older version of the package.
Vít
On Mon, 3 Aug 2015 17:29:30 +0200 Vít Ondruch vondruch@redhat.com wrote:
This is actually not true.
Well, as I noted in my reply, I wasn't actually sure what was being proposed here.
The repodata should contain just the latest version, but if I already have a slightly older version of the metadata downloaded, I would probably be fine with installing the slightly older version of the package.
So, you are proposing we do things exactly as we are now, but also keep around all previous copies of the packages in the repos (but not in the repodata)?
I'm not sure if that setup would work with dnf. I think it requires whatever mirror(s) it uses to match the metadata. If you have older metadata and the mirror you hit has been updated, I think dnf will say that the repodata doesn't match and try another.
kevin
Kevin Fenzi (kevin@scrye.com) said:
So, you are proposing we do things exactly as we are now, but also keep around all previous copies of the packages in the repos (but not in the repodata)?
I'm not sure if that setup would work with dnf. I think it requires whatever mirror(s) it uses to match the metadata. If you have older metadata and the mirror you hit has been updated, I think dnf will say that the repodata doesn't match and try another.
At some point, it might be worth doing a cost/benefit analysis on continuing down our existing mirroring strategy and designing for its limits vs. applying some sponsor funds toward a more standard CDN service and methodology.
Bill
On Mon, 3 Aug 2015 11:52:01 -0400 Bill Nottingham notting@splat.cc wrote:
At some point, it might be worth doing a cost/benefit analysis on continuing down our existing mirroring strategy and designing for its limits vs. applying some sponsor funds toward a more standard CDN service and methodology.
Yeah.
We looked at one of the existing CDNs a while back, but the way it was set up was not very friendly to the sort of content we have. They were expecting slowly changing static content, whereas we have lots of changes all the time. Things may have changed, or other vendors might be different.
In any case, even if we had a CDN we controlled, it wouldn't change the fact that if you download repodata one day, it may not be good the next. We could fix that, I suppose, by only pushing updates once a week or something, but I bet people wouldn't like that "solution". ;)
kevin
On 3.8.2015 at 17:45, Kevin Fenzi wrote:
So, you are proposing we do things exactly as we are now, but also keep around all previous copies of the packages in the repos (but not in the repodata)?
Keep the previous copies for some period of time, e.g. for one week.
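Mirror-side, that would mean leaving last week's rpms on disk while regenerating the repodata over the newest builds only; createrepo can already do that with an explicit package list (directory and file names here are hypothetical):

# ls latest/*.rpm > pkglist.txt
# createrepo --pkglist pkglist.txt .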
I'm not sure if that setup would work with dnf. I think it requires whatever mirror(s) it uses to match the metadata. If you have older metadata and the mirror you hit has been updated, I think dnf will say that the repodata doesn't match and try another.
I don't think so. At least, I interpret Richard's comment [1] differently. I think that GNOME Software (DNF) fetches the metadata once a day; then, during that day, an update is executed based on the metadata fetched earlier. Unfortunately, in the period between the metadata fetch and the actual update, the packages might no longer be available, since the repository content has changed.
Vít
[1] https://lists.fedoraproject.org/pipermail/devel/2015-July/212998.html
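Related to the window Vít describes: how long dnf itself trusts cached metadata is controlled by metadata_expire, settable per repo or in the [main] section of dnf.conf; a shorter value narrows the window at the cost of more frequent metadata downloads. An illustrative per-repo setting (value in seconds):

[updates]
metadata_expire=21600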