I'd like F11 to be more useful for disk power management. This involves tuning various parameters in order to reduce disk access. There are some tradeoffs involved, so I'd like feedback before pushing much of this.
The first is relatime. I've just pushed Ingo's smarter relatime code towards upstream again. In this configuration atime will only be updated if the current atime is either older than ctime or mtime, or if the current atime is more than a day in the past. The amount of time required before atime is updated will be a tunable, and a norelatime mount parameter will be available to mount filesystems without this behaviour. This shouldn't affect the behaviour of any applications.
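To make the knobs concrete, this is roughly what it looks like at the mount level (the mount points here are just examples, and the name of the proposed interval tunable isn't settled, so it's left out):

  # remount with relatime: atime is only written when it's older than
  # mtime/ctime, or older than the configured interval
  mount -o remount,relatime /
  # opt a filesystem out of the behaviour entirely
  mount -o remount,norelatime /home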
The second is to increase the value of dirty_writeback_centisecs. This will result in dirty data spending more time in memory before being pushed out to disk. This is probably more controversial. The effect of this is that a power interruption will potentially result in more data being lost. It doesn't alter the behaviour of fsync(), so paranoid applications will still get to ensure that their data is on disk. Of course, it would also be helpful to stop applications generating dirty pages where possible. This would obviously be reverted if the system enters a critical power state.
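For concreteness, the sort of change being discussed looks like this (the values are illustrative, not decided defaults):

  # default is 500 (5 seconds); a larger value keeps dirty data in
  # memory longer before background writeback kicks in
  sysctl -w vm.dirty_writeback_centisecs=6000
  # how old dirty data must be before it's eligible for writeback;
  # this would probably want raising alongside it
  sysctl -w vm.dirty_expire_centisecs=6000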
Thirdly, I'd like to enable laptop mode by default. The effect of this is that any access that goes to disk will trigger an opportunistic flushing of dirty data shortly afterwards. To an extent this mitigates the change to dirty_writeback_centisecs, but there's obviously still some increased chance of data loss.
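For reference, laptop mode is a single sysctl; a non-zero value enables it and sets the delay (in seconds) before the opportunistic flush. The value here is just an example:

  # 0 disables laptop mode; here, flush dirty data ~5 seconds after a
  # disk access has forced the disk to be active anyway
  echo 5 > /proc/sys/vm/laptop_mode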
The combination of these features should result in (on average) fewer disk accesses and so (on average) should provide better performance. There's a chance that some usage patterns will fall foul of this and lose performance, so if we do this I'd like to do it sufficiently early in the cycle that we can get real-world feedback.
Any thoughts?
Matthew Garrett wrote:
I'd like F11 to be more useful for disk power management. This involves tuning various parameters in order to reduce disk access. There are some tradeoffs involved, so I'd like feedback before pushing much of this.
The first is relatime. I've just pushed Ingo's smarter relatime code towards upstream again. In this configuration atime will only be updated if the current atime is either older than ctime or mtime, or if the current atime is more than a day in the past. The amount of time required before atime is updated will be a tunable, and a norelatime mount parameter will be available to mount filesystems without this behaviour. This shouldn't affect the behaviour of any applications.
I could be convinced of this, I think, although there were a few nagging bugs w/ older Fedoras that seemed related to this change, and I honestly never got to the bottom of them. But by and large Fedora already ran this way w/ few problems in the past.
The second is to increase the value of dirty_writeback_centisecs. This will result in dirty data spending more time in memory before being pushed out to disk. This is probably more controversial. The effect of this is that a power interruption will potentially result in more data being lost. It doesn't alter the behaviour of fsync(), so paranoid
s/paranoid/proper/ :)
applications will still get to ensure that their data is on disk. Of course, it would also be helpful to stop applications generating dirty pages where possible. This would obviously be reverted if the system enters a critical power state.
Thirdly, I'd like to enable laptop mode by default. The effect of this is that any access that goes to disk will trigger an opportunistic flushing of dirty data shortly afterwards. To an extent this mitigates the change to dirty_writeback_centisecs, but there's obviously still some increased chance of data loss.
I'll need to ponder these changes a bit more (and take another look at laptop mode, it's been a while).
The combination of these features should result in (on average) fewer disk accesses and so (on average) should provide better performance.
What are your plans to measure the results of these changes from power & performance perspectives? Also, tools to monitor what is causing disk accesses might be good (see also Bug 454582 - Tracker bug for over-eager apps that won't let disks spin down).
Do you also have any plans for changing default disk spin-down times, or would that be left to BIOS settings? If so, we should probably monitor how this jibes with the expected lifetime of a disk vs. its rated number of spindown cycles.
The original laptop mode kit included specific knowledge about some filesystem tuning parameters (commit times etc), is that part of your plan? Which filesystems will be recognized?
Thanks, -Eric
There's a chance that some usage patterns will fall foul of this and lose performance, so if we do this I'd like to do it sufficiently early in the cycle that we can get real-world feedback.
Any thoughts?
On Thu, Nov 27, 2008 at 09:59:08AM -0600, Eric Sandeen wrote:
What are your plans to measure the results of these changes from power & performance perspectives? Also, tools to monitor what is causing disk accesses might be good (see also Bug 454582 - Tracker bug for over-eager apps that won't let disks spin down).
Power-wise, I have measuring equipment here. Performance is obviously harder - I suspect synthetic benchmarks will get much the same performance as usual, so that might be down to waiting to see if users complain.
It would be nice to have the kernel export disk access via a socket or something rather than the currently available debug option, which is to dump to dmesg, which then triggers log writes, which trigger more messages, and fail occurs. I had a handwavy patch to do that, but we probably want a good way of exposing that information to userspace.
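For anyone who wants to poke at the current debug option in the meantime, it's the vm block_dump sysctl, roughly:

  # log block reads/writes and dirty-inode activity to the kernel ring
  # buffer; beware the feedback loop above if the resulting dmesg
  # output ends up being logged back to disk
  echo 1 > /proc/sys/vm/block_dump
  dmesg | tail
  echo 0 > /proc/sys/vm/block_dump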
Do you also have any plans for changing default disk spin-down times, or would that be left to BIOS settings? If so, we should probably monitor how this jibes with the expected lifetime of a disk vs. its rated number of spindown cycles.
Yes, the long-term plan involves allowing drive spindown. I'm hoping to do this adaptively to let us avoid the spinup/spindown tendencies a static timeout provides, but you're right that monitoring SMART information would be a pretty important part of that. I lean towards offering it on desktops and servers, but not enabled by default.
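A rough sketch of the pieces involved, for the curious (device and timeout are just examples):

  # ask the drive to spin down after ten minutes of idle time
  hdparm -S 120 /dev/sda
  # watch the wear counters the drive reports, so we notice if we're
  # cycling it too aggressively
  smartctl -A /dev/sda | grep -Ei 'start_stop|load_cycle'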
The original laptop mode kit included specific knowledge about some filesystem tuning parameters (commit times etc), is that part of your plan? Which filesystems will be recognized?
Mm. My recollection is that ext3 and xfs had easy-to-access tuning to help in this respect. Changing the kernel defaults would be one option there, or alternatively we could update fstab?
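For example (values illustrative, and the xfs knob is from memory):

  # ext3: lengthen the journal commit interval from the default 5 seconds
  mount -o remount,commit=600 /
  # xfs: the rough equivalent lives under fs.xfs
  sysctl -w fs.xfs.xfssyncd_centisecs=60000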
Matthew Garrett wrote:
On Thu, Nov 27, 2008 at 09:59:08AM -0600, Eric Sandeen wrote:
What are your plans to measure the results of these changes from power & performance perspectives? Also, tools to monitor what is causing disk accesses might be good (see also Bug 454582 - Tracker bug for over-eager apps that won't let disks spin down).
Power-wise, I have measuring equipment here. Performance is obviously harder - I suspect synthetic benchmarks will get much the same performance as usual, so that might be down to waiting to see if users complain.
It would be nice to have the kernel export disk access via a socket or something rather than the currently available debug option, which is to dump to dmesg, which then triggers log writes, which trigger more messages, and fail occurs. I had a handwavy patch to do that, but we probably want a good way of exposing that information to userspace.
Yeah. You can tune things so that the block_dump stuff doesn't go to /var/log/messages, but I'd played tricks in the past with saving to ramdisks etc. for this reason. :)
It'd also be nice if we could reliably query drives for their state, but in the past the query itself has spun up some of my drives. :)
Do you also have any plans for changing default disk spin-down times, or would that be left to BIOS settings? If so, we should probably monitor how this jibes with the expected lifetime of a disk vs. its rated number of spindown cycles.
Yes, the long-term plan involves allowing drive spindown. I'm hoping to do this adaptively to let us avoid the spinup/spindown tendencies a static timeout provides, but you're right that monitoring SMART information would be a pretty important part of that. I lean towards offering it on desktops and servers, but not enabled by default.
Sounds good. We don't want a "Fedora kills hard drives!" thread. :)
The original laptop mode kit included specific knowledge about some filesystem tuning parameters (commit times etc), is that part of your plan? Which filesystems will be recognized?
Mm. My recollection is that ext3 and xfs had easy-to-access tuning to help in this respect. Changing the kernel defaults would be one option there, or alternatively we could update fstab?
Yep, they do. xfs even has a bit of code specifically to work w/ laptop mode. From a cursory glance, the current laptop-mode tools do handle ext3 & xfs. We should probably make sure that ext4 is properly handled too.
-Eric
On Thu, 2008-11-27 at 16:44 +0000, Matthew Garrett wrote:
It would be nice to have the kernel export disk access via a socket or something rather than the currently available debug option, which is to dump to dmesg, which then triggers log writes, which trigger more messages, and fail occurs.
Does blktrace trigger log writes?
On Fri, Nov 28, 2008 at 12:33:53PM +0000, David Woodhouse wrote:
On Thu, 2008-11-27 at 16:44 +0000, Matthew Garrett wrote:
It would be nice to have the kernel export disk access via a socket or something rather than the currently available debug option, which is to dump to dmesg, which then triggers log writes, which trigger more messages, and fail occurs.
Does blktrace trigger log writes?
I was thinking of the vm block_dump interface - I'd never noticed blktrace, which looks pretty much ideal. Now we just need a user-friendly front-end.
Matthew Garrett mjg@redhat.com writes:
On Fri, Nov 28, 2008 at 12:33:53PM +0000, David Woodhouse wrote:
On Thu, 2008-11-27 at 16:44 +0000, Matthew Garrett wrote:
It would be nice to have the kernel export disk access via a socket or something rather than the currently available debug option, which is to dump to dmesg, which then triggers log writes, which trigger more messages, and fail occurs.
Does blktrace trigger log writes?
I was thinking of the vm block_dump interface - I'd never noticed blktrace, which looks pretty much ideal. Now we just need a user-friendly front-end.
Define user-friendly. There is blkparse, which will format things for you.
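Something like this is about as simple as it gets (sda is just an example):

  # trace a device and feed the binary events straight into blkparse
  blktrace -d /dev/sda -o - | blkparse -i -
  # (the btrace wrapper script shipped with blktrace does the same thing)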
Cheers, Jeff
David Woodhouse wrote:
On Thu, 2008-11-27 at 16:44 +0000, Matthew Garrett wrote:
It would be nice to have the kernel export disk access via a socket or something rather than the currently available debug option, which is to dump to dmesg, which then triggers log writes, which trigger more messages, and fail occurs.
Does blktrace trigger log writes?
It shouldn't... but then it doesn't tell you what file was written to, either (just blocks), because it's working lower down the stack. It does at least give you a process name, though.
One of the things I've found fairly often is applications which re-write the same little log or config file over and over, with unchanged contents. It'd be nice if any disktop-like thing could check for that. But that would require knowledge of what files were being accessed.
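As a very rough sketch of the file-level side (this assumes inotify-tools is installed, and the log path is purely hypothetical):

  # report writes to a file that leave its contents unchanged
  f=~/.xchat2/xchatlogs/example.log
  old=$(md5sum < "$f")
  inotifywait -m -q -e close_write "$f" | while read -r _; do
      new=$(md5sum < "$f")
      [ "$new" = "$old" ] && echo "rewritten with unchanged contents: $f"
      old="$new"
  done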
-Eric
Matthew Garrett wrote:
I'd like F11 to be more useful for disk power management. This involves tuning various parameters in order to reduce disk access.
The first is relatime. The second is to increase the value of dirty_writeback_centisecs. Thirdly, I'd like to enable laptop mode by default.
Does the kernel export whether a device is an SSD? If so, it would allow us to do a different combination of the above.
Generally, userspace should do lots of things differently/more efficiently if it knows a device is an SSD.
Pádraig.
On Thu, Nov 27, 2008 at 04:16:03PM +0000, Pádraig Brady wrote:
Does the kernel export whether a device is an SSD? If so, it would allow us to do a different combination of the above.
It can do, but I'd need to check if it does. You're right that the tradeoffs are different there, so I'd want to spend a while looking at that.
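If memory serves, newer kernels expose a per-queue rotational flag, which would be the obvious thing to key off (sda is just an example):

  # 0 means the kernel believes the device is non-rotational (i.e. an SSD)
  cat /sys/block/sda/queue/rotational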
On Thu, 2008-11-27 at 15:33 +0000, Matthew Garrett wrote:
I'd like F11 to be more useful for disk power management.
Cool!
The first is relatime. I've just pushed Ingo's smarter relatime code towards upstream again.
Go for it!
Thirdly, I'd like to enable laptop mode by default.
Are we talking about writing "5" into /proc/sys/vm/laptop_mode even for desktops, servers, and laptops on mains power?
IIRC gnome-power-manager automatically enables this when the system goes to run from battery, so I reckon the idea is to make laptop mode a generic power-conservative mode. Should that then be reflected in the mainline kernel somehow, by renaming said interface to ecosystem_mode or something?
Is this really good for datacenter machines like database servers and such, if we make it default?
Any thoughts?
Great initiative!
So you're mainly planning kernel-level features across the whole range of use cases, rather than working at the level of userland tools like gnome-power-manager, which we currently rely on quite extensively to manage power for desktops and laptops?
Linus
On Thu, Nov 27, 2008 at 09:36:17PM +0100, Linus Walleij wrote:
So you're mainly planning kernel-level features across the whole range of use cases, rather than working at the level of userland tools like gnome-power-manager, which we currently rely on quite extensively to manage power for desktops and laptops?
I'm looking at making sure that we have sane defaults from a power management perspective. Once that's done we can make sure that userland can reconfigure stuff sensibly in response to various events, but I'm hoping that for the most part we can find a solution that works without needing that.
On 27.11.2008 16:33, Matthew Garrett wrote:
[...] Of course, it would also be helpful to stop applications generating dirty pages where possible. This would obviously be reverted if the system enters a critical power state. [...] Any thoughts?
Is there a "disktop" we could use to fine those app that generating dirty pages without need? E.g. something as easy to use as powertop, just for disks? If not I'd tend to say it might make sense to create one, as finding and reporting the culprits that spin up the disks might help this whole effort a lot.
CU knurd
On Fri, 2008-11-28 at 06:49 +0100, Thorsten Leemhuis wrote:
On 27.11.2008 16:33, Matthew Garrett wrote:
[...] Of course, it would also be helpful to stop applications generating dirty pages where possible. This would obviously be reverted if the system enters a critical power state. [...] Any thoughts?
Is there a "disktop" we could use to fine those app that generating dirty pages without need? E.g. something as easy to use as powertop, just for disks? If not I'd tend to say it might make sense to create one, as finding and reporting the culprits that spin up the disks might help this whole effort a lot.
There is something called iotop.
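For example (options from memory):

  # show only processes currently doing I/O, refreshing every 5 seconds
  iotop -o -d 5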
On 28.11.2008 06:54, Matthias Clasen wrote:
On Fri, 2008-11-28 at 06:49 +0100, Thorsten Leemhuis wrote:
On 27.11.2008 16:33, Matthew Garrett wrote:
[...] Of course, it would also be helpful to stop applications generating dirty pages where possible. This would obviously be reverted if the system enters a critical power state. [...] Any thoughts?
Is there a "disktop" we could use to fine those app that generating dirty pages without need? E.g. something as easy to use as powertop, just for disks? If not I'd tend to say it might make sense to create one, as finding and reporting the culprits that spin up the disks might help this whole effort a lot.
There is something called iotop.
I wouldn't call iotop as easy to use as powertop. And the latter, afaics, was such a success mainly because it was so easy to use.
CU knurd
Ralf Ertzinger wrote:
Hi.
On Fri, 28 Nov 2008 07:01:37 +0100, Thorsten Leemhuis wrote:
I wouldn't call iotop as easy to use as powertop. And the latter, afaics, was such a success mainly because it was so easy to use.
And because it had those nice 'press X to turn off Y' hints.
I've been looking at the existing SystemTap scripts for that lately, and most of them weren't aimed at that, from what I could tell. I'm probably going to rework one of them to basically do for disk I/O what powertop does, as I'm less interested in kB/sec of reads and writes than in read/write operations per minute per process.
Regards, Phil
On Fri, Nov 28, 2008 at 06:49:55AM +0100, Thorsten Leemhuis wrote:
Is there a "disktop" we could use to fine those app that generating dirty pages without need? E.g. something as easy to use as powertop, just for disks? If not I'd tend to say it might make sense to create one, as finding and reporting the culprits that spin up the disks might help this whole effort a lot.
blktrace produces the information, but I wouldn't currently call it "friendly".
Matthew Garrett wrote:
I'd like F11 to be more useful for disk power management. This involves tuning various parameters in order to reduce disk access. There are some tradeoffs involved, so I'd like feedback before pushing much of this.
Great idea, I'm working quite a bit in that direction at the moment as well.
The first is relatime. I've just pushed Ingo's smarter relatime code towards upstream again. In this configuration atime will only be updated if the current atime is either older than ctime or mtime, or if the current atime is more than a day in the past. The amount of time required before atime is updated will be a tunable, and a norelatime mount parameter will be available to mount filesystems without this behaviour. This shouldn't affect the behaviour of any applications.
+1000. I've done a few tests here with my systems, and although the really bad offenders for disk I/O are of course processes that write, there are quite a few cases where a read access then has to update the atime and so results in a write to the disk. :/
The second is to increase the value of dirty_writeback_centisecs. This will result in dirty data spending more time in memory before being pushed out to disk. This is probably more controversial. The effect of this is that a power interruption will potentially result in more data being lost. It doesn't alter the behaviour of fsync(), so paranoid applications will still get to ensure that their data is on disk. Of course, it would also be helpful to stop applications generating dirty pages where possible. This would obviously be reverted if the system enters a critical power state.
Sounds like a good idea. It's also something I've been looking at a bit. Take e.g. something like xchat: if you enable logging there, you basically keep your disk active all the time, as xchat itself doesn't use a large internal buffer and write the data out every X MB or so.
Thirdly, I'd like to enable laptop mode by default. The effect of this is that any access that goes to disk will trigger an opportunistic flushing of dirty data shortly afterwards. To an extent this mitigates the change to dirty_writeback_centisecs, but there's obviously still some increased chance of data loss.
Agreed as well, though I'm not sure we should enable it by default, as the risk of data loss in case of a system failure is then quite high.
The combination of these features should result in (on average) fewer disk accesses and so (on average) should provide better performance. There's a chance that some usage patterns will fall foul of this and lose performance, so if we do this I'd like to do it sufficiently early in the cycle that we can get real-world feedback.
Sounds like a great idea and something I've been working on myself lately, too. I even went a bit further and wondered whether a combination of a monitoring backend and a tuning engine could provide automatic adaptation of the system to its current use. E.g. during the daytime, when a user is working with his machine, we would typically see quite a few reads and writes all the time; drive spindowns and other power-saving features could be reduced during that time so that the user gets the best performance. During the night (in case he didn't turn off the machine) only very few read and even fewer write operations should happen, at which point the disk could be powered down most of the time. And of course this can be extended not only to disk drives but to other tunable hardware components.
Regards, Phil
Hi.
On Fri, 28 Nov 2008 13:16:36 +0100, Phil Knirsch wrote:
Sounds like a good idea. It's also something I've been looking at a bit. Take e.g. something like xchat: if you enable logging there, you basically keep your disk active all the time, as xchat itself doesn't use a large internal buffer and write the data out every X MB or so.
And why should it, honestly? Buffering data is the OS's job 99% of the time. As long as xchat does not use fsync() after each write we should be good.
Ralf Ertzinger wrote:
Hi.
On Fri, 28 Nov 2008 13:16:36 +0100, Phil Knirsch wrote:
Sounds like a good idea. It's also something I've been looking at a bit. Take e.g. something like xchat: if you enable logging there, you basically keep your disk active all the time, as xchat itself doesn't use a large internal buffer and write the data out every X MB or so.
And why should it, honestly? Buffering data is the OS's job 99% of the time. As long as xchat does not use fsync() after each write we should be good.
Maybe I wasn't clear enough. But take for example the difference between xchat and, say, syslog. I'd be really unhappy if I lost an hour of syslog data in the event of a system crash, but I couldn't care less if I lost an hour of xchat logs during that time. So in that case it is application-specific, in a way, and the kernel can't (and shouldn't) semantically know how important the data you write with it is. Currently you can either do a write() and let the data be flushed to disk automatically by the kernel every dirty_writeback_centisecs, or call fsync() after your write to make sure the data is written immediately. So the only way an application can currently "delay" those writes is by using internal buffers that fill up and get written once they are full. And the difference between one write every minute due to dirty_writeback_centisecs and one write every hour because the buffer takes that long to fill is quite large, imo.
Regards, Phil
Hi.
On Fri, 28 Nov 2008 14:55:43 +0100, Phil Knirsch wrote:
Maybe I wasn't clear enough. But take for example the difference between xchat and, say, syslog. I'd be really unhappy if I lost an hour of syslog data in the event of a system crash, but I couldn't care less if I lost an hour of xchat logs during that time. So in that case it is application-specific, in a way, and the kernel can't (and shouldn't) semantically know how important the data you write with it is.
Well, it does, in a way. If you absolutely want to have your data on the disk you have to call fsync(). If you do not, you're at the mercy of, well, whatever governs data storage these days. But that has always been the case.
If the kernel default is to flush un-fsync()'d data only every hour then, well, that's that. Nobody ever guaranteed writes every few seconds.
Retrofitting buffering into every application (or into glibc) does not strike me as an elegant solution. Besides, it wastes memory.
On Fri, 2008-11-28 at 14:55 +0100, Phil Knirsch wrote:
Ralf Ertzinger wrote:
Hi.
On Fri, 28 Nov 2008 13:16:36 +0100, Phil Knirsch wrote:
Sounds like a good idea. It's also something I've been looking at a bit. Take e.g. something like xchat: if you enable logging there, you basically keep your disk active all the time, as xchat itself doesn't use a large internal buffer and write the data out every X MB or so.
And why should it, honestly? Buffering data is the OS's job 99% of the time. As long as xchat does not use fsync() after each write we should be good.
Maybe I wasn't clear enough. But take for example the difference between xchat and, say, syslog. I'd be really unhappy if I lost an hour of syslog data in the event of a system crash, but I couldn't care less if I lost an hour of xchat logs during that time. So in that case it is application-specific, in a way, and the kernel can't (and shouldn't) semantically know how important the data you write with it is. Currently you can either do a write() and let the data be flushed to disk automatically by the kernel every dirty_writeback_centisecs, or call fsync() after your write to make sure the data is written immediately. So the only way an application can currently "delay" those writes is by using internal buffers that fill up and get written once they are full. And the difference between one write every minute due to dirty_writeback_centisecs and one write every hour because the buffer takes that long to fill is quite large, imo.
Oh, but the kernel does know: the write() semantics clearly state that if you want to make sure data is on disk you call fsync(). If you don't call fsync(), the kernel can delay flushing it to disk indefinitely. So the kernel has a very well-defined way of knowing what apps want.
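An easy way to see this from userspace (file name and sizes are arbitrary):

  # plain write: the data shows up as Dirty in memory, not yet on disk
  dd if=/dev/zero of=/tmp/testfile bs=1M count=10
  grep -E '^(Dirty|Writeback):' /proc/meminfo
  # same write, but dd fsync()s before exiting, so it only returns once
  # the data has actually hit the disk
  dd if=/dev/zero of=/tmp/testfile bs=1M count=10 conv=fsync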
Simo.
On 11/28/2008 06:55 AM, Phil Knirsch wrote:
Ralf Ertzinger wrote:
Hi.
On Fri, 28 Nov 2008 13:16:36 +0100, Phil Knirsch wrote:
Sounds like a good idea. It's also something I've been looking at a bit. Take e.g. something like xchat: if you enable logging there, you basically keep your disk active all the time, as xchat itself doesn't use a large internal buffer and write the data out every X MB or so.
And why should it, honestly? Buffering data is the OS's job 99% of the time. As long as xchat does not use fsync() after each write we should be good.
Maybe I wasn't clear enough. But take for example the difference between xchat and, say, syslog. I'd be really unhappy if I lost an hour of syslog data in the event of a system crash, but I couldn't care less if I lost an hour of xchat logs during that time.
Depends on the point of view -- take Joe the User, who is carrying on an "Important Conversation" of which he wants to have a log. He cares a lot if he loses the last hour of his xchat (or whatever he uses) logs. He quite likely doesn't care about the last hour of syslog messages (he may not even be aware they exist in the first place)...
Regards, Dariusz
On Fri, Nov 28, 2008 at 01:16:36PM +0100, Phil Knirsch wrote:
Sounds like a great idea and something I've been working on myself lately, too. I even went a bit further and wondered whether a combination of a monitoring backend and a tuning engine could provide automatic adaptation of the system to its current use. E.g. during the daytime, when a user is working with his machine, we would typically see quite a few reads and writes all the time; drive spindowns and other power-saving features could be reduced during that time so that the user gets the best performance. During the night (in case he didn't turn off the machine) only very few read and even fewer write operations should happen, at which point the disk could be powered down most of the time. And of course this can be extended not only to disk drives but to other tunable hardware components.
Indeed. There's been a fair amount of research into this - see http://www.soe.ucsc.edu/~tbisson/papers/bisson-fast04.pdf for instance. Some laptop drives will behave this way if APM settings are set appropriately, but a decent implementation of this in userspace would be very nice.
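For the drives that do honour it, that's the APM setting (value from memory; levels of 127 and below allow the drive to spin down on its own):

  # fairly aggressive power management that permits autonomous spindown
  hdparm -B 127 /dev/sda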
Matthew Garrett (mjg@redhat.com) said:
The first is relatime. I've just pushed Ingo's smarter relatime code towards upstream again. In this configuration atime will only be updated if the current atime is either older than ctime or mtime, or if the current atime is more than a day in the past. The amount of time required before atime is updated will be a tunable, and a norelatime mount parameter will be available to mount filesystems without this behaviour. This shouldn't affect the behaviour of any applications.
Works for me.
The second is to increase the value of dirty_writeback_centisecs. This will result in dirty data spending more time in memory before being pushed out to disk. This is probably more controversial. The effect of this is that a power interruption will potentially result in more data being lost. It doesn't alter the behaviour of fsync(), so paranoid applications will still get to ensure that their data is on disk. Of course, it would also be helpful to stop applications generating dirty pages where possible. This would obviously be reverted if the system enters a critical power state.
Thirdly, I'd like to enable laptop mode by default. The effect of this is that any access that goes to disk will trigger an opportunistic flushing of dirty data shortly afterwards. To an extent this mitigates the change to dirty_writeback_centisecs, but there's obviously still some increased chance of data loss.
I'd be curious how this affects various workloads if we're changing the global defaults. Were you planning on flipping the kernel defaults, or just setting a default in sysctl.conf? (It occurs to me that laptop_mode is horribly named.)
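If it's the sysctl.conf route, the fragment would look something like this (values purely illustrative):

  # /etc/sysctl.conf
  vm.laptop_mode = 5
  vm.dirty_writeback_centisecs = 6000
  vm.dirty_expire_centisecs = 6000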
Bill