Hi all -
Just my semiannual plea for some ext4 testing in the Fedora beta cycle.
ext4 was a feature goal for F9, and it came close; in the end it was relegated to a secret anaconda handshake - typing "iamanext4developer" at the boot prompt - to enable it for install, largely due to lack of ext4 support in e2fsprogs.
With the e2fsprogs-1.41.0 release in F10, we now have an ext4-capable e2fsprogs, with working fsck, debugfs, etc as well as mkfs.ext4 and mkfs.ext4dev to enable the new disk format features by default.
For F10, the barrier to entry has been lowered by 14 characters - now all you have to type at the boot prompt is "ext4" :) and when you go to the custom partitioning screen, you'll get the option to create ext4 filesystems at install time.
I'd appreciate any and all testing, benchmarking & feedback that people would be willing to do. Just getting more exposure in real-life scenarios would be great.
As with any filesystem, I wouldn't put your only copy of your most precious data on it - use good sense about backups etc - but ext4 has made good progress since F9 on both stability and performance, so have at it!
Thanks,
-Eric
Do you have recommended FS creation parameters for SSDs?
Peter
Peter Robinson wrote:
Do you have recommended FS creation parameters for SSDs?
Not really; there has unfortunately been very little (or no) optimization of ext4 for SSDs at this point ...
It'd probably be an allocator heuristic change but nobody's looked into that yet.
Once we get ext4 raid-geometry-aware, we can probably use some of that geometry info to better match up with the erase block sizes on an SSD at least...
-Eric
Eric Sandeen wrote:
Peter Robinson wrote:
Do you have recommended FS creation parameters for SSDs?
Not really; there has unfortunately been very little (or no) optimization of ext4 for SSDs at this point ...
It'd probably be an allocator heuristic change but nobody's looked into that yet.
Once we get ext4 raid-geometry-aware, we can probably use some of that geometry info to better match up with the erase block sizes on an SSD at least...
In my testing (without a filesystem), raid-optimized access works quite well on SSDs, so that should carry over quite effortlessly. The places where we need work are:
a) Make jbd do fewer small writes.
b) Write a partitioning tool that doesn't suck the way fdisk and parted do, so we can partition properly for the geometry of modern storage.
c) Put an alignment option in the installer, so people can optimize their partitions and filesystems for SSDs, RAID, and anything else where alignment matters.
-- Chris
Chris Snook wrote:
Eric Sandeen wrote:
Peter Robinson wrote:
Do you have recommended FS creation parameters for SSDs?
Not really; there has unfortunately been very little (or no) optimization of ext4 for SSDs at this point ...
It'd probably be an allocator heuristic change but nobody's looked into that yet.
Once we get ext4 raid-geometry-aware, we can probably use some of that geometry info to better match up with the erase block sizes on an SSD at least...
In my testing (without a filesystem), raid-optimized access works quite well on SSDs, so that should carry over quite effortlessly. The places where we need work are:
a) Make jbd do fewer small writes.
b) Write a partitioning tool that doesn't suck the way fdisk and parted do, so we can partition properly for the geometry of modern storage.
parted and fdisk can do 512-byte sector granularity; what do you need here?
c) Put an alignment option in the installer, so people can optimize their partitions and filesystems for SSDs, RAID, and anything else where alignment matters.
mkfs should be doing this, not the installer. mkfs.xfs does; mkfs.ext$FOO should too.
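For illustration, here's roughly what picking that up could look like for md, which already exports its geometry through sysfs. Just a sketch, not e2fsprogs code - the sysfs paths, the 4k block-size assumption, and the raid5 one-parity-disk arithmetic are all mine:

#include <stdio.h>

/*
 * Sketch: derive mke2fs -E stride/stripe-width values from md sysfs
 * geometry.  Assumes /sys/block/<dev>/md/ exports chunk_size (bytes)
 * and raid_disks, and that the array is raid5 (one parity disk).
 */
static long read_long(const char *path)
{
	long val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	long chunk_bytes = read_long("/sys/block/md0/md/chunk_size");
	long raid_disks  = read_long("/sys/block/md0/md/raid_disks");
	long fs_block    = 4096;	/* assume ext4's common 4k block size */
	long stride, stripe_width;

	if (chunk_bytes <= 0 || raid_disks <= 1) {
		fprintf(stderr, "no usable md geometry found\n");
		return 1;
	}

	stride = chunk_bytes / fs_block;
	stripe_width = stride * (raid_disks - 1);	/* raid5: 1 parity disk */

	printf("mke2fs -E stride=%ld,stripe-width=%ld /dev/md0\n",
	       stride, stripe_width);
	return 0;
}

raid6 would subtract two parity disks, raid0 none; a hardware lun that exports nothing is exactly the case where you're stuck with manual knobs.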
-Eric
Eric Sandeen wrote:
Chris Snook wrote:
c) Put an alignment option in the installer, so people can optimize their partitions and filesystems for SSDs, RAID, and anything else where alignment matters.
mkfs should be doing this, not the installer. mkfs.xfs does; mkfs.ext$FOO should too.
-Eric
I should qualify that: mkfs *can* do this if it's on storage that can be queried (md, lvm, etc.) - but not some random hardware scsi lun... for that case, perhaps installer tweaks would help, but how often do you really install onto those things at anaconda-time?
-Eric
On Wed, 2008-09-03 at 20:06 -0500, Eric Sandeen wrote:
Eric Sandeen wrote:
Chris Snook wrote:
c) Put an alignment option in the installer, so people can optimize their partitions and filesystems for SSDs, RAID, and anything else where alignment matters.
mkfs should be doing this, not the installer. mkfs.xfs does; mkfs.ext$FOO should too.
I should qualify that: mkfs *can* do this if it's on storage that can be queried (md, lvm, etc.) - but not some random hardware scsi lun... for that case, perhaps installer tweaks would help, but how often do you really install onto those things at anaconda-time?
mkfs should be able to query scsi info about a block device just as well as anaconda... Arguably it could be done in anaconda first just to show that it works nicely (as python is easier to hack on ;-)
But it definitely should be "hardware is seen as blah, do the right thing" as opposed to "make the user figure it out".
Also, there are some additional complications once you start thinking about things like the live install where we just dd over the filesystem and then resize rather than doing a whole new one.
Jeremy
Jeremy Katz wrote:
On Wed, 2008-09-03 at 20:06 -0500, Eric Sandeen wrote:
Eric Sandeen wrote:
Chris Snook wrote:
c) Put an alignment option in the installer, so people can optimize their partitions and filesystems for SSDs, RAID, and anything else where alignment matters.
mkfs should be doing this, not the installer. mkfs.xfs does; mkfs.ext$FOO should too.
I should qualify that: mkfs *can* do this if it's on storage that can be queried (md, lvm, etc.) - but not some random hardware scsi lun... for that case, perhaps installer tweaks would help, but how often do you really install onto those things at anaconda-time?
mkfs should be able to query scsi info about a block device just as well as anaconda... Arguably it could be done in anaconda first just to show that it works nicely (as python is easier to hack on ;-)
Well, I meant that in some cases you simply cannot query the device; then, *if* you wish to make the fs via anaconda, you'd need some knobs to twiddle manually...
But it definitely should be "hardware is seen as blah, do the right thing" as opposed to "make the user figure it out".
Also, there are some additional complications once you start thinking about things like the live install where we just dd over the filesystem and then resize rather than doing a whole new one.
In that case you really are probably not too worried about fs geometry and performance, I think.
-Eric
On Wed, Sep 03, 2008 at 11:19:04AM -0500, Eric Sandeen wrote:
I'd appreciate any and all testing, benchmarking & feedback that people would be willing to do. Just getting more exposure in real-life scenarios would be great.
As with any filesystem, I wouldn't put your only copy of your most precious data on it - use good sense about backups etc - but ext4 has made good progress since F9 on both stability and performance, so have at it!
Persistent pre-allocation[1] is something that virt-manager could really use when it has to allocate multi-gigabyte images. A few questions about this feature though:
(a) Is it exposed as a syscall anywhere? I don't see it in the header files of my Rawhide system (2.6.27).
(b) Will preallocate "do the right thing" on filesystems that don't directly support it?
(c) Does ext4 preallocate in the background? A synchronous preallocate call isn't much use to virt-manager.
Rich.
[1] http://en.wikipedia.org/wiki/Ext4#Persistent_pre-allocation
Richard W.M. Jones wrote:
On Wed, Sep 03, 2008 at 11:19:04AM -0500, Eric Sandeen wrote:
I'd appreciate any and all testing, benchmarking & feedback that people would be willing to do. Just getting more exposure in real-life scenarios would be great.
As with any filesystem, I wouldn't put your only copy of your most precious data on it - use good sense about backups etc - but ext4 has made good progress since F9 on both stability and performance, so have at it!
Persistent pre-allocation[1] is something that virt-manager could really use when it has to allocate multi-gigabyte images. A few questions about this feature though:
(a) Is it exposed as a syscall anywhere? I don't see it in the header files of my Rawhide system (2.6.27).
Hm, I probably need to get the fallocate.h header file included, if it's not, so that sys_fallocate can be used directly - but it is also exposed via posix_fallocate in glibc. Tested here on xfs, just because xfs_bmap is a handy way to show that it actually works via glibc:
[root@inode fallocate]# ./test_posix_fallocate testfile 0 16384
[root@inode fallocate]# xfs_bmap -vv testfile
testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
   0: [0..31]:         138928..138959    0 (138928..138959)    32 10000
 FLAG Values:
    010000 Unwritten preallocated extent
    001000 Doesn't begin on stripe unit
    000100 Doesn't end on stripe unit
    000010 Doesn't begin on stripe width
    000001 Doesn't end on stripe width
The ->fallocate op is hooked up for ext4, xfs, and ocfs2 at this time.
(b) Will preallocate "do the right thing" on filesystems that don't directly support it?
Calling sys_fallocate() will give you -EOPNOTSUPP; using posix_fallocate() falls back to writing zeros, IIRC.
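For illustration, a minimal sketch of that fallback logic - the raw syscall form here assumes SYS_fallocate is in your kernel headers and is x86_64-flavored (32-bit ABIs pass the loff_t arguments differently), and do_fallocate() is just a name I made up:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/*
 * Try sys_fallocate() directly; if the fs has no ->fallocate op the
 * kernel returns EOPNOTSUPP, and we fall back to posix_fallocate(),
 * which glibc implements by writing zeros in that case.
 */
static int do_fallocate(int fd, off_t offset, off_t len)
{
	if (syscall(SYS_fallocate, fd, 0, offset, len) == 0)
		return 0;
	if (errno == EOPNOTSUPP || errno == ENOSYS)
		return posix_fallocate(fd, offset, len);
	return errno;
}

int main(void)
{
	int fd, err;

	fd = open("testfile", O_CREAT|O_RDWR, 0666);
	if (fd < 0) {
		perror("Error opening file");
		return 1;
	}

	err = do_fallocate(fd, 0, 16384);
	if (err)
		fprintf(stderr, "Error allocating space: %s\n", strerror(err));

	close(fd);
	return 0;
}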
(c) Does ext4 preallocate in the background? A synchronous preallocate call isn't much use to virt-manager.
It does not, but what is the concern? It doesn't take much time:
(on ext4 this time):
[root@inode test]# time test_posix_fallocate testfile 0 10737418240
real    0m0.009s
user    0m0.000s
sys     0m0.009s
[root@inode test]# ls -lh testfile
-rw-r--r-- 1 root root 10G 2008-09-03 12:30 testfile
[root@inode test]# du -hc testfile
11G     testfile
11G     total
-Eric
Rich.
[1] http://en.wikipedia.org/wiki/Ext4#Persistent_pre-allocation
#define _LARGEFILE64_SOURCE
#define _GNU_SOURCE

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
	int fd;
	int ret;
	loff_t offset;
	loff_t len;
	char *fname;

	if (argc != 4) {
		fprintf(stderr, "Usage: %s <file> <offset> <length>\n", argv[0]);
		return 1;
	}

	fname = argv[1];
	offset = atoll(argv[2]);
	len = atoll(argv[3]);

	printf("file %s offset %llu (%s) length %llu (%s)\n",
	       fname, (unsigned long long)offset, argv[2],
	       (unsigned long long)len, argv[3]);

	fd = open(fname, O_CREAT|O_RDWR, 0666);
	if (fd < 0) {
		perror("Error opening file");
		return 1;
	}

	/*
	 * posix_fallocate() returns 0 on success or an error number
	 * directly; it does not set errno, so check the return value
	 * rather than using perror().
	 */
	ret = posix_fallocate64(fd, offset, len);
	if (ret)
		fprintf(stderr, "Error allocating space: %s\n", strerror(ret));

	close(fd);
	return 0;
}
Eric Sandeen wrote:
Richard W.M. Jones wrote:
On Wed, Sep 03, 2008 at 11:19:04AM -0500, Eric Sandeen wrote:
I'd appreciate any and all testing, benchmarking & feedback that people would be willing to do. Just getting more exposure in real-life scenarios would be great.
As with any filesystem, I wouldn't put your only copy of your most precious data on it - use good sense about backups etc - but ext4 has made good progress since F9 on both stability and performance, so have at it!
Persistent pre-allocation[1] is something that virt-manager could really use when it has to allocate multi-gigabyte images. A few questions about this feature though:
(a) Is it exposed as a syscall anywhere? I don't see it in the header files of my Rawhide system (2.6.27).
Hm, I probably need to get the fallocate.h header file included, if it's not, so that sys_fallocate can be used directly,
Ah, it should be there:
[root@inode ~]# rpm -ql kernel-headers | grep falloc
/usr/include/linux/falloc.h
-Eric
On Wed, Sep 03, 2008 at 12:36:55PM -0500, Eric Sandeen wrote:
Hm, I probably need to get the fallocate.h header file included, if it's not, so that sys_fallocate can be used directly - but it is also exposed via posix_fallocate in glibc. Tested here on xfs, just because xfs_bmap is a handy way to show that it actually works via glibc:
Uh, stupid me - I was looking for the wrong call. Rawhide _does_ have it.
(c) Does ext4 preallocate in the background? A synchronous preallocate call isn't much use to virt-manager.
It does not, but what is the concern? It doesn't take much time:
(on ext4 this time):
[root@inode test]# time test_posix_fallocate testfile 0 10737418240
real    0m0.009s
user    0m0.000s
sys     0m0.009s
OK ... I'm assuming though that the zeroes aren't all written to disk in this time, so that is exactly what I wanted.
Rich.
Richard W.M. Jones wrote:
On Wed, Sep 03, 2008 at 12:36:55PM -0500, Eric Sandeen wrote:
Hm, I probably need to get the fallocate.h header file included, if it's not, so that sys_fallocate can be used directly - but it is also exposed via posix_fallocate in glibc. Tested here on xfs, just because xfs_bmap is a handy way to show that it actually works via glibc:
Uh, stupid me - I was looking for the wrong call. Rawhide _does_ have it.
(c) Does ext4 preallocate in the background? A synchronous preallocate call isn't much use to virt-manager.
It does not, but what is the concern? It doesn't take much time:
(on ext4 this time):
[root@inode test]# time test_posix_fallocate testfile 0 10737418240
real    0m0.009s
user    0m0.000s
sys     0m0.009s
OK ... I'm assuming though that the zeroes aren't all written to disk in this time, so that is exactly what I wanted.
No, zeros are never written to disk if the ->fallocate call is supported. They are allocated, but flagged on-disk as unwritten/uninitialized, so any read (before a write, of course) will return zeros without the need for all that pesky writing business....
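If you want to convince yourself of that from userspace, here's a quick sketch - it assumes "testfile" was preallocated (and never written) as in the test program earlier in the thread:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Read the start of a preallocated-but-unwritten file and count
 * nonzero bytes; unwritten extents should read back as all zeros.
 */
int main(void)
{
	char buf[4096];
	ssize_t n, i;
	int nonzero = 0;
	int fd = open("testfile", O_RDONLY);

	if (fd < 0) {
		perror("Error opening file");
		return 1;
	}

	n = read(fd, buf, sizeof(buf));
	for (i = 0; i < n; i++)
		if (buf[i] != 0)
			nonzero++;

	printf("read %ld bytes, %d nonzero\n", (long)n, nonzero);
	close(fd);
	return 0;
}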
-Eric