Hello,
some of you may have read some wiki pages about the plans for the new init system [1]. As a first step in this direction [2], I packaged prcsys from Mandriva, patched initscripts with a very small patch, and uploaded the src.rpm to [3]. To enable parallel booting, just build and install both packages and edit /etc/sysconfig/init: set PARALLEL_STARTUP=yes and there we go.
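For reference, after building and installing, the change in /etc/sysconfig/init is this single line (the rest of the file stays as it is):

  PARALLEL_STARTUP=yes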
The next step would be to modify all initscripts in /etc/init.d to be LSB compliant [4]. This will speed up booting, because they can and will be started in parallel. You should file bugzillas, with a patch, against the component to which each initscript belongs (this has to be done anyway to make the initscripts LSB compliant over time). Especially the exit codes need to be fixed, which will make status queries a lot easier and more robust.
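For those who have not written one yet, here is a minimal sketch of what the LSB header and compliant status exit codes look like - "food" is a made-up service, and in a real script you would build on the status function from /etc/rc.d/init.d/functions:

  ### BEGIN INIT INFO
  # Provides: food
  # Required-Start: $local_fs $syslog
  # Required-Stop: $local_fs $syslog
  # Should-Start: $network
  # Default-Start: 3 4 5
  # Default-Stop: 0 1 6
  # Short-Description: example daemon
  ### END INIT INFO

  status() {
      if [ -f /var/run/food.pid ] && kill -0 "$(cat /var/run/food.pid)" 2>/dev/null; then
          return 0    # LSB: program is running
      elif [ -f /var/run/food.pid ]; then
          return 1    # LSB: program is dead, but pid file exists
      elif [ -f /var/lock/subsys/food ]; then
          return 2    # LSB: program is dead, but lock file exists
      else
          return 3    # LSB: program is not running
      fi
  }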
Early login [5] is also a next step towards a "fast boot" user experience.
Alternatives to SysVInit (like upstart/initng) can live in Fedora as well, but we are very conservative about changing a startup mechanism that has proven to work for a long time now. Unless a "real" killer feature is absolutely needed, we would like to keep backwards compatibility as long as possible.
I hope many of you try and test [3] and write patches to improve our service initscripts to be LSB compliant :-) Parallel booting is the reward.
Happy testing, Harald
[1] http://fedoraproject.org/wiki/FCNewInit [2] http://fedoraproject.org/wiki/FCNewInit/RC [3] http://people.redhat.com/harald/downloads/initscripts/parallel/ [4] http://fedoraproject.org/wiki/FCNewInit/Initscripts [5] http://fedoraproject.org/wiki/FCNewInit/xdm
On Friday 22 June 2007, Harald Hoyer wrote:
The next step would be to modify all initscripts in /etc/init.d to be LSB compliant [4].
Does this imply that these modified init scripts should also be installed/removed with the LSB tools instead of chkconfig? See https://bugzilla.redhat.com/245494
Ville Skyttä wrote :
On Friday 22 June 2007, Harald Hoyer wrote:
The next step would be to modify all initscripts in /etc/init.d to be LSB compliant [4].
Does this imply that these modified init scripts should also be installed/removed with the LSB tools instead of chkconfig? See https://bugzilla.redhat.com/245494
Ouch! As much as the new approach seems interesting, I always remove the redhat-lsb package from my minimal installs because of all the useless (in my case) X related packages it pulls in :-/
BTW, is this just some initial test? Or will Fedora officially move in this direction, with the change discussed and approved by FESCO?
Matthias
Ville Skyttä (ville.skytta@iki.fi) said:
On Friday 22 June 2007, Harald Hoyer wrote:
The next step would be to modify all initscripts in /etc/init.d to be LSB compliant [4].
Does this imply that these modified init scripts should also be installed/removed with the LSB tools instead of chkconfig?
No. You can be LSB compliant in exit status, dependencies, etc. without using the LSB wrappers (in fact, it's greatly preferred to *NOT* use the LSB wrappers.)
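A script can carry the traditional chkconfig header and the LSB block side by side, and still be registered with plain chkconfig - a sketch with a made-up service name and illustrative numbers:

  # chkconfig: 345 55 25
  # description: example daemon
  ### BEGIN INIT INFO
  # Provides: food
  # Required-Start: $local_fs $syslog
  # Required-Stop: $local_fs $syslog
  # Default-Start: 3 4 5
  # Default-Stop: 0 1 6
  ### END INIT INFO

Then install it as usual with 'chkconfig --add food'.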
My questions about this approach are:
- where are the benchmarks? What's the actual gain?
- how would this be useful for the case where facilities that are provided are determined at runtime (say, NetworkManager providing $network instead of /etc/init.d/network, or $remote_fs being provided by either rc.sysinit or /etc/init.d/netfs, depending on configuration)? Similarly, you may want a meta-dependency for 'authorization available', which would be at different times depending on whether or not you're using local passwords, KRB5, etc.
- does this work with dbus system activation?
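(For the runtime case, the only scheme I can even imagine is providers publishing facility flags as they come up - a purely hypothetical sketch, nothing of the sort exists in prcsys:

  # hypothetical: whatever actually brought networking up announces the facility
  touch /var/run/facility/network      # e.g. from a NetworkManager dispatcher hook
  # ...and the rc program blocks on the flag instead of on a named initscript
  until [ -e /var/run/facility/network ]; do sleep 0.2; done

That still leaves open who cleans the flags up and how timeouts would work.)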
Bill
Bill Nottingham (notting@redhat.com) said:
- where are the benchmarks? What's the actual gain?
Not seeing any other benchmarks, I decided to test this.
Fairly standard box - P4, ata_piix, 1G memory. Stock desktop install, fully up to date with updates and updates-testing as of this afternoon.
A 'normal' boot to gdm is about 56.9 seconds. I installed prcsys, and edited the startup scripts to add LSB dependencies as attached.
I then booted with prcsys and parallel init. The new boot time was... 56.3 and 56.6 seconds.
So, for all this work, we get a 0.6%-1.1% speedup. Oh, and we get 62 AVCs from SELinux in the process. What's the point of this again?
- how would this be useful for the case where facilities that are provided are determined at runtime (say, NetworkManager providing $network instead of /etc/init.d/network, or $remote_fs being provided by either rc.sysinit or /etc/init.d/netfs, depending on configuration). Similarly, you may want a meta-dependency for 'authorization available', which would be at different times depending on whether or not you're using local passwords, KRB5, etc.
- does this work with dbus system activation?
I also don't see how it handles either of these.
Bill
Bill Nottingham (notting@redhat.com) said:
Bill Nottingham (notting@redhat.com) said:
- where are the benchmarks? What's the actual gain?
Not seeing any other benchmarks, I decided to test this.
Fairly standard box - P4, ata_piix, 1G memory. Stock desktop install, fully up to date with updates and updates-testing as of this afternoon.
A 'normal' boot to gdm is about 56.9 seconds. I installed prcsys, and edited the startup scripts to add LSB dependencies as attached.
Oops, changes attached.
Bill
Bill Nottingham <notting <at> redhat.com> writes:
So, for all this work, we get a 0.6%-1.1% speedup. Oh, and we get 62 AVCs from SELinux in the process. What's the point of this again?
Yeah, not very useful it would seem...
What happened to this?
http://fedoraproject.org/wiki/FCNewInit/xdm
Is it being worked on? (Yeah, I know - I should probably not ask questions, but submit patches :-)
From a desktop user's point of view, the "magic" thing is the login screen. By the time they type in/pick the username and password, other stuff would have already started in the background, and the "boot time" would appear shorter. Isn't that the trick used on Windaz boxes as well?
-- Bojan
Speaking on the subject of parallel booting... has anyone tried init-ng? I had it installed in Fedora Core 6 on a P4 Hyper-Threading CPU and it cut my boot time in half. One drawback is that you have to set up most of your startup scripts manually, as many services are not set up right out of the box. Maybe this changed in Fedora 7?
On Wed, 2007-07-04 at 02:32 +0000, Bojan Smojver wrote:
Isn't that the trick used on Windaz boxes as well?
Possibly, but I think the biggest speedup by far is the disk caching/reorganization that both Windows and OS X do: http://www.kernelthread.com/mac/apme/optimizations/
At this rate though, we might all be using solid state drives before the kernel developers stop pointing at userspace as the problem and implement it for Linux.
"CW" == Colin Walters walters@redhat.com writes:
CW> At this rate though, we might all be using solid state drives CW> before the kernel developers stop pointing at userspace as the CW> problem and implement it for Linux.
Well, I have a machine running entirely from SSDs (two 32GB PATA Samsungs, /boot is mirrored and the rest is striped). It still doesn't boot all that quickly (stock F-7); if someone has tests for me to run I'll be happy to try them. (Actually I'm only getting 57MB/s reads from the striped volumes, so I think there's some tuning to do as that looks like the single-drive speed.)
- J<
On Fri, Jul 06, 2007 at 04:03:09PM -0500, Jason L Tibbitts III wrote:
Well, I have a machine running entirely from SSDs (two 32GB PATA Samsungs, /boot is mirrored and the rest is striped). It still
If you run the system entirely from a giant RAM disk (e.g. a Gigabyte i-RAM) you'll rapidly discover that the disk isn't the biggest bottleneck.
to run I'll be happy to try them. (Actually I'm only getting 57MB/s reads from the striped volumes, so I think there's some tuning to do as that looks like the single-drive speed.)
Disk performance is usually quantified in ops/second. If you read linear data you'll get 57MB/sec. Go to true random access and performance is way, way lower, and it hasn't improved much in the past ten years.
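To put rough numbers on it (back of the envelope, not a measurement): a 7200rpm disk needs on the order of 8-10ms per random access, so roughly 100-120 ops/sec. At 4KB a read that is under 0.5MB/sec, against 57MB/sec linear - a factor of well over a hundred.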
Alan
"AC" == Alan Cox alan@redhat.com writes:
AC> Disk performance is usually quantified in ops/second. If you read AC> linear data you'll get 57MB/sec. Go to true random access and AC> performance is way way lower and hasn't improved much in the past AC> ten years.
Well, these aren't disks (essentially zero seek time), so I'd expect random access read performance to be extremely fast. Random writes, on the other hand, should be terrible. I didn't have a lot of time to benchmark this before heading home so I just ran bonnie++ but I left off some options and the second line of results was entirely filled with plusses.
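For the record, a fuller run would look something like the following - sizes are illustrative, and -s should be at least twice RAM so the page cache can't hide the drives:

  bonnie++ -d /mnt/ssd/bench -s 4096 -n 128 -u nobody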
But anyway, I was merely responding to the comment about SSDs rendering this discussion moot. Obviously disk access speed is a factor in startup time, but I don't think it's anything close to the dominating factor.
- J<
On Fri, Jul 06, 2007 at 08:26:07PM -0500, Jason L Tibbitts III wrote:
But anyway, I was merely responding to the comment about SSDs rendering this discussion moot. Obviously disk access speed is a factor in startup time, but I don't think it's anything close to the dominating factor.
From testing with an i-RAM I would concur with this statement. There are cases where it does appear to be significant, like big software builds, but booting is a different kind of load.
On Fri, Jul 06, 2007 at 04:55:47PM -0400, Colin Walters wrote:
Possibly, but I think the biggest speedup by far is the disk caching/reorganization that both Windows and OS X do: http://www.kernelthread.com/mac/apme/optimizations/
At this rate though, we might all be using solid state drives before the kernel developers stop pointing at userspace as the problem and implement it for Linux.
We already lay stuff out very carefully and precache. Unfortunately most of the mess *is* userspace, and some of the userspace authors are in complete denial. Just profile the number of file opens of different files done in a gnome startup, and when you've finished laughing you can weep.
Years ago I sent the gnome team a library that could load and linearly map the entire theme in about 3 syscalls, coming out nicely on disk. They never used it.
That isn't to say the kernel is perfect, and there is a ton of optimising work still going on, different scheduling algorithms and the like, but most of the slowness is from user space - some from tools, some from combinations of tools and kernel (e.g. linker and paging patterns), and a lot of it from sheer stupid clueless design of applications and especially of GUI libraries.
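If you want to see it for yourself, something like this will do (the paths and counts will vary):

  # log every open() during a session startup, then count opens per file
  strace -f -e trace=open -o /tmp/gnome-opens.log gnome-session
  awk -F'"' '/open\(/ { print $2 }' /tmp/gnome-opens.log | sort | uniq -c | sort -rn | head -20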
Alan
On 7/6/07, Alan Cox alan@redhat.com wrote:
On Fri, Jul 06, 2007 at 04:55:47PM -0400, Colin Walters wrote:
Possibly, but I think the biggest speedup by far is the disk caching/reorganization that both Windows and OS X do: http://www.kernelthread.com/mac/apme/optimizations/
At this rate though, we might all be using solid state drives before the kernel developers stop pointing at userspace as the problem and implement it for Linux.
We already lay stuff out very carefully and precache. Unfortunately most of the mess *is* userspace, and some of the userspace authors are in complete denial. Just profile the number of file opens of different files done in a gnome startup, and when you've finished laughing you can weep.
Years ago I sent the gnome team a library that could load and linearly map the entire theme in about 3 syscalls, coming out nicely on disk. They never used it.
That isn't to say the kernel is perfect, and there is a ton of optimising work still going on, different scheduling algorithms and the like, but most of the slowness is from user space - some from tools, some from combinations of tools and kernel (e.g. linker and paging patterns), and a lot of it from sheer stupid clueless design of applications and especially of GUI libraries.
Speaking from my many days as a performance analyst, Pfeifer's first rule of performance says "The only good I/O is one you don't do".
I'll second Alan here. The OS can mitigate the effects of I/O but userspace has the onus to be reasonable.
darrell
On Fri, 2007-07-06 at 17:31 -0400, Alan Cox wrote:
On Fri, Jul 06, 2007 at 04:55:47PM -0400, Colin Walters wrote:
Possibly, but I think the biggest speedup by far is the disk caching/reorganization that both Windows and OS X do: http://www.kernelthread.com/mac/apme/optimizations/
At this rate though, we might all be using solid state drives before the kernel developers stop pointing at userspace as the problem and implement it for Linux.
We already lay stuff out very carefully and precache.
What precaching?
Ok, I just googled and found a really good thread: http://kerneltrap.org/node/2157
It's unclear to me though how much of this is actually running now.
The MacOS X page I linked to claimed that when they disabled the filesystem boot cache, it almost doubled boot time (in other words, the cache halved boot time). That says to me that either:
1) Our precaching isn't as good as their implementation
2) MacOS X is fundamentally different
I think it's more likely to be 1. I could be wrong though.
Years ago I sent the gnome team a library that could load and linearly map the entire theme in about 3 syscalls, coming out nicely on disk. They never used it.
I believe the icon cache is currently mmap'd: /usr/share/icons/hicolor/icon-theme.cache
Whether it's your actual code or not I don't know.
Unfortunately most of the mess *is* userspace
Right. That doesn't change the fact that some kernel work seems very likely to speed things up quite a bit.
and some of the userspace authors are in complete denial. Just profile the number of file opens of different files done in a gnome startup and when you've finished laughing you can weep.
Work continues on improving userspace: http://primates.ximian.com/%7Efederico/news-2007-06.html#26
On Fri, Jul 06, 2007 at 05:56:35PM -0400, Colin Walters wrote:
Ok, I just googled and found a really good thread: http://kerneltrap.org/node/2157
It's unclear to me though how much of this is actually running now.
Quite a bit in terms of the disk layer. We readahead and writebehind, and we preallocate data so that trickling writes don't end up smeared around the disk.
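The block-layer readahead window is visible and tunable per device, for example:

  /sbin/blockdev --getra /dev/sda         # current window, in 512-byte sectors
  /sbin/blockdev --setra 4096 /dev/sda    # widen it (value illustrative)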
- Our precaching isn't as good as their implementation
- MacOS X is fundamentally different
I think it's more likely to be 1. I could be wrong though.
Try running FC7 off a Gigabyte i-RAM (you'll need the approved RAM and a recent firmware for it to be reliable). You can then separate the disk and non-disk load.
I believe the icon cache is currently mmap'd: /usr/share/icons/hicolor/icon-theme.cache Whether it's your actual code or not I don't know.
Hooray, and I hope mapped shared across all users.
Unfortunately most of the mess *is* userspace
Right. That doesn't change the fact that some kernel work seems very likely to speed things up quite a bit.
Yes. There are things we can do such as better predicting what I/O is coming and what data not to write yet (and when to defer writing for reading). There are things we can't do like make the old gconf one key per file implementation not suck.
complete denial. Just profile the number of file opens of different files done in a gnome startup and when you've finished laughing you can weep.
Work continues on improving userspace: http://primates.ximian.com/%7Efederico/news-2007-06.html#26
Which is good.
One area that probably could do with more work (although openoffice has forced many improvements ;)) is the linker and ELF loader side: from the user space point of view for performance, from the kernel point of view to reduce page fault counts, and between the two of them to load data in a sane order when doing demand paging (actually, for large systems, I question these days how useful demand paging is for most apps).
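The dynamic loader will even tell you what it costs, which is a good place to start measuring:

  LD_DEBUG=statistics /bin/true    # glibc prints relocation counts and load times to stderr
  LD_DEBUG=help /bin/true          # lists the other debug modes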
On Fri, 2007-07-06 at 18:17 -0400, Alan Cox wrote:
On Fri, Jul 06, 2007 at 05:56:35PM -0400, Colin Walters wrote:
Ok, I just googled and found a really good thread: http://kerneltrap.org/node/2157
It's unclear to me though how much of this is actually running now.
Quite a bit in terms of the disk layer. We readahead and writebehind,
As I understand the readahead, it is simply requesting further blocks inside a single file before the read() requests come in for them.
The Hot File Clustering system on that page actually moves the files into a special area of the disk and ensures they're contiguous (thus avoiding seeks, which plain readahead doesn't really solve).
Both OS X and Windows include systems which watch the startup and continually optimize. Looking at Fedora, we have the "readahead" package, but as far as I can tell it's static in the sense that we ship some definitions with the package; it's never rerun.
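(If I remember the paths right, the shipped lists are /etc/readahead.d/default.early and /etc/readahead.d/default.later; nothing on an installed system ever regenerates them.)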
Has anyone run a current version of http://bootchart.org/ on Fedora?
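From memory - check the bootchart package docs for the exact commands - the procedure is:

  # 1) boot once with the collector as init, appended to the kernel line:
  #      init=/sbin/bootchartd
  # 2) after login, render the collected data:
  bootchart /var/log/bootchart.tgz    # should write bootchart.png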
Colin Walters wrote:
On Fri, 2007-07-06 at 18:17 -0400, Alan Cox wrote:
On Fri, Jul 06, 2007 at 05:56:35PM -0400, Colin Walters wrote:
Ok, I just googled and found a really good thread: http://kerneltrap.org/node/2157
It's unclear to me though how much of this is actually running now.
Quite a bit in terms of the disk layer. We readahead and writebehind,
As I understand the readahead, it is simply requesting further blocks inside a single file before the read() requests come in for them.
The Hot File Clustering system on that page actually moves the files into a special area of the disk and ensures they're contiguous (thus avoiding seeks, which plain readahead doesn't really solve).
Both OS X and Windows include systems which watch the startup and continually optimize. Looking at Fedora, we have the "readahead" package, but as far as I can tell it's static in the sense that we ship some definitions with the package; it's never rerun.
You can boot with "init=/sbin/readahead-collector", but for me this ended up with readahead loading too many files, which caused a high disk load during boot and resulted in a longer boot time.
Alan Cox (alan@redhat.com) said:
That isn't to say the kernel is perfect, and there is a ton of optimising work still going on, different scheduling algorithms and the like, but most of the slowness is from user space - some from tools, some from combinations of tools and kernel (e.g. linker and paging patterns), and a lot of it from sheer stupid clueless design of applications and especially of GUI libraries.
Just as a data point: in my 57-second bootup timings, a full 10.5 seconds of that is before init - so there are plenty of places to improve all around.
Bill
Bill Nottingham schrieb:
Bill Nottingham (notting@redhat.com) said:
- where are the benchmarks? What's the actual gain?
Not seeing any other benchmarks, I decided to test this.
Fairly standard box - P4, ata_piix, 1G memory. Stock desktop install, fully up to date with updates and updates-testing as of this afternoon.
A 'normal' boot to gdm is about 56.9 seconds. I installed prcsys, and edited the startup scripts to add LSB dependencies as attached.
I then booted with prcsys and parallel init. The new boot time was... 56.3 and 56.6 seconds.
With:
S12syslog S13ip6tables S13iptables S14network S25netfs S26auditd S26messagebus S27setroubleshoot S55sshd S98haldaemon S99local
/etc/rc.d/rc takes: 8.5s in normal mode, 6s in parallel startup
So, for all this work, we get a 0.6%-1.1% speedup.
so for me that's a 140% speedup :)
Oh, and we get 62 AVCs from SELinux in the process. What's the point of this again?
for that, I now have a fix.
- how would this be useful for the case where facilities that are provided are determined at runtime (say, NetworkManager providing $network instead of /etc/init.d/network, or $remote_fs being provided by either rc.sysinit or /etc/init.d/netfs, depending on configuration).
yep, tbd
Similarly, you may want a meta-dependency for 'authorization available', which would be at different times depending on whether or not you're using local passwords, KRB5, etc.
yep, tbd
- does this work with dbus system activation?
yep, tbd
I also don't see how it handles either of these.
Bill
Which system does fulfill all of these requirements yet?
Harald
On Fri, Jul 06, 2007 at 06:18:04PM +0200, Harald Hoyer wrote:
With:
S12syslog S13ip6tables S13iptables S14network S25netfs S26auditd S26messagebus S27setroubleshoot S55sshd S98haldaemon S99local
/etc/rc.d/rc takes: 8.5s in normal mode, 6s in parallel startup
This seems suspicious to me. A lot of the above isn't really parallelisable (or shouldn't be).
network -> ip[6]tables -> sshd|netfs should be serialised. Also, afaik haldaemon can't start until messagebus has run, and setroubleshoot is also serialising on auditd iirc.
So what's actually causing that 2s difference? Is it possible that you just got a dhcp reply quicker the second time?
Dave
Harald Hoyer (harald@redhat.com) said:
With:
S12syslog S13ip6tables S13iptables S14network S25netfs S26auditd S26messagebus S27setroubleshoot S55sshd S98haldaemon S99local
/etc/rc.d/rc takes: 8.5s in normal mode, 6s in parallel startup
Did you add all the proper dependencies?
So, for all this work, we get a 0.6%-1.1% speedup.
so for me that's a 140% speedup :)
2.5 seconds is 140%? I can try a more minimal setup, but I think we should be concentrating on the normal case.
Oh, and we get 62 AVCs from SELinux in the process. What's the point of this again?
for that, I now have a fix.
Got code? I'm interested in how much time is lost in the audit/setroubleshoot logger logging the AVCs.
- how would this be useful for the case where facilities that are
provided are determined at runtime (say, NetworkManager providing $network instead of /etc/init.d/network, or $remote_fs being provided by either rc.sysinit or /etc/init.d/netfs, depending on configuration).
yep, tbd
Similarly, you may want a meta-dependency for 'authorization available', which would be at different times depending on whether or not you're using local passwords, KRB5, etc.
yep, tbd
- does this work with dbus system activation?
yep, tbd
I also don't see how it handles either of these.
Which system does fulfill all of these requirements yet?
None, yet. But from looking at the prcsys architecture I'm not sure *how* it would do any of those - it seems to be designed in such a way as to make that hard.
Bill
Bill Nottingham schrieb:
Got code? I'm interested in how much time is lost in the audit/setroubleshoot logger logging the AVCs.
Should be in rawhide... # yum install prcsys
Btw, I am not interested in squeezing the last seconds out of the system. My primary interest is in getting dependencies into the initscripts, and adding the static dependencies is the first step.
Harald Hoyer (harald@redhat.com) said:
Should be in rawhide... # yum install prcsys
Will check.
Btw, I am not interested in squeezing the last seconds out of the system. My primary interest is in getting dependencies into the initscripts, and adding the static dependencies is the first step.
That's fine... I'm just concerned about using prcsys if it doesn't actually show much benefit.
Bill
On Monday, July 9 2007, Jaroslaw Gorny wrote:
On Friday, 06 July 2007 18:18:04, Harald Hoyer wrote:
(...) With:
S12syslog (...) S99local
/etc/rc.d/rc takes: 8.5s in normal mode, 6s in parallel startup
so for me that's a 140% speedup :)
Hmm..... ((8.5-6)/8.5)*100 = 29.4%
You calculated the reduction of boot time.
I would calculate the speedup this way:
Bootspeed = 1/(Time to boot)
normal speed = 1/8.5, parallel startup speed = 1/6
Speedup in percent: (1/6) / (1/8.5) * 100 = 141.67 %
Regards, Till
Till Maas wrote:
On Monday, July 9 2007, Jaroslaw Gorny wrote:
On Friday, 06 July 2007 18:18:04, Harald Hoyer wrote:
(...) With:
S12syslog (...) S99local
/etc/rc.d/rc takes: 8.5s in normal mode, 6s in parallel startup
so for me that's a 140% speedup :)
Hmm..... ((8.5-6)/8.5)*100 = 29.4%
You calculated the reduction of boot time.
I would calculate the speedup this way:
Bootspeed = 1/(Time to boot)
normal speed = 1/8.5, parallel startup speed = 1/6
Speedup in percent: (1/6) / (1/8.5) * 100 = 141.67 %
The speed is 141.67 % of the old speed, which gives a 41.67 % speedup. The old speed is the reference, aka 1, aka 100%.
A 140% speedup would be more than twice as fast: 8.5s / 2.4 is about 3.5s, not 6s.
Florian
On Fri, 2007-06-22 at 17:18 +0200, Harald Hoyer wrote:
Hello,
some of you may have read some wiki pages about the plans for the new init system [1]. As a first step in this direction [2], I packaged prcsys from Mandriva, patched initscripts with a very small patch, and uploaded the src.rpm to [3]. To enable parallel booting, just build and install both packages and edit /etc/sysconfig/init: set PARALLEL_STARTUP=yes and there we go.
Harald, I had a quick look over the prcsys code; here are some initial impressions:
- No test suite
- Calls system() without checking for error
- Doesn't check for $ in facility names
- Doesn't differentiate between Required and Should, i.e. Should is treated as a hard requirement too, which seems to violate the LSB intent
- Checks for mandriva flags, X-Mandriva-Interactive, X-Mandriva-Compat-Mode
- Uses the same fixed length of 256 for facility names and paths
- There are a bunch of other hardcoded string lengths in there, like char file[255]; this should probably be cleaned up
- I regularly see "Cannot create temporary file" in --test output
- Some init script output seems to go into the log file (might be related to the previous point)
- It calls exit(1) a bunch - it probably shouldn't
- Typo in message: "Unknow mode"
- This looks sneaky, with buflen being a function parameter: char temp_dep[2][buflen];
All of this is fixable of course; this is, after all, just 1400 lines of code. I guess the question is whether Mandriva is willing to develop this as a cross-distro project. If yes, where is the bug tracker to file bugs and patches about these problems? If not, I don't see us winning much by reusing 1400 lines of mediocre C code.
Anyway, here are what I think are the top priorities if you want to turn this into an rc implementation suitable for Fedora and RHEL:
- Add a test suite. I would expect at least tests for the parsing of the LSB headers, for the construction of the dependency graphs, for correct serialization of the dependencies, for the handling of missing soft dependencies, and tests for compat mode (a rough sketch of such a test follows below)
- Implement Should
- Do a thorough audit of the code and make it handle errors carefully and systematically
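To make the first of those concrete, here is the sort of minimal black-box test I have in mind. It assumes prcsys's --test mode can be pointed at a directory of scripts and prints the computed start order one script per line; that invocation is a guess, and the real flags may differ:

  #!/bin/sh
  # Toy test: svcB requires svcA, so svcA must come first in the start order.
  dir=$(mktemp -d)
  printf '#!/bin/sh\n### BEGIN INIT INFO\n# Provides: svcA\n### END INIT INFO\n' > "$dir/svcA"
  printf '#!/bin/sh\n### BEGIN INIT INFO\n# Provides: svcB\n# Required-Start: svcA\n### END INIT INFO\n' > "$dir/svcB"
  chmod +x "$dir/svcA" "$dir/svcB"
  order=$(prcsys --test "$dir")    # hypothetical invocation
  echo "$order" | awk '/svcA/{a=NR} /svcB/{b=NR} END{exit !(a && b && a<b)}' && echo PASS || echo FAIL
  rm -rf "$dir"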
Matthias
Matthias Clasen wrote:
... All of this is fixable of course; this is, after all, just 1400 lines of code. I guess the question is whether Mandriva is willing to develop this as a cross-distro project. If yes, where is the bug tracker to file bugs and patches about these problems? If not, I don't see us winning much by reusing 1400 lines of mediocre C code.