On Thu, Oct 31, 2013 at 11:26:25AM -0400, Josh Boyer wrote:
Case 1 is easy -- no kernel, no problem.
Or, perhaps more accurately, that case is covered either by someone else's kernel (which is SEP, i.e. somebody else's problem) or by the standard kernel found in Workstation and Server.
Yes.
Case 2 is everything needed to boot and get network, console output, and normal storage under KVM, Xen (especially as used in EC2), VirtualBox, and VMware. (With priority to the first two.) This *could* be split further, making a distinction between cloud providers, but there are diminishing returns for the effort.
I'm going to be blunt. VirtualBox and VMware aren't really focal points for the kernel team, for exact opposite reasons. VirtualBox
Yeah, that's exactly why I put "with priority to the first two". I'm totally happy with making that statement stronger / more clear.
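For concreteness, the "boot + network + storage" set for Case 2 is basically the virtio and Xen frontend drivers. Here's a quick sketch of how you might check for them from inside a running guest; the module names are my understanding of the usual set rather than an official list, and anything built into the kernel (=y) won't show up in /proc/modules, so take it as illustrative only:

#!/usr/bin/env python3
# Rough check of the "Case 2" driver set from inside a running guest.
# Module names are the usual virtio / Xen frontend drivers (an illustration,
# not an official list). Drivers built into the kernel image (=y) won't
# appear in /proc/modules, so "missing" here means "not loaded as a module".

CASE2_MODULES = {
    "kvm/qemu": ["virtio_pci", "virtio_net", "virtio_blk", "virtio_console"],
    "xen/ec2":  ["xen_netfront", "xen_blkfront"],
}

def loaded_modules():
    """Return the names of the currently loaded kernel modules."""
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f}

if __name__ == "__main__":
    loaded = loaded_modules()
    for platform, modules in CASE2_MODULES.items():
        missing = [m for m in modules if m not in loaded]
        print("%-9s missing: %s" % (platform, ", ".join(missing) or "nothing"))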
Case 3 covers things like PCI passthrough or running a remote desktop where you want virtual sound card support. For this, I think it's perfectly fine to say "add the extra drivers pack".
By which you mean admins manually (or via some tool like puppet/chef/ansible) install the subpackage, correct? Not "we create a special cloud image with the driver subpackage already included".
Probably manually, but I think we're still working that out as part of our overall direction.
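Assuming the manual / config-management route, the step itself is just installing the subpackage. Something like the sketch below, where "kernel-modules-extra" is only a stand-in for whatever the driver subpackage ends up being called:

#!/usr/bin/env python3
# Sketch of the "add the extra drivers pack" step for Case 3, as an admin
# or a config-management tool would do it. "kernel-modules-extra" is a
# stand-in name; the real package split hasn't been decided yet.
import subprocess

DRIVER_PACKAGE = "kernel-modules-extra"  # hypothetical subpackage name

def install_extra_drivers():
    # Same effect as an admin running: yum -y install kernel-modules-extra
    subprocess.check_call(["yum", "-y", "install", DRIVER_PACKAGE])

if __name__ == "__main__":
    install_extra_drivers()

A puppet/chef/ansible rule would amount to the same package transaction, just expressed in that tool's own syntax.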
Case 4 could use a bit more discussion. *Mostly*, I think we can either say that this is the same as case 3 or that we will just use whatever Fedora Server does in this case (if different). However, I know oVirt Node (and probably also OpenStack node) is concerned with image size on bare metal. This would be a good time for anyone interested in that as a focus to chime in.
OK. I literally have no idea how this is different from a minimal server install, so understanding that would be good.
One difference is provisioning via an image vs. anaconda + kickstart.
Feel free to CC me. I'm subscribed to the cloud list now, but I can't say I'll have time to fully pay attention to it. Please drag me (or someone else on the kernel team) into specific things if you think you need to.
Thanks. We will take you up on that.
Main drivers are network traffic, provisioning speed, and density. With probably a smidgen of marketing thrown in.
So the thinking is smaller size means less to transfer, faster to boot, cheaper to store? I can see the first one. The second one is mostly either going to be in the noise range or just false. The third one I don't buy.
Less to transfer is the one that's probably always going to be a meaningful issue, at least in our lifetimes.
Size affects provisioning time because, depending on the IaaS software, the image is often copied in its entirety to a new file, possibly on a different filesystem or even a different machine.
This could be considered a subset of 'network traffic' in some ways, but I'm separating it out because the first includes things like "we don't want to have a Fedora image available by default in $iaasdistro because it's too big to be part of a normal install".
I agree that density is probably the least important because in situations where that's a big concern there are other ways to address it (like deduplication).
Anyone else feel free to chime in if I'm off-base here. :)
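To put rough numbers behind the size argument, here's an illustrative calculation; the image sizes and copy rate below are assumptions picked for the arithmetic, not measurements of any real Fedora image:

#!/usr/bin/env python3
# Back-of-the-envelope numbers for the "less to transfer" argument.
# Image sizes and the copy rate are illustrative assumptions, not
# measurements of an actual Fedora image.

COPY_RATE_MB_PER_SEC = 50                              # assumed effective copy/transfer rate
IMAGE_SIZES_MB = {"current-ish": 700, "trimmed": 250}  # hypothetical sizes

for name, size_mb in IMAGE_SIZES_MB.items():
    seconds = size_mb / float(COPY_RATE_MB_PER_SEC)
    print("%-11s %4d MB -> ~%.0f s at %d MB/s" % (name, size_mb, seconds, COPY_RATE_MB_PER_SEC))

At those (made-up) numbers, trimming the image saves on the order of ten seconds per copy, which mostly matters once you multiply it across many provisioning operations or a slow link.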
Now that's all basically image (as in file) size. What about the runtime overhead of the kernel? The server group is likely going to want things like NR_CPUS to be larger than it is today, which incurs some runtime memory overhead. It isn't huge, but it would be good to know how much memory a commonly provisioned guest gets in the cloud environments you're targeting.
Good question and I don't have a ready answer. Will keep this in mind as we go through the PRD process.
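For a rough sense of scale on the NR_CPUS question: the x86 Kconfig help has historically cited something like 8 KB of kernel memory per supported CPU, so taking that figure purely as an assumption:

#!/usr/bin/env python3
# Order-of-magnitude estimate of the NR_CPUS memory cost. PER_CPU_KB is an
# assumption (the x86 Kconfig help has historically cited roughly 8 KB per
# supported CPU); treat the output as a rough estimate only.

PER_CPU_KB = 8  # assumed static cost per supported CPU

for nr_cpus in (128, 1024, 4096):
    print("NR_CPUS=%-5d -> ~%.1f MB" % (nr_cpus, nr_cpus * PER_CPU_KB / 1024.0))

So even a very large NR_CPUS looks like tens of megabytes rather than gigabytes, which squares with "it isn't huge", but it is real memory on a small cloud guest.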