On Tue, Aug 22, 2017 at 2:40 PM Michal Novotny <clime@redhat.com> wrote:
I would like to publicly note that I had a completely different idea about this project when I first encountered it at the last Flock.

My idea was that the project would target runtime rather than build time and would try to provide a set of packages able to,
e.g., spawn and run an httpd stack with almost a single command.

So we would basically have, e.g., this functionality: https://hub.docker.com/r/p0bailey/docker-flask/ but integrated directly into
the distribution.

This would make lots of sense to me because it would work similarly to how upstream projects eventually get
into Fedora (and RHEL). They need to pass certain criteria to be accepted, and then
there is a group of people that maintains them in the given operating system.

Similarly, for modularity (as I saw it), there would be, e.g., a docker image that would slowly make
its way into becoming a full-fledged system component that can be run in a container or even natively on the host.

The concrete implementation would probably involve ansible, where the modules (being standard rpms)
would together make up a big ansible playbook archive (e.g. placed under /etc/playbook) that would include
default templates describing common sysadmin use cases (like importing a database and starting a web application),
which could be overridden by a custom user template placed at the same subpath under e.g. /etc/custom-playbook.
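
To make this a bit more concrete, here is a minimal sketch of how the layout and a default template could look. The paths, file names and tasks are purely illustrative assumptions on my side, nothing that exists today:

  /etc/playbook/httpd/site.yml          # default template shipped by the module rpm
  /etc/custom-playbook/httpd/site.yml   # user override at the same subpath

  # /etc/playbook/httpd/site.yml -- hypothetical default "run the httpd stack" template
  - name: Run the default httpd stack
    hosts: localhost
    become: true
    tasks:
      - name: Install httpd
        package:
          name: httpd
          state: present
      - name: Start and enable httpd
        service:
          name: httpd
          state: started
          enabled: true

The tooling would simply prefer the file under /etc/custom-playbook when it exists and fall back to the default template otherwise.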

This would be a killer tool for sysadmins, who could script their common use cases in a simple manner and completely
avoid the nerve-wracking direct config manipulation on a running system that (almost) every one of us is familiar with. It would actually become
a common sysadmin (and power-user) language on RHEL-based distros, and I think you can imagine how cool that could be.

This approach would also be very flexible, because you could define the resources you want to launch individual module
components on by altering a default resource setup, switching from spawning everything natively, to spawning everything in local docker
containers, to spawning everything in docker containers in an OpenShift staging instance, to spawning everything in an OpenShift production instance.
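
Just to illustrate (again only a sketch; the variable names and file layout are made up), the default resource setup could be a vars file that the top-level playbook dispatches on, overridable under /etc/custom-playbook like everything else:

  # /etc/playbook/defaults/resources.yml
  deploy_target: native   # or: docker, openshift-staging, openshift-production

  # top-level play
  - name: Spawn the stack on the selected resource target
    hosts: localhost
    vars_files:
      - /etc/playbook/defaults/resources.yml
    tasks:
      - name: Include the tasks for the chosen target
        include_tasks: "targets/{{ deploy_target }}.yml"

Switching the whole deployment from native to containers would then be a one-line change in the overriding vars file.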

This also makes lots of sense from the human-effort point of view. We could apply our sysadmin expertise in certain
areas very well and pass it directly to our users through code we have written. Packagers would be config masters, and they
would maintain default configurations to provide users with the most effortless way to make something run (e.g. a dovecot + postfix combo).

We could then also maintain custom playbooks for specific customers, meaning we would be able to create a system cut directly to
their needs and have the means to maintain it at large scale. That would basically mean having an rpm somewhere, named after
the customer, e.g. mailserver-intel, with a set of customer-specific mailserver ansible playbooks.

Over time I came to understand that the focus of modularity is different: it is rather about having tight control over
the build process and syncing lifecycles across different system components.

I am not saying this is wrong or anything. It probably just means I was misled in my expectations, but I would still like to share this
particular point of view, as it makes lots of sense to me and it is something that excites me.

Sorry for not being technical or anything. You can consider this to be a short story that doesn't deserve much attention.
clime

...Oh yeah, I started a very basic PoC at https://pagure.io/lamp and needed to postpone it a bit. It might be worth continuing.

So, in short, I think this is a cool idea. Also, I think this is the kind of thing that modularity could enable.

Way back, we had in the game plan allowing super sophisticated "post-install scripts" as part of the install profile concept. Basically, we had the idea that you could ship an ansible playbook as part of the install profile. However, it is just on the "someday" list right now.

You could, very easily, have streams or install profiles per target audience. You can (in theory) do this now, but we don't want the branching and the like to get out of control until we actually understand how this stuff is all going to come together. So you could do it in COPR, once module builds come online, but perhaps not in dist-git.

We (Fedora, not just modularity) have also been looking at/implementing "system containers" [1], which are kind of like a native-feeling containerized application. Containers are already being built from modules (sorta; the infra still needs to be finished), providing more flexibility by allowing simple stream changes without modifying the dockerfiles.

So, I think your idea is exactly what we want the modularity flexibility for. While the core concept of modularity is being productionized, the whole *point* is that it lets us more easily try new styles of application and OS distribution. I would say that, as the modularity work becomes completely mainstream, you could propose adding "fancy install profiles" and the other things you need to make your concept real.

<shameless self-promotion>
I have a lot of ideas about what we *could* do next with the flexibility provided by modularity and how we could be even more flexible. Come to my talk [2] at Flock to hear about them (or watch the recording ;) ).
</>

Langdon

[1]: https://fedoraproject.org/wiki/Container:System_Container 
[2]: https://flock2017.sched.com/event/Bm91/modularity-the-future-building-and-packaging
 