aeolus configure

Summary: A high level parameterized utility to install and configure various aeolus components

Goals:
- Provide a high level interface through which to allow system administrators and developers to install and configure various aeolus components in a multi-machine environment, including:
  - deltacloud core
  - conductor
  - image warehouse (iwhd)
  - image builder
  - database (postgres)
  [MO: - pulp]
  [MO: - kalpana (?)]
- Provide a high level interface through which to initialize and import cloud data for use in aeolus, including:
  - cloud providers
  - cloud provider accounts
  - existing and/or to-be-built templates, images, and deployments
  - instances
  [MO: be careful how much gets included here. We need to balance speed of running vs functionality very carefully. In a lot of cases, I'd punt things to make startup faster. (image bin data in particular)]
- Provide simple command line utilities wrapping this interface to allow system administrators to select which components to install, where to install them, and what data is configured to be present on installation
  [MO: Look at external nodes here. Might be better to replace the high-level .pp files with .yml to give an easier way to specify what a machine looks like or describe a series of machines. See the node/nodes stuff from: http://git.engineering.redhat.com/?p=users/olevy/puppet-imagebuilder/.git;a=tree;h=master;hb=master]
- Take care of all of the low level details of setting up the various aeolus components, including but not limited to setting up the platform needed to run aeolus, configuring the communication mechanisms between components, verifying that security and other policies are in place to allow component operation, and printing any error or warning messages and gracefully terminating if anything fails
- Provide a _single_ simplified command to install and completely configure aeolus from scratch, including but not limited to:
  - setting up the correct repositories
  - installing the aeolus components
  - configuring and running the necessary services
  - prompting the user for cloud account credentials, instance details, and other seed data
  [MO: This one is debatable. I'm ok with specifying stuff in a conf file, or parameterizing calls to a REST api, but I am not so sure we want to put much effort in beyond that.]
- Make all this functionality available via Puppet so as to accurately represent aeolus dependencies on a multi-machine aeolus install and to be able to be pulled into existing puppet deployments

Use Cases:
- Bob the developer has checked out all the aeolus components from the source git, has built and installed all the rpms, and simply wants a way to set up a default aeolus configuration. He runs aeolus-configure, which makes sure the correct components are in place and sets up the default config, and he's good to go.
- Sally the sys admin has existing cobbler and puppetmaster servers running and wants to provision some additional machines to run the image builder and warehouse. She imports the aeolus module into her puppet recipe and uses it to setup/install/configure iwhd and imagefactory. She then uses it to automatically create and deploy a few templates/images to be available upon installation right within her aeolus configuration recipe.
  [MO: Not clear that we should be pre-seeding templates/images. We probably need clarity and definition around this.]
- Joe is a relatively new cloud user who has signed up for EC2 and Rackspace via their corresponding web sites.
  He wants to use the same tooling to deploy instances for both but wants to do so in the simplest fashion possible. He installs aeolus-configure via 'rpm -ivh http://' and runs aeolus-install. This prompts him for his cloud credentials and then proceeds to set up the yum repositories, install the packages, and configure all aeolus components locally, automatically setting up the specified providers and importing templates/images data. He can then log in to the conductor ui and, with one or two clicks, launch instances.
  [MO: We should bring this one up with ansmith/hewbrocca to see if we are really aiming for this long term or not. If so, that's fine and my earlier comments about prompting are probably non-issues.]
  [MO: Need clarity on whether aeolus-configure supports aeolus-conductor or vice versa. The answer to that changes the expectations of this use case quite a bit.]
- Janet has a few machines locally which she wants to use to run various aeolus components that are able to work with local existing security services and with an image builder and factory already set up in the cloud. She creates a puppet recipe on her local configuration server, pulling in aeolus-configure, and creates profiles for her various machines with whatever aeolus subcomponents are to be installed on them. Upon installation, the aeolus recipe uses the ip address of each machine that each component is installed on to configure the communication channels.
  [MO: Maybe a later use case?]
- Michael wants to create tooling around the aeolus api but is not sure of the exact environment his tooling will be deployed on or where in the world it will be deployed to. He uses aeolus-configure to install and configure in various environments, allowing him to parameterize package sources and installation, security configuration (or lack thereof), which seed data gets automatically created, the documentation that gets installed, etc.
  [MO: Not sure how this is a different use case.]

Current Design:
- The aeolus configure project ships with:
  - A few puppet modules:
    - aeolus configuration module defining classes and functions to set up the aeolus components and seed data
    - apache (httpd) config module
    - ntp config module
    - openssl / security config module
    - postgres config module
    - These can be pulled into a puppet recipe running anywhere (either locally via the puppet command or on a puppetmasterd server) to configure aeolus in any number of ways on any number of machines
  - aeolus configure/cleanup puppet manifests
    - These use the various modules to completely setup/remove aeolus on a local machine. They include the classes defining all the necessary aeolus subcomponents, and invoke the aeolus configuration methods to define providers, templates, and other data to be available immediately on startup
      [MO: My opinion is that cleanup should stop at removing data from the db, condor jobs, etc.]
      [MO: We need to consider what is acceptable to seed & what needs to be user defined per installation.]
    - These can be run locally via the 'puppet' cmd or via a puppetmaster to install aeolus on any given single machine
    - These can be used as the basis for other scripts to setup/configure aeolus in any number of environments, including those with a provisioned cluster, those in an environment requiring fewer aeolus subcomponents, those in environments with packaging/security/other restrictions, etc
  - binaries / scripts
    - These are simply wrappers around the 'puppet' command, setting up the correct module path and loading the aeolus configure/cleanup puppet manifests to provide a simple means to install aeolus locally
      [MO: clean restart of services script should be included in the rpm if we are saying that configure shouldn't restart services or is too heavyweight to be used in all cases.]
- The aeolus-configure rpm ships with all these components. It is currently pulled into aeolus-conductor as a runtime dependency with the other aeolus components, which has the drawback that it does not allow for the separate multi-machine installation of the aeolus components.

High Level Reqs:
- must provide a means by which to configure/cleanup all the aeolus components and dependencies on a single machine, including:
  - conductor, core, iwhd, image builder, condor
  - postgres, libvirt, qpid, mongodb
  [MO: This is currently default behavior.]
- must provide a means by which to configure/cleanup all the aforementioned components on multiple machines, ensuring communication and interoperability between them
  [MO: This should probably only be done in an environment that is set up to run via a puppetmaster. I'd consider it outside the scope of aeolus-configure itself to provide communication between multiple boxes, but we get it for free if we are using a puppetmaster setup.]
- must provide a means by which to initialize aeolus seed data [MO: ]
- must provide a parameterized interface through which to specify the seed data that is initialized
- must provide a means by which to install/remove aeolus components, specifying alternative sources to retrieve from (yum and git repositories, local fs)
- must provide a means by which to toggle various optional aeolus features:
  - package installation and removal, package sources
  - security features (for export restrictions)
  - logging levels / destinations
  - which components are configured (for the command line binaries)
- must make all functionality available via command line utilities and a puppetmaster server
- provide a fully functional test suite and complete documentation

Tasks:
- The aeolus modules are close to supporting a multi-machine install as they are. Some tweaking will be needed around component communication, and we may want to ship additional component-specific configure/cleanup manifests.
  [MO: The level of componentization is pretty reasonable currently. The bigger issue is figuring out a good way to specify what classes/parameters to include on a per-machine basis.]
- Flags should be added to the command line binaries to specify which components are to be configured/cleaned up, as well as to toggle security features and log levels (see the node definition sketch after this list)
  [MO: This can probably be handled much more elegantly by using external node classifications, allowing each node definition to specify which puppet classes belong on the node as well as parameterizing values on a per node basis. This will avoid making the scripts overly complicated.]
- Additional functions for seed data initialization need to be created, including those for templates, images, provider accounts, and instances
  [MO: I'm not as sold on this.
  Depending on the rev of imagefactory we use this can add a lot of time to the configuration. We (the product owners) need to define if the goal of configure is to provide a minimal working installation or if we expect to include the things mentioned above.]
- A separate command line utility should be built to prompt the user for aeolus configuration, including provider account credentials, template packages, hwp information, etc. The result of this should be a yaml file which can be loaded into the aeolus recipe to autocreate those entities (a possible shape for that file is sketched after this list).
  [MO: I think this largely describes a tool to create valid external node definitions in yml]
- Seed data creation needs to be made more robust; return codes and status should be parsed out of the conductor http response and analyzed to determine operation status
- ProviderType should be parameterized to allow yum / git providers, and the recipe should provide the means by which to install these repositories so that the components can be installed from them
  [MO: I believe you mean Package providers? This should also probably allow for gem installation/management at some point.]
- Greatly expand the aeolus configure test suite and documentation, and set up a built-in test harness for local automated e2e testing in a vm
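
MO's comments in Goals and Tasks suggest handling per-machine component selection with external node classifications, where each node's puppet classes and parameters are spelled out in a .yml definition rather than in command line flags. Purely as a rough sketch of that idea (the aeolus class and parameter names below are hypothetical, not the module's actual interface), a definition in Puppet's ENC output format for a machine running only the warehouse and builder pieces might look like:

# Hypothetical external node definition (Puppet ENC output format);
# the aeolus class and parameter names are illustrative only.
---
environment: production
classes:
  aeolus::iwhd: {}
  aeolus::image_factory:
    # hosts this builder should talk to (hypothetical parameters)
    conductor_host: conductor.example.com
    iwhd_host: warehouse.example.com
parameters:
  # top-scope values available to all classes on this node
  enable_security: true
  log_level: info

An external node classifier would emit a document like this for whichever hostname the puppetmaster asks about, which keeps component selection and per-node parameterization out of the wrapper scripts.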
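Similarly, the prompting utility described in the Tasks list is meant to produce a yaml file that the aeolus recipe can load to autocreate entities. As an illustration only of one possible shape for that file (these key names and sections are hypothetical, not an existing aeolus-configure format), it could capture provider accounts, hardware profile ("hwp") details, and template packages like so:

# Hypothetical seed data file produced by the prompting utility; key names
# and structure are illustrative, not an existing aeolus-configure format.
---
provider_accounts:
  - provider: ec2-us-east-1
    label: joes-ec2-account
    username: CHANGEME   # access key id (placeholder)
    password: CHANGEME   # secret access key (placeholder)
hardware_profiles:
  - name: small-x86_64
    memory: 1024         # MB
    cpu: 1
    storage: 10          # GB
templates:
  - name: fedora-base
    os: fedora
    packages: [httpd, postgresql-server]

A seed data function in the recipe could then iterate over these sections and create the corresponding objects through the conductor api, with the robustness and status-parsing improvements noted in the Tasks list applied to each call.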