Development plan for dynamic config and back-end changes

Jeff Darcy jdarcy at redhat.com
Wed Oct 6 19:31:08 UTC 2010


I've been thinking about how to implement the "dynamic configuration"
task, which as a side benefit would eliminate our dependency on the
Jansson library.  One of the trickier parts is how this task interacts
with another task - modifying the replication code to use the same
back-end code for secondary stores as is already used for the primary
store.  The resulting plan involves a couple of "detour" steps that
should have no visible impact, but that enable subsequent steps.
Obviously, testing will be key.  The just-implemented replication tests
can give us some coverage, and manual tweaking of those tests (e.g. to
point our primary FS-backed depot at a manually configured S3-backed
depot) will give us more.  Fully automating the tests across all
possible back ends would be a major pain, dealing with credentials and
autostart and all sorts of other cruft, so I'd really rather keep that a
separate task; by February the two should converge so that we can ensure
coverage of all the changes.  So, here's the basic plan.

(1) Generate provider_t structures from the JSON blob at startup,
instead of on demand from get_provider (which becomes just a lookup
function).
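
As a sketch of what (1) might look like: the provider_t fields and the
list-head name here are made up, error handling is elided, and the real
structure obviously carries more than this.

    #include <stdlib.h>
    #include <string.h>
    #include <jansson.h>

    /* Hypothetical fields; the real provider_t has more. */
    typedef struct provider {
        char            *name;
        char            *type;
        struct provider *next;
    } provider_t;

    static provider_t *prov_head;   /* built once, at startup */

    /* Called once from main() with the parsed JSON blob. */
    void build_providers (json_t *blob)
    {
        void *iter = json_object_iter(blob);

        while (iter) {
            provider_t *prov = calloc(1,sizeof(*prov));
            json_t     *pj   = json_object_iter_value(iter);

            prov->name = strdup(json_object_iter_key(iter));
            prov->type = strdup(json_string_value(
                                json_object_get(pj,"type")));
            prov->next = prov_head;
            prov_head  = prov;
            iter = json_object_iter_next(blob,iter);
        }
    }

    /* get_provider degenerates into a pure lookup. */
    provider_t *get_provider (const char *name)
    {
        provider_t *prov;

        for (prov = prov_head; prov; prov = prov->next) {
            if (!strcmp(prov->name,name)) {
                break;
            }
        }
        return prov;
    }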

(2) Switch all of the replication code to use the semi-permanent
provider_t structures instead of JSON accessors.
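
To make (2) concrete, each call site changes roughly like this,
assuming provider_t has grown a host field ("host" is hypothetical,
but the Jansson accessors are the ones we'd be deleting):

    /* Before: dig through the JSON blob on every access. */
    static const char *get_host_old (json_t *prov_json)
    {
        return json_string_value(json_object_get(prov_json,"host"));
    }

    /* After: one field reference on a semi-permanent provider_t. */
    static const char *get_host_new (const provider_t *prov)
    {
        return prov->host;
    }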

(3) Split the replication and configuration parts of proxy.c into two
separate modules, with the JSON blob hidden behind the module boundary
or discarded entirely once the provider_t structures have been generated.
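
The boundary from (3) might be as small as this header (file and
function names invented); the point is that json_t never appears in it.

    /* config.h: everything the rest of iwhd gets to see.      */
    /* Note the complete absence of jansson.h and json_t here. */
    #ifndef IWHD_CONFIG_H
    #define IWHD_CONFIG_H

    typedef struct provider provider_t;

    int         parse_config (const char *path);
    provider_t *get_provider (const char *name);

    #endif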

(4) Add interfaces to add/remove provider_t structures dynamically, and
drive those from a script in a more JSON-friendly language instead of
parsing the JSON ourselves.
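
For (4), the C interface could stay deliberately dumb, something like
the prototypes below (names hypothetical); the script does all the
JSON-friendly parsing and just hands us plain fields.

    /* Add a provider from already-parsed fields.  The caller,
     * e.g. a handler driven by the external script, has done
     * all the JSON work before we get here. */
    int add_provider (const char *name, const char *type,
                      const char *host, unsigned int port,
                      const char *username, const char *password);

    /* Remove a provider by name; should refuse if it's the
     * primary store or has replication in flight. */
    int del_provider (const char *name);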

(5) Add a back-end dispatch-table pointer to provider_t, and populate
it in the config module.
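
A sketch of (5), with guessed-at operation names; the real set of
function pointers would mirror whatever backend.c exports today.

    typedef struct provider provider_t;

    /* One table per back-end type ("fs", "s3", "cf", ...). */
    typedef struct {
        const char *name;
        void (*init_func)   (provider_t *prov);
        int  (*get_func)    (provider_t *prov,
                             const char *bucket, const char *key);
        int  (*put_func)    (provider_t *prov,
                             const char *bucket, const char *key);
        int  (*delete_func) (provider_t *prov,
                             const char *bucket, const char *key);
    } backend_func_tbl;

    struct provider {
        char                   *name;
        char                   *type;
        char                   *host;
        unsigned int            port;
        char                   *username;   /* credentials move    */
        char                   *password;   /* here from globals   */
        const backend_func_tbl *func_tbl;   /* the new part of (5) */
    };

    /* In the config module, right after parsing each provider:
     *     prov->func_tbl = find_func_tbl(prov->type);
     * where find_func_tbl just scans a static table of tables. */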

(6) Make all of the backend.c functions take a provider_t argument (like
register_func already does) and get credentials etc. from there instead
of from globals.
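
So for (6), each backend.c function changes along these lines;
s3_put_child and do_put are stand-ins, not the real names.

    /* do_put() here is a hypothetical HTTP helper. */
    extern int do_put (const char *host, unsigned int port,
                       const char *user, const char *pass,
                       const char *bucket, const char *key);

    /* Before (roughly):
     *     int s3_put_child (const char *bucket, const char *key)
     * with host and credentials pulled from file-scope globals.
     *
     * After: everything comes from the provider we were handed,
     * the same way register_func already works. */
    int s3_put_child (provider_t *prov,
                      const char *bucket, const char *key)
    {
        return do_put(prov->host, prov->port,
                      prov->username, prov->password,
                      bucket, key);
    }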

(7) Convert all of the primary-store functions in rest.c to pass the
provider_t for the primary store to back-end functions.
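
After that, (7) is mostly mechanical; a typical rest.c call site would
end up looking like this ("main_prov" is a made-up handle for the
primary store).

    static provider_t *main_prov;  /* primary store, set at startup */

    int put_object (const char *bucket, const char *key)
    {
        /* Same dispatch path as any other provider. */
        return main_prov->func_tbl->put_func(main_prov,bucket,key);
    }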

(8) Convert all of the secondary-store functions in the newly-split
replication module to use the backend.c functions with appropriate
provider_t arguments.
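
With the earlier steps in place, (8) mostly collapses into dispatch
calls like this one (same invented names as above):

    /* Replicate one object to a secondary store.  The target's
     * func_tbl does the back-end-specific work; no CloudFiles
     * code hiding in the replication module any more. */
    int replicate_put (provider_t *target,
                       const char *bucket, const char *key)
    {
        return target->func_tbl->put_func(target,bucket,key);
    }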

(9) Create a proper CloudFiles/OpenStack back-end based on the code
currently stranded in the replication module.
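
The end state of (9) is just one more dispatch table; the cf_*
functions would be the code that's stranded in the replication module
today, moved and renamed (all names here are guesses).

    /* The code currently in the replication module, relocated. */
    void cf_init         (provider_t *prov);
    int  cf_get_child    (provider_t *prov,
                          const char *bucket, const char *key);
    int  cf_put_child    (provider_t *prov,
                          const char *bucket, const char *key);
    int  cf_delete_child (provider_t *prov,
                          const char *bucket, const char *key);

    /* The CloudFiles/OpenStack back-end, reduced to its table. */
    static const backend_func_tbl cf_func_tbl = {
        .name        = "cf",
        .init_func   = cf_init,
        .get_func    = cf_get_child,
        .put_func    = cf_put_child,
        .delete_func = cf_delete_child,
    };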

(10) Create other back-end modules (EBS, RHEV, VMware) without having to
worry about front-end/replication/config glop.

Any thoughts?  I think step 4 could be done later, or 9 could be done
earlier.  Pauses for integration or extra testing could also occur at
the conclusion of steps 3 or 8.  None of this is rocket science; it just
needs to get done and be done carefully.

