RFC: Script development

Jan Synacek jsynacek at redhat.com
Wed Jun 12 10:20:14 UTC 2013


On 06/11/2013 10:52 PM, Jan Safranek wrote:
> Hi,

Hello!

> 
> I've been talking to Stephen Gallagher about how to proceed with client script
> development in lmishell. The goal is to provide high-level functionality
> to manage remote systems without complete knowledge of the CIM API.
> 
> We agreed that:
> 
> - we should provide python modules with high-level functions
> 
> (we were thinking about nice classes, e.g. VolumeGroup with methods to
> extend/destroy/examine a volume group, but it would end up
> duplicating the API we already have. We also assume that our users are
> not familiar with OOP).

Define "our users". Are they admins that will use the scriptons from python
scripts? If yes, I think that degrading the CIM API from OOP to pure procedural
just doesn't sound right. Or maybe I'm just misunderstanding something here.
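
To make the contrast concrete, a rough sketch (the class and method names below
are made up, not an existing API): a thin OOP wrapper does not have to duplicate
anything, it can just group the same operations on the object they act on.

    # Procedural style as proposed:
    #   vg_extend(vg, devices)
    #   vg_destroy(vg)
    #
    # Thin OOP wrapper over the same operations:
    class VolumeGroup(object):
        def __init__(self, lmi_instance):
            # wraps the underlying LMI instance obtained from lmishell
            self._vg = lmi_instance

        def extend(self, devices):
            """Add the given block devices to this volume group."""
            raise NotImplementedError

        def destroy(self):
            """Remove this volume group."""
            raise NotImplementedError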

> 
> - these python functions try to hide the object model - we assume that
> administrators won't remember association names and won't use e.g.
> vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get the list
> of physical volumes of a vg. We want a nice vg_get_pvs(vg) function. We
> will expose CIM classes and properties, though.
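
(For the record, a minimal sketch of what such a wrapper could look like; it
uses only the association class quoted above, everything else is assumed:)

    def vg_get_pvs(vg):
        """Return the physical volumes of the given volume group.

        The association class name stays hidden inside this function;
        callers only pass the vg instance.
        """
        return vg.associators(AssocClass="LMI_VGAssociatedComponentExtent")
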
> 
> - these python functions are synchronous, i.e. they do stuff and return
> once the stuff is finished. They can do stuff in parallel inside (e.g.
> format multiple devices simultaneously), but from the outside perspective,
> the stuff is completed once the function returns.

What about leaving an option for the functions that are asynchronous underneath
to run asynchronously? Or creating both async and sync versions of such
functions? Because, again, forcing something that can run asynchronously into a
synchronous mode needlessly degrades what we already have. IMO we shouldn't
make things simpler than simple.
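
Something along these lines, perhaps (a sketch with invented names; it assumes
lmishell exposes some job object that can be waited on). The synchronous
variant just wraps the asynchronous one, so callers can pick either:

    def format_device_async(device, fs_type):
        """Start formatting the device and return a job object immediately."""
        raise NotImplementedError  # would call the CIM method asynchronously

    def format_device(device, fs_type):
        """Synchronous convenience wrapper around the async variant."""
        job = format_device_async(device, fs_type)
        job.wait()          # assumed job API
        return job.result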

> 
> (we were thinking about python functions just scheduling multiple
> actions and doing stuff massively in parallel, but we quickly got into a lot
> of corner cases)
> 
> - each high-level function takes an LmiNamespace parameter, which
> specifies the WBEM connection + the namespace on which it operates
> -> i.e. applications/other scripts can run our functions on multiple
> connections
> -> if the LmiNamespace is not provided by the caller, some 'global' one will
> be used (so users just connect once and this connection is then used for
> all high-level functions)

Having two extra parameters for each function sounds like huge API bloat. I
think that having some 'global' one, i.e. some kind of state object that the
underlying layer (lmishell?) would use, would be better.
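
I.e. something like the following (a sketch only, all names invented): the
module keeps one default connection/namespace and the functions fall back to
it when none is passed explicitly.

    _default_ns = None

    def set_default_namespace(ns):
        """Remember the connection/namespace to use when none is given."""
        global _default_ns
        _default_ns = ns

    def vg_create(name, devices, ns=None):
        """Create a volume group on the given namespace (or the default one)."""
        ns = ns if ns is not None else _default_ns
        if ns is None:
            raise RuntimeError("no connection/namespace available")
        raise NotImplementedError  # the actual VG creation would go here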

> 
> - we should probably split these high-level functions into several modules
> by functionality, i.e. have lmi.networking, lmi.storage.vg,
> lmi.storage.lv, etc.
> 
> - it should be easy to build command-line versions for these high-level
> functions
> -> it is not clear if we should mimic existing cmdline tools (mdadm,
> vgcreate, ip, ...) or make some cleanup (so creation of MD raid looks
> the same as creation of a VG)
> 
> - we should introduce some 'lmi' metacommand, which would wrap these
> command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and
> 'lmi ip addr show'. It's quite similar to the fedpkg or koji command-line
> utilities.
> 
> - 'lmi' metacommand could also have a shell:
> $ lmi shell
>> vgcreate mygroup /dev/sda1 /dev/sdb1
>> ip addr show

I would go with the metacommand style (à la virsh).
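
Roughly: one 'lmi' entry point that dispatches to subcommands, each of which
calls the corresponding high-level function. A sketch using argparse (the
vg_create() call in the comment is hypothetical):

    import argparse

    def main():
        parser = argparse.ArgumentParser(prog="lmi")
        sub = parser.add_subparsers(dest="command")

        vgcreate = sub.add_parser("vgcreate")
        vgcreate.add_argument("name")
        vgcreate.add_argument("devices", nargs="+")

        args = parser.parse_args()
        if args.command == "vgcreate":
            # would call the high-level scripton here, e.g.
            # vg_create(args.name, args.devices)
            pass

    if __name__ == "__main__":
        main()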

> 
> I tried to create a simple module for volume group management. I ran
> into several issues with lmishell (see trac tickets); attached you can
> find the first proposal. It's quite crude and misses several important
> aspects like proper logging and error handling.
> 
> Please look at it and let us know what you think. It is just a proposal,
> we can change it in any way.
> 
> Once we agree on the concept, we must also define strict documentation
> and logging standards so all functions and scripts are nicely documented
> and all of them provide the same user experience.
> 
> Jan
> 
> P.S.: note that I'm out of the office next week and have only sporadic email
> access this week.

As for the logging, maybe use something similar to the logging decorators we
now use in openlmi-storage? They would tell lmishell (which I suppose would be
used as an 'interpreter' for the scriptons) whether or not it should log.
That would make it easier to create a centralized logging policy/style/output.
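
Roughly this kind of decorator (a sketch; the real ones are in openlmi-storage):
every scripton gets wrapped, and the policy of where the messages go stays in
one place.

    import functools
    import logging

    LOG = logging.getLogger("lmi.scripts")

    def trace(func):
        """Log entry to and exit from a scripton function."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            LOG.debug("entering %s", func.__name__)
            try:
                return func(*args, **kwargs)
            finally:
                LOG.debug("leaving %s", func.__name__)
        return wrapper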

--
Jan Synacek
Software Engineer, Red Hat

