RFC: Script development

Jan Synacek jsynacek at redhat.com
Wed Jun 26 13:39:13 UTC 2013


On 06/21/2013 02:28 PM, Stephen Gallagher wrote:
> On 06/12/2013 06:20 AM, Jan Synacek wrote:
>> On 06/11/2013 10:52 PM, Jan Safranek wrote:
>>> Hi,
> 
>> Hello!
> 
>>>
>>> I've been talking to Stephen Gallagher how to proceed with client
>>> script development in lmishell. The goal is to provide high-level
>>> functionality to manage remote systems without complete knowledge
>>> of the CIM API.
>>>
>>> We agreed that:
>>>
>>> - we should provide python modules with high-level functions
>>>
>>> (we were thinking about nice classes, e.g. VolumeGroup with
>>> methods to extend/destroy/examine a volume group, but it would
>>> end up in duplicating the API we already have. We also assume
>>> that our users are not familiar with OOP).
> 
>> Define "our users". Are they admins that will use the scriptons
>> from python scripts? If yes, I think that degrading the CIM API
>> from OOP to pure procedural just doesn't sound right. Or maybe I'm
>> just misunderstanding something here.
> 
> 
> Most admins are not really familiar with object-oriented programming.
> The largest set of admins we're targeting tend towards bash scripting
> with command-line tools. We want to capture that group and encourage
> them to use OpenLMI.
> 
> By making the calls useful and procedural, we can get them to start
> using OpenLMI. We're not changing the underlying OO API underneath.
> Once people are using our interface, they will always have the option
> of extending their usage to call the low-level OpenLMI object-oriented
> functions.
> 
> The point of the lmishell is to be *very* easy for admins to use.
> Object-oriented programming is (perceived to be) hard and will scare
> away a fair number of admins.
> 

Ok, thank you for clarifying.

> 
>>>
>>> - these python functions try to hide the object model - we assume
>>> that administrators won't remember association names and won't
>>> use e.g. 
>>> vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to
>>> get a list of physical volumes of a vg. We want a nice vg_get_pvs(vg)
>>> function. We will expose CIM classes and properties, though.
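
Just to illustrate the difference for anyone reading along (vg_get_pvs is only
an example name, not an agreed interface):

# low-level: the caller has to know the right association class
pvs = vg.associators(AssocClass="LMI_VGAssociatedComponentExtent")

# high-level: a plain function that hides the CIM details
pvs = vg_get_pvs(vg)
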
>>>
>>> - these python functions are synchronous, i.e. they do stuff and
>>> return once the stuff is finished. They can do stuff in parallel
>>> inside (e.g. format multiple devices simultaneously) but from
>>> outside perspective, the stuff is completed once the function
>>> returns.
> 
>> What about leaving an option for functions that are asynchronous to
>> run asynchronously? Or creating both async and sync versions of such
>> functions. Because, again, forcing something that can be run
>> asynchronously into a synchronous mode is degrading what we already
>> have, needlessly. IMO we shouldn't make things more simple than
>> simple.
> 
> 
> Again, the point here is to simplify the interface into something that
> admins are comfortable with. Some of them will understand async
> processing, but most won't. In order for us to have an async
> interface, we'll need to provide a set of job-processing tools to wait
> for results and we'd have to train admins to know when to block and
> wait (or how to write a mainloop and do full async processing). Our
> view was that this was *far* too complicated for the average user (and
> as we went down the path of trying to figure out how to make it
> easier, we hit so many edge-cases that it became clear that providing
> async needs to be at earliest a "2.0" feature).
> 
> Remember again that what we're trying to do here is capture admins
> whose usual behavior is to just call command-line applications and
> wait for their return. This is little different from their
> perspective. Async is a difficult problem to solve, and while there
> are obvious performance gains to being able to run some activities in
> parallel, it introduces the possibility of race-conditions and other
> concurrency bugs.
> 
> 
>>>
>>> (we were thinking about python functions just scheduling
>>> multiple actions and doing stuff massively in parallel, but we
>>> quickly got into a lot of corner cases)
>>>
>>> - each high-level function takes a LmiNamespace parameter, which 
>>> specifies the WBEM connection + the namespace on which it
>>> operates -> i.e. applications/other scripts can run our functions
>>> on multiple connections -> if the LmiNamespace is not provided by
>>> caller, some 'global' one will be used (so users just connect
>>> once and this connection is then used for all high-level
>>> functions)
> 
>> Having two extra parameters for each function sounds like a huge
>> API bloat. I think that having some 'global' one, i.e. some kind of
>> a state object that the underlying layer (lmishell?) would use,
>> would be better.
> 
> 
> There's only one parameter, namespace (which encompasses both the
> connection and namespace on which it operates). There will effectively
> be a global object that will save the state. The idea is that when we
> create a connection, we'll set the global variable internally. If you
> create multiple connections, the last one created will be the default.
> 
> Then, if you want to run a routine for a connection *other* than the
> default, you will need to specify the namespace parameter.
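
If I understand the proposal correctly, from the caller's point of view it would
look roughly like this (vg_create is only an example name and the connect()
arguments are purely illustrative):

c1 = connect("server1.example.com", "root")   # becomes the default
c2 = connect("server2.example.com", "root")   # now c2 is the default

vg_create("mygroup", ["/dev/sda1", "/dev/sdb1"])                # runs against c2
vg_create("mygroup", ["/dev/sda1", "/dev/sdb1"], namespace=c1)  # explicit override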

Hmm, I don't think that we really have to pollute the high-level API with this
one parameter.

Currently, lmishell doesn't have any internal knowledge of its active
connections. What if lmishell established an internal object for every connect()
that is called and kept these connection objects in a list, for example? Maybe
it would even make sense to have something like an iterator that would point to
the currently selected connection, so it can be used as a default for all the
high-level calls.
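
A very rough sketch of what I have in mind (all names are made up, this is not
meant as an implementation proposal):

# hypothetical lmishell internals
_connections = []                        # one entry per successful connect()
_currently_selected_connection = None    # default for all high-level calls

def _register_connection(conn):
    # called from connect(); the newest connection becomes the default
    global _currently_selected_connection
    _connections.append(conn)
    _currently_selected_connection = conn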

Then, lmishell could also be extended to define something like

def set_global_state(connection):
    # make the given connection the default for all high-level calls
    global _currently_selected_connection
    _currently_selected_connection = connection

def get_global_state():
    # return the connection that high-level functions use by default
    return _currently_selected_connection

Or perhaps, if it makes sense to have multiple selected connections, those
functions could operate with lists/tuples. We could then define our high-level
functions like so:

def create_mount(device, mountpoint, options=None, flags=None):
    c = get_global_state()
    # use c here to do all the low-level stuff
    # ...
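
A typical session would then look something like this (the connect() arguments
and the create_mount calls are again only illustrative):

c = connect("server.example.com", "root")      # implicitly becomes the default
create_mount("/dev/sda1", "/mnt/data")         # no connection parameter needed
create_mount("/dev/sdb1", "/mnt/backup", options="ro")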

Does this sound reasonable? I may be repeating something that has already been
written here, but I wanted to be explicit about it.

> 
> So for the majority of cases, this argument will simply be left out.
> 
> 
>>>
>>> - we should probably split these high-level functions into several
>>> modules by functionality, i.e. have lmi.networking and
>>> lmi.storage.vg, lmi.storage.lv etc.
>>>
>>> - it should be easy to build command-line versions of these
>>> high-level functions -> it is not clear if we should mimic
>>> existing cmdline tools (mdadm, vgcreate, ip, ...) or do some
>>> cleanup (so creation of an MD raid looks the same as creation of a
>>> VG)
>>>
>>> - we should introduce some 'lmi' metacommand, which would wrap
>>> these command line tools, like 'lmi vgcreate mygroup /dev/sda1
>>> /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg
>>> or koji command line utilities.
>>>
>>> - 'lmi' metacommand could also have a shell: $ lmi shell
>>>> vgcreate mygroup /dev/sda1 /dev/sdb1
>>>> ip addr show
> 
>> I would go with the metacommand style (ala virsh).
> 
> I'm in favor of the metacommand style as well. As Jan and I discussed
> that day, much of the point of lmishell is going to be to reduce the
> number of *different* commands an admin needs to learn. Thus,
> duplicating the existing commands would go against that effort.
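
For what it's worth, the dispatching in such a metacommand could be fairly thin.
A rough sketch (the subcommand layout is only an example, nothing here is a
committed interface):

import argparse

def main():
    parser = argparse.ArgumentParser(prog="lmi")
    sub = parser.add_subparsers(dest="command")

    vgcreate = sub.add_parser("vgcreate", help="create a volume group")
    vgcreate.add_argument("name")
    vgcreate.add_argument("devices", nargs="+")

    args = parser.parse_args()
    if args.command == "vgcreate":
        # would call the corresponding high-level function, e.g.
        # vg_create(args.name, args.devices)
        pass

if __name__ == "__main__":
    main()
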
> 
> 
>>>
>>> I tried to create a simple module for volume group management. I
>>> ran into several issues with lmishell (see trac tickets),
>>> attached you can find first proposal. It's quite crude and misses
>>> several important aspects like proper logging and error
>>> handling.
>>>
>>> Please look at it and let us know what you think. It is just a
>>> proposal, we can change it in any way.
>>>
>>> Once we agree on the concept, we must also define strict
>>> documentation and logging standards so all functions and scripts
>>> are nicely documented and all of them provide the same user
>>> experience.
>>>
>>> Jan
>>>
>>> P.S.: note that I'm out of the office next week and will have only sporadic
>>> email access this week.
> 
>> As for the logging, maybe use something similar to the logging
>> decorators we now use in openlmi-storage? They would tell the
>> lmishell (which I suppose would be used as an 'interpreter' for the
>> scriptons) whether it should log or not. That would make it
>> easier to create a centralized logging policy/style/output.
> 

-- 
Jan Synacek
Software Engineer, Red Hat

