On Sat, Jan 07, 2017 at 12:48:22PM +0100, Jiri Pirko wrote:
Tue, Nov 29, 2016 at 04:45:41PM CET, olichtne@redhat.com wrote:
Update after today's meeting; contains some changes and some notes to think about. It's probably best to just diff it against the previous version...
I've been pulled away from LNST a bit lately, so please take that into consideration while reading my comments :) I'll try not to look too dumb :)
Hi, so I went ahead with a prototype implementation of the current version of this spec document (so we can try it out and see if anything needs to be changed). I wanted to send a "dirty" patch before Christmas so everyone could take a look at what it looks like, but decided against it. I decided to rework the internals of launching jobs/processes on the slave so it fits more nicely with the proposed API, and it was still in a broken state before my PTO.
I'll send a full status update on Friday, possibly with a new version of the spec doc (based on the input provided and what I've found during implementation) and possibly with a "dirty" patch for experimentation.
===============================================================================
- test modules
class BaseTestModule:
    def __init__(self, **kwargs):
        #by default loads the params into self.params - no checks
        #pseudocode:
        for x in vars(self.__class__):
            param_class = getattr(self, x)
            if isinstance(param_class, BaseType):
                try:
                    val = kwargs.pop(x)
                except KeyError:
                    if param_class.is_mandatory():
                        raise TestModuleError("Option %s is mandatory" % x)
                else:
                    setattr(self.params, x, param_class.construct(val))
        for x in kwargs:
            log.error("Undefined parameter %s" % x)
        if len(kwargs):
            raise TestModuleError("Undefined TestModule parameters")

    def run(self):
        #needs to be overridden - raise an exception to notify the test
        #developer
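For illustration, here's a minimal runnable sketch of the same loading logic, including a default-value fallback the pseudocode above glosses over; BaseType, Params and TestModuleError are stand-ins for whatever the final classes end up being:

    import logging

    class TestModuleError(Exception):
        pass

    class BaseType(object):
        #stand-in for the parameter type base class
        def __init__(self, default=None, mandatory=False):
            self.default = default
            self.mandatory = mandatory

        def is_mandatory(self):
            return self.mandatory

        def construct(self, value):
            #real types would validate/convert here
            return value

    class Params(object):
        #plain namespace the checked values get loaded into
        pass

    class BaseTestModule(object):
        def __init__(self, **kwargs):
            self.params = Params()
            #scan class-level parameter definitions
            for name in dir(self.__class__):
                param_def = getattr(self.__class__, name)
                if not isinstance(param_def, BaseType):
                    continue
                if name in kwargs:
                    setattr(self.params, name,
                            param_def.construct(kwargs.pop(name)))
                elif param_def.is_mandatory():
                    raise TestModuleError("Option %s is mandatory" % name)
                else:
                    setattr(self.params, name, param_def.default)
            for name in kwargs:
                logging.error("Undefined parameter %s", name)
            if kwargs:
                raise TestModuleError("Undefined TestModule parameters")

        def run(self):
            raise NotImplementedError("run() must be overridden")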
class MyTest(BaseTestModule):
    param = ParamType(mandatory=True)
    param2 = ParamType2()
    param3 = Multiparam(ParamType())

    #optional __init__
    #def __init__(self, **kwargs):
    #    super(MyTest, self).__init__(**kwargs)
    #    #additional tester defined checks

    def run(self):
        #do my test
        #parameters available in self.params
#in Task:
import lnst
#module lnst.tests will dynamically look for module classes in configured
#locations, similar to how we do it now
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
I would still like to have a one-shot possibility. Most of the time this is a 2-phase thing: you create a test module instance and you call run. I would like to have the possibility to just run. Just a thought...
It is of course possible to skip the first assignment, though I'm not sure that's an "elegant" approach:
m1.run(lnst.modules.Ping(some_parameters))
or if you do this:
from lnst.modules import Ping
#then you can just:
m1.run(Ping(some_parameters))
I'm not sure there's a better way, I'll try to think about it, I'm also open to ideas :).
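One idea that comes to mind: a one-shot classmethod on the module base class. The "run_on" name is hypothetical, not in the spec:

    class BaseTestModule(object):
        ...
        @classmethod
        def run_on(cls, host, **kwargs):
            #construct the module and immediately run it on the given host
            return host.run(cls(**kwargs))

    #which would allow:
    #    Ping.run_on(m1, dst=m2.if1.ip[0], count=100)
    #instead of:
    #    m1.run(Ping(dst=m2.if1.ip[0], count=100))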
================================================
- Tasks:
class BaseTask(object):
    def __init__(self):
        #initialize instance specific requirements
        self.requirements = Requirements()
        for x in dir(self):
            val = getattr(self, x)
            if isinstance(val, Requirement):
                setattr(self.requirements, x, val)

        self.params = object()
        for x in dir(self):
            val = getattr(self, x)
            if isinstance(val, ParamType):
                setattr(self.params, x, val)

    def test(self):
        raise Exception("Method test MUST be defined.")
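A self-contained sketch of that class-attribute scan, with dummy Requirement/Requirements classes so it actually runs:

    class Requirement(object):
        def __init__(self, **kwargs):
            self.kwargs = kwargs

    class HostReq(Requirement):
        pass

    class Requirements(object):
        #plain namespace holding the collected requirements
        pass

    class BaseTask(object):
        def __init__(self):
            #collect class-level Requirement definitions into an instance
            #namespace so the matching code has one place to look
            self.requirements = Requirements()
            for name in dir(self.__class__):
                val = getattr(self.__class__, name)
                if isinstance(val, Requirement):
                    setattr(self.requirements, name, val)

        def test(self):
            raise NotImplementedError("Method test MUST be defined.")

    class MyTask(BaseTask):
        m1 = HostReq(arch="x86_64")
        m2 = HostReq(arch="x86_64")

    task = MyTask()
    print(sorted(vars(task.requirements)))   #['m1', 'm2']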
class MyTask(lnst.BaseTask):
    #class-wide definition of requirements
    m1 = HostReq(arch="x86_64", ...)
    m1.if1 = IfaceReq(label="xyz", driver="mlx", ...)

    #m1.params = IfaceReq()
    #m1.devs = IfaceReq()
    #raise Exception("param/devs name is a reserved keyword")

    m2 = HostReq(param="val", ...)
    m2.if1 = IfaceReq(label="xyz", param="val", ...)

    mtu = IntType(default=1500)
So I understand this IntType and the "ParamType*" used in the test module are the same classes? If not, I think they should be, if possible.
The same class tree, yes, that's the idea. As far as I know it shouldn't be a problem, unless I hit some unforeseen issue during implementation.
    def __init__(self, **kwargs):
        super(MyTask, self).__init__()

        #do something with kwargs
        interface_driver = kwargs["driver"]
        self.reqs.m1.if1.driver = interface_driver
        #adjust instance specific requirements
        self.reqs.m3 = HostReq(...)

    def test(self):
        self.matched.m1.run(Module)
        self.matched.m1.run("command")
Hmm, thinking about it, I probably like this first option better. Any thoughts about this?
    #or
    def test(self, m1, m2, m3, m4, ...):
Yeah, looks a bit messy...
We discussed this with you at a meeting preceding this version of the spec doc. We decided to keep it like this for now; in my current prototype implementation I have just the first one. Adding the second one should be easy, but we can leave it for now and decide to add it later (it's easier to add to an API than to remove from it).
        m1.run(Module)
        m2.run("command")
        m1.params.arch == "x86_64"
    #incorrect...
    def test(self, m2, test_machine1):
================================================
- Running Tasks:
my_test_script.py:

    #!/usr/bin/env python
    from MyTasks import MyTask
    from lnst import Controller

    task_instance = MyTask(mtu="5000")

    ctl = Controller(config="/etc/lnst-ctl.conf")
    ctl.run(task_instance)
chmod +x my_test_script.py
./my_test_script.py args
Hmm, who parses the args? Shouldn't you pass them to Controller or MyTask?
it's named MY_test_script for a reason... YOU parse your own arguments however you want to parse them. I think in this case LNST should simply act as a library - which is why you have to create a Controller and call run() on it yourself. I'd say that opens a lot of possibilities for how you create simple or more complex test scripts - note that I said "test scripts", not "recipes" or anything along those lines.
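For example (just one possible pattern, nothing LNST would mandate), the script could parse its own arguments with argparse and feed them to the Task:

    #!/usr/bin/env python
    import argparse
    from MyTasks import MyTask
    from lnst import Controller

    parser = argparse.ArgumentParser()
    parser.add_argument("--mtu", type=int, default=1500)
    parser.add_argument("--config", default="/etc/lnst-ctl.conf")
    args = parser.parse_args()

    ctl = Controller(config=args.config)
    ctl.run(MyTask(mtu=args.mtu))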
I have a similar concern here as for the test modules - you always do 2 steps:

    ctl = Controller(config="/etc/lnst-ctl.conf")
    ctl.run(task_instance)
Can this be done in a nicer way? One shot?
Not sure; like I've said, I want the LNST Controller to be usable as a library. I think for one-shots the lnst-ctl executable is a more fitting way of starting a test.
OR
lnst-ctl -d run MyTask.py -- mtu=8000
# looks for the NAME class in the NAME.py file (MyTask in this case) for which
# the condition "isinstance(NAME, BaseTask)" must be True

# could also run for all classes in the file where "isinstance(x, BaseTask)" is
# True, with the option to restrict to a specific task class (or just run the
# first one?)... lnst-ctl rewritten to do the same as manually running the
Not just the first one. Run all.
The problem with that is how to pass parameters to those classes. I can find all the Task classes and create instances of all of them, but what would the lnst-ctl CLI interface look like for the parameters used to create those Tasks?
One option is to just run everything with default parameters (but then there's a problem with parameters that don't have a default value). And if the user wants to specify parameters, he'll have to select one specific Task (by class name), and then he'll be able to set parameters for that one class.
Does that make sense or do we want something else?
# task from its own python script
Aliases lose their meaning - they're now parameters passed to the MyTask __init__. When using the lnst-ctl CLI, use "-- task_params"?? That might not work for multiple tasks, though...
That is fine. I hated aliases anyway. These are options per task. We can come up with some standard options and define them somewhere, so they don't get re-invented for every new task.
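For the class-discovery part, a hypothetical sketch of what lnst-ctl could do internally; the "-- key=val" syntax and BaseTask come from the spec above, the helper names are made up:

    import importlib.util  #python3; the imp module would be the py2 equivalent
    from lnst import BaseTask

    def find_tasks(path):
        #load the file as a module and collect all BaseTask subclasses
        spec = importlib.util.spec_from_file_location("task_file", path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        return [obj for obj in vars(mod).values()
                if isinstance(obj, type)
                and issubclass(obj, BaseTask)
                and obj is not BaseTask]

    def parse_task_params(argv):
        #turn ["mtu=8000", "driver=mlx"] (everything after "--") into kwargs
        return dict(arg.split("=", 1) for arg in argv)

    #lnst-ctl -d run MyTask.py -- mtu=8000 would then roughly do:
    #    for cls in find_tasks("MyTask.py"):
    #        ctl.run(cls(**parse_task_params(["mtu=8000"])))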
================================================
- Tester facing API, inside the test() method:
#!!!! breakpoints!!!
Host objects available in self.matched.selector_name:
class Host: #HostAPI??? name can change
No "API" in class name please! That is just horrible.
In my "prototype" implementation I think I have some API names here and there to distinguish them from the internal objects but I think I agree, I'll work on renaming them...
#attributes:
    # dynamically filled object of Host attributes such as architecture and
    # so on. A use example in test() would look like this:
    #     if host.params.arch == "x86":
Calling this a "param" sounds odd...
"description" comes to mind since these are just values that describe the matched host.
How does "host.description.arch", "host.desc.arch", or other variation of the "description" sound?
# I separated this into the "params" object so I can overwrite its # __getattr__ method and return None/UnknownParam exception for unknown # parameters, and to avoid name conflicts with other attributes # they should also be iterable by: # for param in host.params: params = object()
    # dynamically filled object of NetDevice objects accessible directly as
    # object attributes:
    #     host.eth0.set_ip(...)
I think we need to make a list of all the possible setters and getters, and think about how to unify the names and behaviour of most of them.
# I separated this into the "ifaces" object to avoid name conflicts with # other attributes # creation of new NetDevices should be possible through simple assignement: # m1.team0 = TeamDevice(...) # assignement of an incompatible Type or to an existing Device object will # return an exception # to deconfigure+remove call m1.team0.remove() __devs = object()
    # device iterator for the Host object:
    #     for dev in host:
    def run(what, bg=False, fail=False, timeout=60, path="", json=False, netns)
        # will run "what" on the remote host
        # "what" is either a Module object, or a string command that will be
        #     executed as a bash command
        # "bg" when True, runs "what" in the background - the run() call
        #     returns immediately, and "timeout" is ignored; the background
        #     process can be controlled through the returned Job object
        # "fail" if True then the Job is expected to fail, and will be
        #     reported as PASSed if it does
        # "timeout" in seconds, determines how long to block test execution
        #     before killing the Job. Only applies when running in the
        #     foreground
        # "path" changes the current working directory to the specified path
        #     before "what" is executed and changes back after execution is
        #     finished.
        # IGNORE FOR NOW
        # "tool" changes the current working directory to the directory of a
        #     specified test_tool before "what" is executed and changes back
        #     after execution is finished.
        # !!!!!!! this is from the current API and I'm not yet sure how we
        # !!!!!!! want to handle those... so for now I'll keep it
        # IGNORE FOR NOW
Yeah, just remove this weirdness...
ack, this could be replaced with the file transfer methods and then working with the transferred file through the normal run() means (e.g. passing the path as an argument to a module)
# "json" if True will attempt to parse the returned stdout of the Job # as json into a dictionary # "netns" Job will be run in the specified network namespace # Returns a Job object # !! Exceptions?!?! # !!!THINK ABOUT THESE...
Any news in this area?
the "think about these" is AFAIK for the sysfs_set and procfs_set methods, I haven't looked at those yet... However I'm 100% sure I don't want them to work as a special cas of a "run" command like it was until now (both API and internally)
    def __set(path, value)
        # copied from old API, provides a shortcut for
        #     "echo $value >/proc/or/sys/path"
        # and restores the original value when the test is finished

    def sysfs_set(path, value):
        # check if path starts with "/sys"?
        self.__set(path, value)

    def procfs_set(path, value):
        self.__set("/proc/"+path, value)
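A sketch of how __set could remember the original value for that restoration; the _restore_values bookkeeping and restore_system_config are assumptions, not part of the spec:

    def __set(self, path, value):
        #save the original value so it can be put back when the test is
        #finished
        original = self.run("cat %s" % path).stdout
        self._restore_values.append((path, original))
        self.run("echo '%s' > %s" % (value, path))

    def restore_system_config(self):
        #called during cleanup; restores the saved values in reverse order
        for path, original in reversed(self._restore_values):
            self.run("echo '%s' > %s" % (original, path))
        self._restore_values = []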
    def send_file(ctl_path="", slave_path="", recursive=False)
        # copies the specified file from the controller to the specified
        # destination path; if recursive == True and ctl_path refers to a
        # directory it copies the entire directory
def recv_file(recv_path="", ctl_path="", recursive=False)
    def {enable,disable}_service(service)
        # copied from old API, enables or disables the specified service
    def del_device(name)
        # removes the specified device; probably easier (more logical?) to do
        # this than "devs.name = None", and "del devs.name" would be
        # unreliable
        # !!! REMOVE
class Device: #DeviceAPI, InterfaceAPI? name can change...
    # attributes:
    # dynamically created Device attributes such as driver and so on. A use
    # example in test() would look like this:
    #     if host.devs.eth0.driver == "ixgbe":
    # achieved through rewriting the __getattr__ method of the Device class
    # should return None or throw an UnknownParam exception for unknown
    # parameters
    # this should directly mirror the Device objects that are managed by the
    # InterfaceManager on the Slave
    # !!! predefine the params, assignment will set them, sync on set - wait
    # !!! on the slave for the set to finish and return the current status of
    # !!! everything else as well
    # eg:
    driver = something
    mtu = something
    ips = [IpAddress, ...]
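For the "sync on set" part, one way to sketch it is with properties that talk to the slave on assignment; get_dev_attr/set_dev_attr are placeholder RPC calls, not real LNST methods:

    class Device(object):
        def __init__(self, slave, ifname):
            self._slave = slave   #placeholder for the slave RPC proxy
            self._ifname = ifname

        @property
        def mtu(self):
            #always reflect the current state on the slave
            return self._slave.get_dev_attr(self._ifname, "mtu")

        @mtu.setter
        def mtu(self, value):
            #blocks until the slave confirms the change and reports back the
            #updated state of the device
            self._slave.set_dev_attr(self._ifname, "mtu", value)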
class Job: #ProcessAPI? name can change...
    #attributes:

    # True if the Job finished, False if it's still running in the background
    finished = bool

    # contains the result data returned by the Job, None for bash commands
    result = object

    # contain the stdout and stderr generated by the Job, None for Module Jobs
    stdout = ""
    stderr = ""

    # simple True/False value indicating success/failure of the Job
    passed = bool
    def wait(timeout=0):
        # for background jobs, will wait until the Job finishes
        # "timeout" in seconds, determines how long to wait. After the
        # timeout is reached nothing happens, and the status of the Job can
        # be checked with the "finished" attribute. If timeout=0, wait
        # forever.
    def kill(signalnum=signal.SIGKILL):
        # sends the specified signal to the process of the Job running in
        # the background
        # "signalnum" the signal to be sent
    def __str__(self):
        # for easy "print m1.run(what)"
        return self.stdout + self.stderr
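And a short illustration of the Job attributes above (again hypothetical usage; where the json=True dictionary lands is my assumption):

    job = m1.run("ethtool -i eth0")
    if job.passed:
        print(job.stdout)

    #with json=True the parsed dictionary would presumably end up in result
    stats = m1.run("cat /tmp/stats.json", json=True)
    print(stats.result)

    #__str__ concatenates stdout and stderr, so this just works:
    print(job)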
Thanks for taking care of this!