Hi all,
For the past couple of weeks I've been going over the recordings of the meetings we've had wrt the new Python API of LNST. I've been collecting everything into a single file that I'm appending to this email. I'm sending it here so that everyone can join the discussion before the implementation itself begins. I'll warn you though... it's LONG :)
!!!NOTE it's not complete yet - I'm sending it now because we have an upstream meeting planned for later today. In particular, the Device/Interface API is not finished.
The structure of the file is as follows:
1. commented pseudocode of what Test Modules will look like - they'll be instantiated on the Controller and sent ad hoc to the slave where they'll be executed --> no more synchronization on test start...
2. commented pseudocode of what Tasks will look like - they'll define both the network requirements and the test execution.
3. a short, rough idea of how the tests/recipes will be executed.
4. 1st version of the API "specification"/documentation. Here I tried to go through the *API objects we currently have and make them more "Pythonic", thinking about how they'll be used from a Task. I wrote it as class-method-attribute definitions with some documentation, so hopefully it makes some sense... As I've said before, Device/Interfaces are not complete, so there's a lot missing there.
Please take a look and provide feedback. I'm sure parts other than the Device/Interface APIs are missing something as well, so I'll appreciate any help :).
================================================================================
new_api file:
1. test modules
class BaseTestModule:
    def __init__(self, **kwargs):
        #by default loads the params into self.params - no checks
        #pseudocode:
        for x in vars(self):
            if isinstance(x, BaseType):
                param_class = self.getattr(x)
                try:
                    val = kwargs[x]
                except KeyError:
                    if param_class.is_mandatory():
                        raise TestModuleError("Option x is mandatory")
                self.setattr(x.params, param_class.construct(val))
                del kwargs[x]

        for x in kwargs.keys():
            log.error("Undefined parameter x")
        if len(kwargs):
            raise TestModuleError("Undefined TestModule parameters")

    def run():
        #needs to be overridden - throw an exception to notify the test developer

class MyTest(BaseTestModule):
    param = ParamType()
    param2 = ParamType2()
    param3 = Multiparam(ParamType())

    #optional __init__
    #def __init__(self, **kwargs):
    #    super(MyTest).__init__(kwargs)
    #    #additional tester defined checks

    def run():
        #do my test
        #parameters available in self.params

#in Task:
import lnst
#module lnst.modules will dynamically look for module classes in configured
#locations, similar to how we do it now
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
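To make the parameter handling above more concrete, here is a minimal runnable sketch of the declarative-parameter idea. The Param/IntParam/TestModuleError names and the use of a plain dict for self.params are illustrative assumptions, not the final API:

    # Minimal sketch of the declarative-parameter idea; names are illustrative only.
    class TestModuleError(Exception):
        pass

    class Param(object):
        def __init__(self, mandatory=False, default=None):
            self.mandatory = mandatory
            self.default = default

        def construct(self, value):
            # subclasses add type checking/conversion here
            return value

    class IntParam(Param):
        def construct(self, value):
            return int(value)

    class BaseTestModule(object):
        def __init__(self, **kwargs):
            # collect the Param declarations from the class and fill self.params
            self.params = {}
            for name in dir(type(self)):
                decl = getattr(type(self), name)
                if not isinstance(decl, Param):
                    continue
                if name in kwargs:
                    self.params[name] = decl.construct(kwargs.pop(name))
                elif decl.mandatory:
                    raise TestModuleError("Option %s is mandatory" % name)
                else:
                    self.params[name] = decl.default
            if kwargs:
                raise TestModuleError("Undefined parameters: %s" % ", ".join(kwargs))

        def run(self):
            raise NotImplementedError("Test modules must override run()")

    class Ping(BaseTestModule):
        dst = Param(mandatory=True)
        count = IntParam(default=10)
        interval = Param(default=1)

        def run(self):
            # a real module would execute the ping and evaluate the result here
            print("pinging %s %s times" % (self.params["dst"], self.params["count"]))

    ping = Ping(dst="192.168.0.2", count=100, interval=0.1)
    ping.run()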
================================================
2. Tasks:
class BaseTask(object):
    def __init__(self):
        #initialize instance specific requirements
        self.requirements = Requirements()
        for x in dir(self):
            val = getattr(self, x)
            setattr(self.requirements, x, val)

    def test():
        raise Exception("Method test MUST be defined.")

class MyTask(lnst.BaseTask):
    #class-wide definition of requirements
    m1 = HostSel(param="val", ...)
    m1.if1 = IfaceSel(l2net="xyz", param="val", ...)

    m2 = HostSel(param="val", ...)
    m2.if1 = IfaceSel(l2net="xyz", param="val", ...)

    def __init__(self, **kwargs):
        super(self, lnst.BaseTask).__init__()

        #do something with kwargs
        #adjust instance specific requirements
        self.requirements.m3 = HostSel(...)

    def test():
        self.matched.m1.run(Module)
        self.matched.m1.run("command")

    #or

    def test(m1, m2):
        m1.run(Module)
        m2.run("command")
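As a sanity check of the requirement-collection idea in BaseTask.__init__ above, here is a small self-contained sketch; the HostSel/IfaceSel/Requirements classes are simple stand-ins, not the real implementation:

    # Stand-in stubs for the selector classes, just to show the collection mechanism.
    class BaseRequirement(object):
        def __init__(self, **params):
            self.params = params

    class HostSel(BaseRequirement):
        pass

    class IfaceSel(BaseRequirement):
        pass

    class Requirements(object):
        """Plain namespace the class-level requirements get copied into."""
        pass

    class BaseTask(object):
        def __init__(self):
            # copy class-level requirement declarations onto the instance so a
            # subclass __init__ can adjust them per instance
            self.requirements = Requirements()
            for name in dir(type(self)):
                val = getattr(type(self), name)
                if isinstance(val, BaseRequirement):
                    setattr(self.requirements, name, val)

        def test(self):
            raise NotImplementedError("Method test MUST be defined.")

    class MyTask(BaseTask):
        m1 = HostSel(arch="x86_64")
        m1.if1 = IfaceSel(l2net="xyz")      # nested interface requirement
        m2 = HostSel()
        m2.if1 = IfaceSel(l2net="xyz")

    task = MyTask()
    print(task.requirements.m1.if1.params)   # {'l2net': 'xyz'}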
================================================
3. Running Tasks:
from MyTasks import MyTask
import lnst

task_instance = MyTask(params)

lnst(args)
lnst.run(task_instance)
OR
lnst-ctl -d run MyTask.py -- task_params
# looks for the NAME class in the NAME.py file (MyTask in this case), for which
# the condition "isinstance(NAME, BaseTask)" must be True

# could also run all classes in the file for which "isinstance(x, BaseTask)" is
# True, with the option to restrict this to a specific task class (or just run
# the first one?)... lnst-ctl rewritten to do the same as manually running the
# task from its own python script
First do the second option - it's easier since we have this already - then refactor the controller into the lnst Controller class needed for the first option.
Aliases lose meaning - they become parameters passed to the MyTask __init__. When using the lnst-ctl CLI, use "-- task_params"?? This might not work for multiple tasks.
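For the class-discovery part of the lnst-ctl variant above, something like the following could work. This is a hedged sketch: the helper name is made up, and note that for classes the actual check would be issubclass rather than isinstance:

    # Hypothetical sketch of how lnst-ctl could find Task classes in a recipe file.
    import imp
    import inspect

    def find_task_classes(path, base_cls):
        """Return all classes defined in the file at 'path' that derive from base_cls."""
        module = imp.load_source("user_task_module", path)
        found = []
        for name, obj in inspect.getmembers(module, inspect.isclass):
            # skip the base class itself; keep anything derived from it
            if issubclass(obj, base_cls) and obj is not base_cls:
                found.append(obj)
        return found

    # possible usage from the lnst-ctl executable:
    #     task_classes = find_task_classes("MyTask.py", BaseTask)
    #     for cls in task_classes:
    #         instance = cls(**task_params)   # parsed from "-- task_params"
    #         controller.run(instance)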
================================================
4. Tester facing API, inside the test() method:
Host objects available in self.matched.selector_name:
class Host: #HostAPI??? name can change
    #attributes:

    # dynamically filled object of Host attributes such as architecture and
    # so on. A use example in test() would look like this:
    #     if host.params.arch == "x86":
    # I separated this into the "params" object so I can overwrite its
    # __getattr__ method and return None/UnknownParam exception for unknown
    # parameters, and to avoid name conflicts with other attributes
    params = object()

    # dynamically filled object of NetDevice objects accessible directly as the
    # object attributes:
    #     host.ifaces.eth0.set_ip(...)
    # I separated this into the "ifaces" object to avoid name conflicts with
    # other attributes
    # creation of new NetDevices should be possible through simple assignment:
    #     m1.devs.new_team0 = TeamDevice(...)
    # assignment of an incompatible Type or to an existing Device object will
    # raise an exception
    # assignment of None? or del devs.new_team0 to deconfigure the device?
    devs = object()

    def run(what, bg=False, fail=False, timeout=60, path="", json=False, netns)
        # will run "what" on the remote host
        # "what" is either a Module object, or a string command that will be
        #     executed as a bash command
        # "bg" when True, runs "what" in the background - the run() call
        #     immediately returns, "timeout" is ignored, and the background
        #     process can be controlled through the returned Job object
        # "fail" if True then the Job is expected to fail, and will be reported
        #     as PASSed if it does
        # "timeout" in seconds, determines how long to block test execution
        #     before killing the Job. Only used when running in the foreground
        # "path" changes the current working directory to the specified path
        #     before "what" is executed and changes back after execution is
        #     finished.
        # "tool" changes the current working directory to the directory of a
        #     specified test_tool before "what" is executed and changes back
        #     after execution is finished.
        #     !!!!!!! this is from the current API and I'm not yet sure how we
        #     !!!!!!! want to handle those... so for now I'll keep it
        # "json" if True will attempt to parse the returned stdout of the Job
        #     as json into a dictionary
        # "netns" the Job will be run in the specified network namespace
        # Returns a Job object

    def config(option, value)
        # copied from old API, provides a shortcut for
        #     "echo $value > /proc/or/sys/path"
        # and returns the original value when the test is finished

    def sync_resources(srcpath="", dstpath="", recursive=False)
        # copies the specified file from the controller to the specified
        # destination path; if recursive == True and srcpath refers to a
        # directory it copies the entire directory

    def {enable, disable}_service(service)
        # copied from old API, enables or disables the specified service

    def add_{bond, bridge, ...}(params)
        # this is how we can currently dynamically create net devices on the
        # hosts. Even with the new assignment-based approach this could still
        # be useful, though the method would need to be dynamically created to
        # avoid useless work when adding a new netdev type. Something like:
        #     add_device("name", "Type", params)
        # which would then do self.devs.name = TypeDevice(params) ??
        # (see the sketch below)

    def del_device(name)
        # removes the specified device, probably easier (more logical?) to do
        # this than "devs.name = None", and "del devs.name" would be unreliable
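For the add_device() idea mentioned above, here is a tiny self-contained sketch of what the dynamic variant could look like; the DEVICE_TYPES registry and the TeamDevice stub are assumptions for illustration only:

    # Hypothetical sketch: one generic add_device() instead of per-type add_* methods.
    class TeamDevice(object):
        def __init__(self, **params):
            self.params = params

    DEVICE_TYPES = {"Team": TeamDevice}          # stand-in for a real device-type registry

    class Host(object):
        def __init__(self):
            self.devs = type("Devs", (object,), {})()   # simple namespace for devices

        def add_device(self, name, dev_type, **params):
            # equivalent to "self.devs.<name> = <Type>Device(params)"
            dev = DEVICE_TYPES[dev_type](**params)
            setattr(self.devs, name, dev)
            return dev

    m1 = Host()
    m1.add_device("team0", "Team", config="activebackup")
    print(m1.devs.team0.params)                  # {'config': 'activebackup'}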
class Device: #DeviceAPI, InterfaceAPI? name can change...
    # attributes:

    # dynamically created Device attributes such as driver and so on. A use
    # example in test() would look like this:
    #     if host.devs.eth0.driver == "ixgbe":
    # achieved through rewriting the __getattr__ method of the Device class;
    # should return None or throw an UnknownParam exception for unknown parameters
    # this should directly mirror the Device objects that are managed by the
    # InterfaceManager on the Slave
    # eg:
    driver = something
    mtu = something
    ips = [IpAddress, ...]
class Job: #ProcessAPI? name can change...
    #attributes:

    # True if the Job finished, False if it's still running in the background
    finished = bool

    # contains the result data returned by the Job, None for bash commands
    result = object

    # contain the stdout and stderr generated by the Job, None for Module Jobs
    stdout = ""
    stderr = ""

    # simple True/False value indicating success/failure of the Job
    passed = bool

    def wait(timeout=0):
        # for background jobs, will wait until the job finishes
        # "timeout" in seconds, determines how long to wait. After the timeout
        # is reached nothing happens, and the status of the job can be checked
        # with the "finished" attribute. If timeout=0, then wait forever.

    def kill(signalnum=signal.SIGKILL):
        # sends the specified signal to the process of the Job running in
        # the background
        # "signalnum" the signal to be sent
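Putting the Host and Job pieces together, a test() body could look roughly like the sketch below. It follows the names used in this spec (HostSel, matched, devs, lnst.modules.Ping), but the specific module arguments and device attributes are only examples, not final API:

    # Hedged usage sketch of the run()/Job API described above.
    import lnst

    class ExampleTask(lnst.BaseTask):
        m1 = HostSel()
        m2 = HostSel()
        m2.if1 = IfaceSel()

        def test(self):
            m1 = self.matched.m1
            m2 = self.matched.m2

            # foreground job: blocks until finished (or the default 60 s timeout)
            job = m1.run("uname -r")
            if job.passed:
                print(job.stdout)

            # background job: run() returns immediately with a Job handle
            ping = lnst.modules.Ping(dst=m2.devs.eth0.ips[0], count=100, interval=0.1)
            bg_job = m1.run(ping, bg=True)

            # ... configure devices / generate traffic while the ping runs ...

            bg_job.wait(timeout=30)
            if not bg_job.finished:
                bg_job.kill()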
Update after today's meeting; it contains some changes and some notes to think about. It's probably best to just diff it against the previous version...
===============================================================================
1. test modules
class BaseTestModule:
    def __init__(self, **kwargs):
        #by default loads the params into self.params - no checks
        #pseudocode:
        for x in vars(self):
            if isinstance(x, BaseType):
                param_class = self.getattr(x)
                try:
                    val = kwargs[x]
                except KeyError:
                    if param_class.is_mandatory():
                        raise TestModuleError("Option x is mandatory")
                self.setattr(x.params, param_class.construct(val))
                del kwargs[x]

        for x in kwargs.keys():
            log.error("Undefined parameter x")
        if len(kwargs):
            raise TestModuleError("Undefined TestModule parameters")

    def run():
        #needs to be overridden - throw an exception to notify the test developer

class MyTest(BaseTestModule):
    param = ParamType(mandatory=True)
    param2 = ParamType2()
    param3 = Multiparam(ParamType())

    #optional __init__
    #def __init__(self, **kwargs):
    #    super(MyTest).__init__(kwargs)
    #    #additional tester defined checks

    def run():
        #do my test
        #parameters available in self.params

#in Task:
import lnst
#module lnst.tests will dynamically look for module classes in configured
#locations, similar to how we do it now
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
================================================
2. Tasks:
class BaseTask(object):
    def __init__(self):
        #initialize instance specific requirements
        self.requirements = Requirements()
        for x in dir(self):
            val = getattr(self, x)
            if isinstance(val, Requirement):
                setattr(self.requirements, x, val)

        self.params = object()
        for x in dir(self):
            val = getattr(self, x)
            if isinstance(val, ParamType):
                setattr(self.params, x, val)

    def test():
        raise Exception("Method test MUST be defined.")

class MyTask(lnst.BaseTask):
    #class-wide definition of requirements
    m1 = HostReq(arch="x86_64", ...)
    m1.if1 = IfaceReq(label="xyz", driver="mlx", ...)

    #m1.params = IfaceReq()
    #m1.devs = IfaceReq()
    #raise Exception("param/devs name is a reserved keyword")

    m2 = HostReq(param="val", ...)
    m2.if1 = IfaceReq(label="xyz", param="val", ...)

    mtu = IntType(default=1500)

    def __init__(self, **kwargs):
        super(self, lnst.BaseTask).__init__()

        #do something with kwargs
        interface_driver = kwargs["driver"]
        self.reqs.m1.if1.driver = interface_driver
        #adjust instance specific requirements
        self.reqs.m3 = HostReq(...)

    def test():
        self.matched.m1.run(Module)
        self.matched.m1.run("command")

    #or

    def test(m1, m2, m3, m4, ...):
        m1.run(Module)
        m2.run("command")
        m1.params.arch == "x86_64"

    #incorrect...
    def test(m2, test_machine1)
================================================
3. Running Tasks:
my_test_script.py:
    #!/bin/python
    from MyTasks import MyTask
    from lnst import Controller

    task_instance = MyTask(mtu="5000")

    ctl = Controller(config="/etc/lnst-ctl.conf")
    ctl.run(task_instance)

chmod +x my_test_script.py
./my_test_script.py args
OR
lnst-ctl -d run MyTask.py -- mtu=8000
# looks for the NAME class in the NAME.py file (MyTask in this case), for which
# the condition "isinstance(NAME, BaseTask)" must be True

# could also run all classes in the file for which "isinstance(x, BaseTask)" is
# True, with the option to restrict this to a specific task class (or just run
# the first one?)... lnst-ctl rewritten to do the same as manually running the
# task from its own python script

Aliases lose meaning - they become parameters passed to the MyTask __init__. When using the lnst-ctl CLI, use "-- task_params"?? This might not work for multiple tasks.
================================================
4. Tester facing API, inside the test() method:
#!!!! breakpoints!!!
Host objects available in self.matched.selector_name:
class Host: #HostAPI??? name can change
    #attributes:

    # dynamically filled object of Host attributes such as architecture and
    # so on. A use example in test() would look like this:
    #     if host.params.arch == "x86":
    # I separated this into the "params" object so I can overwrite its
    # __getattr__ method and return None/UnknownParam exception for unknown
    # parameters, and to avoid name conflicts with other attributes
    # they should also be iterable by:
    #     for param in host.params:
    params = object()

    # dynamically filled object of NetDevice objects accessible directly as the
    # object attributes:
    #     host.eth0.set_ip(...)
    # I separated this into the "ifaces" object to avoid name conflicts with
    # other attributes
    # creation of new NetDevices should be possible through simple assignment:
    #     m1.team0 = TeamDevice(...)
    # assignment of an incompatible Type or to an existing Device object will
    # raise an exception
    # to deconfigure+remove call m1.team0.remove()
    __devs = object()

    # device iterator for the Host object:
    #     for dev in host:
    def run(what, bg=False, fail=False, timeout=60, path="", json=False, netns)
        # will run "what" on the remote host
        # "what" is either a Module object, or a string command that will be
        #     executed as a bash command
        # "bg" when True, runs "what" in the background - the run() call
        #     immediately returns, "timeout" is ignored, and the background
        #     process can be controlled through the returned Job object
        # "fail" if True then the Job is expected to fail, and will be reported
        #     as PASSed if it does
        # "timeout" in seconds, determines how long to block test execution
        #     before killing the Job. Only used when running in the foreground
        # "path" changes the current working directory to the specified path
        #     before "what" is executed and changes back after execution is
        #     finished.
        #     IGNORE FOR NOW
        # "tool" changes the current working directory to the directory of a
        #     specified test_tool before "what" is executed and changes back
        #     after execution is finished.
        #     !!!!!!! this is from the current API and I'm not yet sure how we
        #     !!!!!!! want to handle those... so for now I'll keep it
        #     IGNORE FOR NOW
        # "json" if True will attempt to parse the returned stdout of the Job
        #     as json into a dictionary
        # "netns" the Job will be run in the specified network namespace
        # Returns a Job object
        # !! Exceptions?!?!

    # !!!THINK ABOUT THESE...
    def __set(path, value)
        # copied from old API, provides a shortcut for
        #     "echo $value > /proc/or/sys/path"
        # and returns the original value when the test is finished

    def sysfs_set(path, value):
        # check if path starts with "/sys"?
        self.__set(path, value)

    def procfs_set(path, value):
        self.__set("/proc/"+path, value)

    def send_file(ctl_path="", slave_path="", recursive=False)
        # copies the specified file from the controller to the specified
        # destination path; if recursive == True and srcpath refers to a
        # directory it copies the entire directory

    def recv_file(recv_path="", ctl_path="", recursive=False)

    def {enable, disable}_service(service)
        # copied from old API, enables or disables the specified service

    def del_device(name)
        # removes the specified device, probably easier (more logical?) to do
        # this than "devs.name = None", and "del devs.name" would be unreliable
        # !!! REMOVE
class Device: #DeviceAPI, InterfaceAPI? name can change...
    # attributes:

    # dynamically created Device attributes such as driver and so on. A use
    # example in test() would look like this:
    #     if host.devs.eth0.driver == "ixgbe":
    # achieved through rewriting the __getattr__ method of the Device class;
    # should return None or throw an UnknownParam exception for unknown parameters
    # this should directly mirror the Device objects that are managed by the
    # InterfaceManager on the Slave
    # !!! predefine the params, assignment will set them, sync on set - wait
    # !!! on the slave for the set to finish and return the current status of
    # !!! everything else as well (see the sketch below)
    # eg:
    driver = something
    mtu = something
    ips = [IpAddress, ...]
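A rough, self-contained sketch of the "sync on set" note above; the FakeSlave class stands in for the RPC connection to the slave's InterfaceManager and everything here is purely illustrative:

    # Illustrative only: a controller-side Device proxy mirroring the slave state.
    class FakeSlave(object):
        """Stands in for the RPC connection to the slave's InterfaceManager."""
        def __init__(self):
            self.state = {"driver": "ixgbe", "mtu": 1500, "ips": []}

        def query(self, name):
            return self.state[name]

        def set(self, name, value):
            self.state[name] = value
            return dict(self.state)          # slave reports back the full refreshed state

    class Device(object):
        _params = ("driver", "mtu", "ips")   # predefined mirrored attributes

        def __init__(self, slave):
            self.__dict__["_slave"] = slave

        def __getattr__(self, name):
            if name in self._params:
                return self._slave.query(name)   # round-trip to the slave
            raise AttributeError(name)           # or return None / UnknownParam

        def __setattr__(self, name, value):
            if name in self._params:
                # "sync on set": push the change to the slave, block until it
                # confirms, and receive the current state of everything else too
                self.__dict__["_state"] = self._slave.set(name, value)
            else:
                self.__dict__[name] = value

    eth0 = Device(FakeSlave())
    eth0.mtu = 9000
    print(eth0.mtu, eth0.driver)             # the mtu change went through the slave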
class Job: #ProcessAPI? name can change...
    #attributes:

    # True if the Job finished, False if it's still running in the background
    finished = bool

    # contains the result data returned by the Job, None for bash commands
    result = object

    # contain the stdout and stderr generated by the Job, None for Module Jobs
    stdout = ""
    stderr = ""

    # simple True/False value indicating success/failure of the Job
    passed = bool

    def wait(timeout=0):
        # for background jobs, will wait until the job finishes
        # "timeout" in seconds, determines how long to wait. After the timeout
        # is reached nothing happens, and the status of the job can be checked
        # with the "finished" attribute. If timeout=0, then wait forever.

    def kill(signalnum=signal.SIGKILL):
        # sends the specified signal to the process of the Job running in
        # the background
        # "signalnum" the signal to be sent

    def __str__(self):
        # for easy print m1.run(what)
        return stdout+stderr
Tue, Nov 29, 2016 at 04:45:41PM CET, olichtne@redhat.com wrote:
Update after todays meeting, contains some changes and some notes to think about, probably best to just diff it against the previous version...
I've been a bit pulled away from LNST lately, so please take that into consideration while reading my comments :) I'll try not to look too dumb :)
===============================================================================
- test modules
class BaseTestModule: def __init__(self, **kwargs): #by defaults loads the params into self.params - no checks pseudocode: for x in vars(self): if isinstance(x, BaseType): param_class = self.getattr(x) try: val = kwargs[x] except KeyError: if param_class.is_mandatory(): raise TestModuleError("Option x is mandatory") self.setattr(x.params, param_class.construct(val)) del kwargs[x] for x in kwargs.keys(): log.error("Undefined parameter x") if len(kwargs): raise TestModuleError("Undefined TestModule parameters")
def run(): #needs to be over-ridden - throw an exception to notify the test developer
class MyTest(BaseTestModule): param = ParamType(mandatory=True) param2 = ParamType2() param3 = Multiparam(ParamType())
#optional __init__ #def __init__(self, **kwargs): #super(MyTest).__init__(kwargs) #additional tester defined checks
def run(): #do my test #parameters available in self.params
#in Task: import lnst #module lnst.tests will dynamically look for module classes in configured #locations, similar to how we do it now
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
I would still like to have a one-shot possibility. Most of the time this is a two-phase thing: you create a test module instance and then you call run. I would like to have the possibility to just run. Just a thought...
================================================
- Tasks:
class BaseTask(object): def __init__(self): #initialize instance specific requirements self.requirements = Requirements() for x in dir(self): val = getattr(self, x) if instance(val, Requirement): setattr(self.requirements, x, val)
self.params = object() for x in dir(self): val = getattr(self, x) if instance(val, ParamType): setattr(self.params, x, val)
def test(): raise Exception("Method test MUST be defined.")
class MyTask(lnst.BaseTask): #class-wide definition of requirements m1 = HostReq(arch="x86_64", ...) m1.if1 = IfaceReq(label="xyz", driver="mlx", ...)
#m1.params = IfaceReq() #m1.devs = IfaceReq() #raise Exception("param/devs name is a reserved keyword")
m2 = HostReq(param="val", ...) m2.if1 = IfaceReq(label="xyz", param="val", ...)
mtu = IntType(default=1500)
So I understand this IntType and the "ParamType*" classes used in the test module are the same? If not, I think they should be, if possible.
def __init__(self, **kwargs): super(self, lnst.BaseTask).__init__()
#do something with kwargs interface_driver = kwargs["driver"] self.reqs.m1.if1.driver = interface_driver #adjust instance specific requirements self.reqs.m3 = HostReq(...)
def test(): self.matched.m1.run(Module) self.matched.m1.run("command")
Hmm, thinking about it, I probably like this first option better. Any thoughts about this?
#or def test(m1, m2, m3, m4,...):
Yeah, looks a bit messy...
m1.run(Module) m2.run("command") m1.params.arch == "x86_64"
#incorrect... def test(m2, test_machine1)
================================================
- Running Tasks:
my_test_script.py: #!/bin/python from MyTasks import MyTask from lnst import Controller
task_instance = MyTask(mtu="5000")
ctl = Controller(config="/etc/lnst-ctl.conf") ctl.run(task_instance)
chmod +x my_test_script.py ./my_test_script.py args
Hmm, who parses the args? Shouldn't you pass them to Controller or MyTask?
I have a similar concern here as for the test modules; you always do 2 steps:
    ctl = Controller(config="/etc/lnst-ctl.conf")
    ctl.run(task_instance)
Can this be done in a nicer way? One shot?
OR
lnst-ctl -d run MyTask.py -- mtu=8000 # looks for NAME class in the NAME.py file (MyTask in this case for which # the condition "isinstance(NAME, BaseTask)" must be True
# could also run for all classes in the file where "isinstance(x, BaseTask)" is # True. with the option to restrict to specific task class (or just run the # first one?)... lnst-ctl rewritten to do the same as manually running the
Not just the first one. Run all.
# task from it's own python script
Aliases lose meaning - they're parameters passed to the MyTask __init__, when using the lnst-ctl CLI, use "-- task_params"?? might not work for multiple tasks,
That is fine. I hated aliases anyway. These are options per task. We can come up with some standard options and define them somewhere, so they don't get re-invented for every new task.
================================================
- Tester facing API, inside the test() method:
#!!!! breakpoints!!!
Host objects available in self.matched.selector_name:
class Host: #HostAPI??? name can change
No "API" in class name please! That is just horrible.
#attributes:
# dynamically filled object of Host attributes such as architecture and # so on. Use example in test() would look like this: # if host.params.arch == "x86":
Calling this a "param" sounds odd...
# I separated this into the "params" object so I can overwrite its # __getattr__ method and return None/UnknownParam exception for unknown # parameters, and to avoid name conflicts with other attributes # they should also be iterable by: # for param in host.params: params = object()
# dynamically filled object of NetDevice objects accessible directly as the # object attributes: # host.eth0.set_ip(...)
I think we need to make a list of all possible setters and getters and think about how to unify their names and behaviour.
# I separated this into the "ifaces" object to avoid name conflicts with # other attributes # creation of new NetDevices should be possible through simple assignement: # m1.team0 = TeamDevice(...) # assignement of an incompatible Type or to an existing Device object will # return an exception # to deconfigure+remove call m1.team0.remove() __devs = object()
# device iterator for the Host object: # for dev in host:
def run(what, bg=False, fail=False, timeout=60, path="", json=False, netns) # will run "what" on the remote host # "what" is either a Module object, or a string command that will be # executed as a bash command # "bg" when True, runs "what" on background - the run() call # immediately returns, and "timeout" is ignored, the background # process can be controlled through the returned Job object # "fail" if True then the Job is expected to fail, and will be reported # as PASSed if it does # "timeout" in seconds, determines how long to block test execution for # before killing the Job. Only when running in foreground # "path" changes the current working directory to the specified path # before "what" is executed and changes back after execution is # finished. # IGNORE FOR NOW # "tool" changes the current working directory to the directory of a # speficied test_tool before "what" is executed and changes back # after execution is finished. # !!!!!!! this is from the current API and i'm not yet sure how we # !!!!!!! want to handle those... so for now I'll keep it # IGNORE FOR NOW
Yeah, just remove this weirdness...
# "json" if True will attempt to parse the returned stdout of the Job # as json into a dictionary # "netns" Job will be run in the specified network namespace # Returns a Job object # !! Exceptions?!?! # !!!THINK ABOUT THESE...
Any news in this area?
def __set(path, value) # copied from old API, provides a shortcut for "echo $value # >/proc/or/sys/path" # and returns the original value when the test is finished
def sysfs_set(path, value): # check if path starts with "/sys"? self.__set(path, value)
def procfs_set(path, value): self.__set("/proc/"+path, value)
def send_file(ctl_path="", slave_path="", recursive=False) # copies the specified file from the controller to the specified # destination path, if recursive == True and srcpath refers to a # directory it copies the entire directory
def recv_file(recv_path="", ctl_path="", recursive=False)
def {enable, disable}_service(service) # copied from old API, enables or disables the specified service
def del_device(name) # removes the specified device, probably easier (more logical?) to do # this then "devs.name = None" and "del devs.name" would be unreliable # !!! REMOVE
class Device: #DeviceAPI, InterfaceAPI? name can change... # attributes:
# dynamically created Device attributes such as driver and so on. Use # example in test() would look like this: # if host.devs.eth0.driver == "ixgbe": # achieved through rewriting of the __getattr__ method of the Device class # should return None or throw UnknownParam exception for unknown parameters # this should directly mirror the Device objects that are managed by the # InterfaceManager on the Slave # !!! predefine the params, assignment will set them, sync on set - wait # !!! on the slave for the set to finish and return the current status of # !!! everything else as well # eg: driver = something mtu = something ips = [IpAddress, ...]
class Job: #ProcessAPI? name can change... #attributes:
# True if the Job finished, False if it's still running in the background finished = bool
# contains the result data returned by the Job, None for bash commands result = object
# contain the stdout and stderr generated by the job, None for Module Jobs stdout = "" stderr = ""
# simple True/False value indicating success/failure of the Job passed = bool
def wait(timeout=0): # for background jobs, will wait until the job finished # "timeout" in seconds, determines how long to wait for. After timeout # reached, nothing happens, status of the job can be checked with the # "finished" attribute. If timeout=0, then wait forever.
def kill(signalnum=signal.SIGKILL): # sends the specified signal to the process of the Job running in # background # "signalnum" the signal to be sent
def __str__(self): # for easy print m1.run(what) return stdout+stderr
Thanks for taking care of this!
On Sat, Jan 07, 2017 at 12:48:22PM +0100, Jiri Pirko wrote:
Tue, Nov 29, 2016 at 04:45:41PM CET, olichtne@redhat.com wrote:
Update after todays meeting, contains some changes and some notes to think about, probably best to just diff it against the previous version...
I'm a bit stolen from LNST lately, so please take that into consideration while reading my comments :) I'll try to look not that dumb :)
Hi, so I kind of went ahead with a prototype implementation of the current version of this spec document (so we can try it out and see if anything needs to be changed). I wanted to send a "dirty" patch before Christmas so everyone could take a look at what it looks like, but decided against it. I decided to rework the internals of launching jobs/processes on the slave so it fits more nicely with the proposed API, and it was still in a broken state before my PTO.
I'll send a full status update on Friday, possibly with a new version of the spec doc (based on provided input/what I've found during implementation) and possibly with a "dirty" patch for experimentation.
===============================================================================
- test modules
class BaseTestModule: def __init__(self, **kwargs): #by defaults loads the params into self.params - no checks pseudocode: for x in vars(self): if isinstance(x, BaseType): param_class = self.getattr(x) try: val = kwargs[x] except KeyError: if param_class.is_mandatory(): raise TestModuleError("Option x is mandatory") self.setattr(x.params, param_class.construct(val)) del kwargs[x] for x in kwargs.keys(): log.error("Undefined parameter x") if len(kwargs): raise TestModuleError("Undefined TestModule parameters")
def run(): #needs to be over-ridden - throw an exception to notify the test developer
class MyTest(BaseTestModule): param = ParamType(mandatory=True) param2 = ParamType2() param3 = Multiparam(ParamType())
#optional __init__ #def __init__(self, **kwargs): #super(MyTest).__init__(kwargs) #additional tester defined checks
def run(): #do my test #parameters available in self.params
#in Task: import lnst #module lnst.tests will dynamically look for module classes in configured #locations, similar to how we do it now
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
I would still like to have one-shot possibility. Most of the time, this is 2 phase thing. You create test module instance and you call run. I would like to have possibility to just run. Just a thought...
It is of course possible to skip the first assignment, though I'm not sure that's an "elegant" approach:
m1.run(lnst.modules.Ping(some_parameters))
or if you do this:
from lnst.modules import Ping
#then you can just:
m1.run(Ping(some_parameters))
I'm not sure there's a better way, I'll try to think about it, I'm also open to ideas :).
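If a one-shot form turns out to be wanted often, a small convenience helper could be added later; the following is purely hypothetical and not part of the proposed API:

    # Purely hypothetical convenience wrapper, not part of the proposed API.
    def run_module(host, module_cls, **params):
        """Instantiate a test module and run it on 'host' in one call."""
        return host.run(module_cls(**params))

    # usage:
    #     job = run_module(m1, lnst.modules.Ping, dst=m2.if1.ip[0], count=100)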
================================================
- Tasks:
class BaseTask(object): def __init__(self): #initialize instance specific requirements self.requirements = Requirements() for x in dir(self): val = getattr(self, x) if instance(val, Requirement): setattr(self.requirements, x, val)
self.params = object() for x in dir(self): val = getattr(self, x) if instance(val, ParamType): setattr(self.params, x, val)
def test(): raise Exception("Method test MUST be defined.")
class MyTask(lnst.BaseTask): #class-wide definition of requirements m1 = HostReq(arch="x86_64", ...) m1.if1 = IfaceReq(label="xyz", driver="mlx", ...)
#m1.params = IfaceReq() #m1.devs = IfaceReq() #raise Exception("param/devs name is a reserved keyword")
m2 = HostReq(param="val", ...) m2.if1 = IfaceReq(label="xyz", param="val", ...)
mtu = IntType(default=1500)
So I understand this IntType and the "ParamType*" used in the test module are the same classes? If not, I think they should, if possible.
The same class tree, yes, that's the idea. As far as I know it shouldn't be a problem, unless I hit some unforeseen issue during implementation.
def __init__(self, **kwargs): super(self, lnst.BaseTask).__init__()
#do something with kwargs interface_driver = kwargs["driver"] self.reqs.m1.if1.driver = interface_driver #adjust instance specific requirements self.reqs.m3 = HostReq(...)
def test(): self.matched.m1.run(Module) self.matched.m1.run("command")
Hmm, thinking about it, I probably like this first option better. Any thoughts about this?
#or def test(m1, m2, m3, m4,...):
Yeah, looks a bit messy...
We discussed this with you at a meeting preceding this version of the spec doc and decided to keep it like this for now. In my current prototype implementation I have just the first one; adding the second one should be easy, but we can leave it for now and decide to add it later (it's easier to add to an API than to remove from it).
m1.run(Module) m2.run("command") m1.params.arch == "x86_64"
#incorrect... def test(m2, test_machine1)
================================================
- Running Tasks:
my_test_script.py: #!/bin/python from MyTasks import MyTask from lnst import Controller
task_instance = MyTask(mtu="5000")
ctl = Controller(config="/etc/lnst-ctl.conf") ctl.run(task_instance)
chmod +x my_test_script.py ./my_test_script.py args
Hmm, who parses the args? Shouldn't you pass them to Controller or MyTask?
It's named MY_test_script for a reason... YOU parse your own arguments in whatever way you want to parse them. I think in this case LNST should simply act as a library - which is why you have to create a Controller and call run() on it yourself. I'd say that opens a lot of possibilities for how you create simple or more complex test scripts - note that I said "test scripts", not a "recipe" or anything along those lines.
I have similar concern here as for the test modules you always do 2 step: ctl = Controller(config="/etc/lnst-ctl.conf") ctl.run(task_instance)
Can this be done in a nicer way? One shot?
Not sure; as I've said, I want the LNST Controller to be usable as a library. I think for one-shots the lnst-ctl executable is a more fitting way of starting a test.
OR
lnst-ctl -d run MyTask.py -- mtu=8000 # looks for NAME class in the NAME.py file (MyTask in this case for which # the condition "isinstance(NAME, BaseTask)" must be True
# could also run for all classes in the file where "isinstance(x, BaseTask)" is # True. with the option to restrict to specific task class (or just run the # first one?)... lnst-ctl rewritten to do the same as manually running the
Not just the first one. Run all.
The problem with that is how to pass parameters to those classes. I can find all the Task classes and I can create instances of all of them, but what would the lnst-ctl CLI interface look like for the parameters used to create those Tasks?
One option is to just run everything with default parameters (but then there's a problem with parameters that don't have a default value). And if the user wants to specify parameters then he'll have to select one specific Task (by a class name) and then he'll be able to set parameters for that one class.
Does that make sense or do we want something else?
# task from it's own python script
Aliases lose meaning - they're parameters passed to the MyTask __init__, when using the lnst-ctl CLI, use "-- task_params"?? might not work for multiple tasks,
That is fine. I hated aliases anyway. These are options per task. We can comeup with same standard options and define them somewhere, so they doen't get re-invented for every new task.
================================================
- Tester facing API, inside the test() method:
#!!!! breakpoints!!!
Host objects available in self.matched.selector_name:
class Host: #HostAPI??? name can change
No "API" in class name please! That is just horrible.
In my "prototype" implementation I have some API names here and there to distinguish them from the internal objects, but I agree - I'll work on renaming them...
#attributes:
# dynamically filled object of Host attributes such as architecture and # so on. Use example in test() would look like this: # if host.params.arch == "x86":
Calling this a "param" sounds odd...
"description" comes to mind since these are just values that describe the matched host.
How does "host.description.arch", "host.desc.arch", or some other variation of "description" sound?
# I separated this into the "params" object so I can overwrite its # __getattr__ method and return None/UnknownParam exception for unknown # parameters, and to avoid name conflicts with other attributes # they should also be iterable by: # for param in host.params: params = object()
# dynamically filled object of NetDevice objects accessible directly as the # object attributes: # host.eth0.set_ip(...)
I think we need to do a list of any possible setter and getter, think about how to unify the name and behaviour of most.
# I separated this into the "ifaces" object to avoid name conflicts with # other attributes # creation of new NetDevices should be possible through simple assignement: # m1.team0 = TeamDevice(...) # assignement of an incompatible Type or to an existing Device object will # return an exception # to deconfigure+remove call m1.team0.remove() __devs = object()
# device iterator for the Host object: # for dev in host:
def run(what, bg=False, fail=False, timeout=60, path="", json=False, netns) # will run "what" on the remote host # "what" is either a Module object, or a string command that will be # executed as a bash command # "bg" when True, runs "what" on background - the run() call # immediately returns, and "timeout" is ignored, the background # process can be controlled through the returned Job object # "fail" if True then the Job is expected to fail, and will be reported # as PASSed if it does # "timeout" in seconds, determines how long to block test execution for # before killing the Job. Only when running in foreground # "path" changes the current working directory to the specified path # before "what" is executed and changes back after execution is # finished. # IGNORE FOR NOW # "tool" changes the current working directory to the directory of a # speficied test_tool before "what" is executed and changes back # after execution is finished. # !!!!!!! this is from the current API and i'm not yet sure how we # !!!!!!! want to handle those... so for now I'll keep it # IGNORE FOR NOW
Yeah, just remove this weirdness...
ack, this could be replaced with the file transfer methods and then working with the transferred file through normal run() means (e.g. passing the path as an argument to a module)
# "json" if True will attempt to parse the returned stdout of the Job # as json into a dictionary # "netns" Job will be run in the specified network namespace # Returns a Job object # !! Exceptions?!?! # !!!THINK ABOUT THESE...
Any news in this area?
the "think about these" is AFAIK for the sysfs_set and procfs_set methods, I haven't looked at those yet... However I'm 100% sure I don't want them to work as a special case of a "run" command like they did until now (both in the API and internally)
def __set(path, value) # copied from old API, provides a shortcut for "echo $value # >/proc/or/sys/path" # and returns the original value when the test is finished
def sysfs_set(path, value): # check if path starts with "/sys"? self.__set(path, value)
def procfs_set(path, value): self.__set("/proc/"+path, value)
def send_file(ctl_path="", slave_path="", recursive=False) # copies the specified file from the controller to the specified # destination path, if recursive == True and srcpath refers to a # directory it copies the entire directory
def recv_file(recv_path="", ctl_path="", recursive=False)
def {enable, disable}_service(service) # copied from old API, enables or disables the specified service
def del_device(name) # removes the specified device, probably easier (more logical?) to do # this then "devs.name = None" and "del devs.name" would be unreliable # !!! REMOVE
class Device: #DeviceAPI, InterfaceAPI? name can change... # attributes:
# dynamically created Device attributes such as driver and so on. Use # example in test() would look like this: # if host.devs.eth0.driver == "ixgbe": # achieved through rewriting of the __getattr__ method of the Device class # should return None or throw UnknownParam exception for unknown parameters # this should directly mirror the Device objects that are managed by the # InterfaceManager on the Slave # !!! predefine the params, assignment will set them, sync on set - wait # !!! on the slave for the set to finish and return the current status of # !!! everything else as well # eg: driver = something mtu = something ips = [IpAddress, ...]
class Job: #ProcessAPI? name can change... #attributes:
# True if the Job finished, False if it's still running in the background finished = bool
# contains the result data returned by the Job, None for bash commands result = object
# contain the stdout and stderr generated by the job, None for Module Jobs stdout = "" stderr = ""
# simple True/False value indicating success/failure of the Job passed = bool
def wait(timeout=0): # for background jobs, will wait until the job finished # "timeout" in seconds, determines how long to wait for. After timeout # reached, nothing happens, status of the job can be checked with the # "finished" attribute. If timeout=0, then wait forever.
def kill(signalnum=signal.SIGKILL): # sends the specified signal to the process of the Job running in # background # "signalnum" the signal to be sent
def __str__(self): # for easy print m1.run(what) return stdout+stderr
Thanks for taking care of this!
Mon, Jan 09, 2017 at 11:48:06AM CET, olichtne@redhat.com wrote:
On Sat, Jan 07, 2017 at 12:48:22PM +0100, Jiri Pirko wrote:
Tue, Nov 29, 2016 at 04:45:41PM CET, olichtne@redhat.com wrote:
Update after todays meeting, contains some changes and some notes to think about, probably best to just diff it against the previous version...
I'm a bit stolen from LNST lately, so please take that into consideration while reading my comments :) I'll try to look not that dumb :)
Hi, so I kind of went ahead with a prototype implemtation for the current version of this spec document (so we can try it out and see if anything needs to be changed). I wanted to send a "dirty" patch before christmas, so everyone could take a look at how it looks like, but decided against it. I decided to rework the internals of launching jobs/processes on the slave so it fits more nicely with the proposed API and it was still in a broken state before my PTO.
I'll send a full status update on Friday, possibly with a new version of the spec doc (based on provided input/what I've found during implementation) and possibly with a "dirty" patch for experimentation.
I would definitely be interested in a demo. Always better to actually see the thing :)
===============================================================================
- test modules
class BaseTestModule: def __init__(self, **kwargs): #by defaults loads the params into self.params - no checks pseudocode: for x in vars(self): if isinstance(x, BaseType): param_class = self.getattr(x) try: val = kwargs[x] except KeyError: if param_class.is_mandatory(): raise TestModuleError("Option x is mandatory") self.setattr(x.params, param_class.construct(val)) del kwargs[x] for x in kwargs.keys(): log.error("Undefined parameter x") if len(kwargs): raise TestModuleError("Undefined TestModule parameters")
def run(): #needs to be over-ridden - throw an exception to notify the test developer
class MyTest(BaseTestModule): param = ParamType(mandatory=True) param2 = ParamType2() param3 = Multiparam(ParamType())
#optional __init__ #def __init__(self, **kwargs): #super(MyTest).__init__(kwargs) #additional tester defined checks
def run(): #do my test #parameters available in self.params
#in Task: import lnst #module lnst.tests will dynamically look for module classes in configured #locations, similar to how we do it now
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
I would still like to have one-shot possibility. Most of the time, this is 2 phase thing. You create test module instance and you call run. I would like to have possibility to just run. Just a thought...
It is of course possible to skip the first assignment, though I'm not sure that's an "elegant" approach:
m1.run(lnst.modules.Ping(some_parameters))
or if you do this:
from lnst.modules import Ping
#then you can just: m1.run(Ping(some_parameters))
I'm not sure there's a better way, I'll try to think about it, I'm also open to ideas :).
================================================
- Tasks:
class BaseTask(object): def __init__(self): #initialize instance specific requirements self.requirements = Requirements() for x in dir(self): val = getattr(self, x) if instance(val, Requirement): setattr(self.requirements, x, val)
self.params = object() for x in dir(self): val = getattr(self, x) if instance(val, ParamType): setattr(self.params, x, val)
def test(): raise Exception("Method test MUST be defined.")
class MyTask(lnst.BaseTask): #class-wide definition of requirements m1 = HostReq(arch="x86_64", ...) m1.if1 = IfaceReq(label="xyz", driver="mlx", ...)
#m1.params = IfaceReq() #m1.devs = IfaceReq() #raise Exception("param/devs name is a reserved keyword")
m2 = HostReq(param="val", ...) m2.if1 = IfaceReq(label="xyz", param="val", ...)
mtu = IntType(default=1500)
So I understand this IntType and the "ParamType*" used in the test module are the same classes? If not, I think they should, if possible.
The same class tree yes, that's the idea. As far as I know it shouldn't be a problem, unless I hit some unforseen issue during implementation.
def __init__(self, **kwargs): super(self, lnst.BaseTask).__init__()
#do something with kwargs interface_driver = kwargs["driver"] self.reqs.m1.if1.driver = interface_driver #adjust instance specific requirements self.reqs.m3 = HostReq(...)
def test(): self.matched.m1.run(Module) self.matched.m1.run("command")
Hmm, thinking about it, I probably like this first option better. Any thoughts about this?
#or def test(m1, m2, m3, m4,...):
Yeah, looks a bit messy...
We've disscussed this with you on a meeting preceding this version of the spec doc. We decided to keep it like this for now, in my current prototype implementation I have just the first one, adding the second one should be easy but we can leave it for now and decide to add it later (it's easier to add to an API that to remove from it).
m1.run(Module) m2.run("command") m1.params.arch == "x86_64"
#incorrect... def test(m2, test_machine1)
================================================
- Running Tasks:
my_test_script.py: #!/bin/python from MyTasks import MyTask from lnst import Controller
task_instance = MyTask(mtu="5000")
ctl = Controller(config="/etc/lnst-ctl.conf") ctl.run(task_instance)
chmod +x my_test_script.py ./my_test_script.py args
Hmm, who parses the args? Shouldn't you pass them to Controller or MyTask?
it's named MY_test_script for a reason... YOU parse your own arguments in however way you want to parse them. I think in this case LNST should simply act as a library - which is why you have to create a Controller and call run() on it yourself I'd say that opens a lot of possibilities to how you create simple or more complex test scripts - note that I said "test scripts" not a "recipe" or anything along those lines.
I have similar concern here as for the test modules you always do 2 step: ctl = Controller(config="/etc/lnst-ctl.conf") ctl.run(task_instance)
Can this be done in a nicer way? One shot?
not sure, like I've said, I want LNST Controller to be usable as a library. I think for One shots the lnst-ctl executable is a more fitting way of starting a test.
OR
lnst-ctl -d run MyTask.py -- mtu=8000 # looks for NAME class in the NAME.py file (MyTask in this case for which # the condition "isinstance(NAME, BaseTask)" must be True
# could also run for all classes in the file where "isinstance(x, BaseTask)" is # True. with the option to restrict to specific task class (or just run the # first one?)... lnst-ctl rewritten to do the same as manually running the
Not just the first one. Run all.
The problem with that is how to pass parameters to those classes? I can find all the Task classes, I can create instances for all of them, but how would the lnst-ctl CLI interface look like for parameters I use to create those Tasks?
One option is to just run everything with default parameters (but then there's a problem with parameters that don't have a default value). And if the user wants to specify parameters then he'll have to select one specific Task (by a class name) and then he'll be able to set parameters for that one class.
Does that make sense or do we want something else?
# task from it's own python script
Aliases lose meaning - they're parameters passed to the MyTask __init__, when using the lnst-ctl CLI, use "-- task_params"?? might not work for multiple tasks,
That is fine. I hated aliases anyway. These are options per task. We can comeup with same standard options and define them somewhere, so they doen't get re-invented for every new task.
================================================
- Tester facing API, inside the test() method:
#!!!! breakpoints!!!
Host objects available in self.matched.selector_name:
class Host: #HostAPI??? name can change
No "API" in class name please! That is just horrible.
In my "prototype" implementation I think I have some API names here and there to distinguish them from the internal objects but I think I agree, I'll work on renaming them...
#attributes:
# dynamically filled object of Host attributes such as architecture and # so on. Use example in test() would look like this: # if host.params.arch == "x86":
Calling this a "param" sounds odd...
"description" comes to mind since these are just values that describe the matched host.
How does "host.description.arch", "host.desc.arch", or other variation of the "description" sound?
# I separated this into the "params" object so I can overwrite its # __getattr__ method and return None/UnknownParam exception for unknown # parameters, and to avoid name conflicts with other attributes # they should also be iterable by: # for param in host.params: params = object()
# dynamically filled object of NetDevice objects accessible directly as the # object attributes: # host.eth0.set_ip(...)
I think we need to do a list of any possible setter and getter, think about how to unify the name and behaviour of most.
# I separated this into the "ifaces" object to avoid name conflicts with # other attributes # creation of new NetDevices should be possible through simple assignement: # m1.team0 = TeamDevice(...) # assignement of an incompatible Type or to an existing Device object will # return an exception # to deconfigure+remove call m1.team0.remove() __devs = object()
# device iterator for the Host object: # for dev in host:
def run(what, bg=False, fail=False, timeout=60, path="", json=False, netns) # will run "what" on the remote host # "what" is either a Module object, or a string command that will be # executed as a bash command # "bg" when True, runs "what" on background - the run() call # immediately returns, and "timeout" is ignored, the background # process can be controlled through the returned Job object # "fail" if True then the Job is expected to fail, and will be reported # as PASSed if it does # "timeout" in seconds, determines how long to block test execution for # before killing the Job. Only when running in foreground # "path" changes the current working directory to the specified path # before "what" is executed and changes back after execution is # finished. # IGNORE FOR NOW # "tool" changes the current working directory to the directory of a # speficied test_tool before "what" is executed and changes back # after execution is finished. # !!!!!!! this is from the current API and i'm not yet sure how we # !!!!!!! want to handle those... so for now I'll keep it # IGNORE FOR NOW
Yeah, just remove this weirdness...
ack, this could be replaced with the file transfer methods and then working with the transfered file through normal run() means (e.g. passing the path as an argument to a module)
# "json" if True will attempt to parse the returned stdout of the Job # as json into a dictionary # "netns" Job will be run in the specified network namespace # Returns a Job object # !! Exceptions?!?! # !!!THINK ABOUT THESE...
Any news in this area?
the "think about these" is AFAIK for the sysfs_set and procfs_set methods, I haven't looked at those yet... However I'm 100% sure I don't want them to work as a special cas of a "run" command like it was until now (both API and internally)
def __set(path, value) # copied from old API, provides a shortcut for "echo $value # >/proc/or/sys/path" # and returns the original value when the test is finished
def sysfs_set(path, value): # check if path starts with "/sys"? self.__set(path, value)
def procfs_set(path, value): self.__set("/proc/"+path, value)
def send_file(ctl_path="", slave_path="", recursive=False) # copies the specified file from the controller to the specified # destination path, if recursive == True and srcpath refers to a # directory it copies the entire directory
def recv_file(recv_path="", ctl_path="", recursive=False)
def {enable, disable}_service(service) # copied from old API, enables or disables the specified service
def del_device(name) # removes the specified device, probably easier (more logical?) to do # this then "devs.name = None" and "del devs.name" would be unreliable # !!! REMOVE
class Device: #DeviceAPI, InterfaceAPI? name can change... # attributes:
# dynamically created Device attributes such as driver and so on. Use # example in test() would look like this: # if host.devs.eth0.driver == "ixgbe": # achieved through rewriting of the __getattr__ method of the Device class # should return None or throw UnknownParam exception for unknown parameters # this should directly mirror the Device objects that are managed by the # InterfaceManager on the Slave # !!! predefine the params, assignment will set them, sync on set - wait # !!! on the slave for the set to finish and return the current status of # !!! everything else as well # eg: driver = something mtu = something ips = [IpAddress, ...]
class Job: #ProcessAPI? name can change... #attributes:
# True if the Job finished, False if it's still running in the background finished = bool
# contains the result data returned by the Job, None for bash commands result = object
# contain the stdout and stderr generated by the job, None for Module Jobs stdout = "" stderr = ""
# simple True/False value indicating success/failure of the Job passed = bool
def wait(timeout=0): # for background jobs, will wait until the job finished # "timeout" in seconds, determines how long to wait for. After timeout # reached, nothing happens, status of the job can be checked with the # "finished" attribute. If timeout=0, then wait forever.
def kill(signalnum=signal.SIGKILL): # sends the specified signal to the process of the Job running in # background # "signalnum" the signal to be sent
def __str__(self): # for easy print m1.run(what) return stdout+stderr
Thanks for taking care of this!
Hi everyone,
I've got a new version of the API spec file.
what changed:
* I've renamed some stuff - Task -> Recipe (since the class handles both the
  requirements and the test execution, calling it a Task makes no sense to me)
* commented out the "def test(m1, m2, m3...)" method of the BaseRecipe class;
  based on upstream discussion this might be confusing and hard to implement,
  and we can always decide to add it later (removing it would be more
  problematic)
* added 'Controller' to the Tester facing API - provided by the LNST library to
  enable a tester to use the LNST controller from their own executable script
* Host - added some discussion about the name "params"
       - removed the 'tool' argument from run(), kept the 'path' argument
         because I didn't know if jpirko meant to remove them both or just the
         second one
* Job - expanded the basic description - all Jobs will technically run in the
  background, and fg/bg handling will be done on the Controller
* added a proposal for changing how the Result summary will look
I still haven't started working on the Device API due to reworking how Hosts will be matched and allocated (this is related to properly creating the Device objects...)
Attaching the doc here:
1. test modules
class BaseTestModule: def __init__(self, **kwargs): #by defaults loads the params into self.params - no checks pseudocode: for x in vars(self): if isinstance(x, Param): param_class = self.getattr(x) try: val = kwargs[x] except KeyError: if param_class.is_mandatory(): raise TestModuleError("Option x is mandatory") self.setattr(x.params, param_class.construct(val)) del kwargs[x] for x in kwargs.keys(): log.error("Undefined parameter x") if len(kwargs): raise TestModuleError("Undefined TestModule parameters")
#check mandatory parameters for name, param in self.params: if param.mandatory and not param.set: raise TestModuleException("Parameter {} is mandatory".format(name))
def run(): #needs to be over-ridden - throw an exception to notify the test developer
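To make the pseudocode above concrete, here is a minimal runnable sketch of the kwargs-to-params loading and the mandatory check; the Param class, its interface and storing params in a plain dict are assumptions made just for this illustration:

class TestModuleError(Exception):
    pass

class Param(object):
    def __init__(self, mandatory=False):
        self.mandatory = mandatory

    def construct(self, value):
        # type checking/conversion of the raw value would happen here
        return value

class BaseTestModule(object):
    def __init__(self, **kwargs):
        self.params = {}
        # walk the class attributes and pick up everything that is a Param
        for name in dir(type(self)):
            param = getattr(type(self), name)
            if not isinstance(param, Param):
                continue
            if name in kwargs:
                self.params[name] = param.construct(kwargs.pop(name))
            elif param.mandatory:
                raise TestModuleError("Option %s is mandatory" % name)
        if kwargs:
            raise TestModuleError("Undefined TestModule parameters: %s"
                                  % ", ".join(kwargs.keys()))

    def run(self):
        raise NotImplementedError("run() must be overridden by the test module")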
class MyTest(BaseTestModule): param = ParamType(mandatory=True) param2 = ParamType2() param3 = Multiparam(ParamType())
#optional __init__ #def __init__(self, **kwargs): #super(MyTest, self).__init__(**kwargs) #additional tester defined checks
def run(): #do my test #parameters available in self.params
#all imports used by the test module need to happen HERE, reason is #that the object is instantiated on the Controller and THEN sent to the #slave -> stuff imported on the Controller is not available on the Slave
#in Task: import lnst #module lnst.tests will dynamically look for module classes in configured #locations, similar to how we do it now #or you can import BaseTestModule and define your test module directly in your recipe
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
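The dynamic lookup of test module classes mentioned above could be sketched like this; the search paths and the import location of BaseTestModule are assumptions:

import imp
import os

from lnst import BaseTestModule   # exact import path is an assumption

def find_test_modules(search_paths):
    """Return a {class_name: class} dict of every BaseTestModule subclass
    found in the .py files under the configured directories."""
    found = {}
    for path in search_paths:
        for fname in os.listdir(path):
            if not fname.endswith(".py"):
                continue
            module = imp.load_source(fname[:-3], os.path.join(path, fname))
            for attr_name in dir(module):
                obj = getattr(module, attr_name)
                if (isinstance(obj, type) and issubclass(obj, BaseTestModule)
                        and obj is not BaseTestModule):
                    found[attr_name] = obj
    return found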
================================================
2. Recipe: Renamed Task to Recipe... since the class represents the "whole" LNST workflow of setting requirements and defining the test
class BaseRecipe(object): def __init__(self): #initialize instance specific requirements self.requirements = Requirements() for x in dir(self): val = getattr(self, x) if isinstance(val, Requirement): setattr(self.requirements, x, val)
# check types of parameters, the Param class hierarchy is the same as # the one used by BaseTestModule self.params = object() for x in dir(self): val = getattr(self, x) if isinstance(val, Param): setattr(self.params, x, val)
# check mandatory parameters for name, param in self.params: if param.mandatory and not param.set: raise RecipeException("Parameter {} is mandatory".format(name))
def test(): raise Exception("Method test MUST be defined.")
class MyRecipe(lnst.BaseRecipe): #class-wide definition of requirements m1 = HostReq(arch="x86_64", ...) m1.if1 = IfaceReq(label="xyz", driver="mlx", ...)
#m1.params = IfaceReq() #m1.devs = IfaceReq() #raise Exception("param/devs name is a reserved keyword")
m2 = HostReq(param="val", ...) m2.if1 = IfaceReq(label="xyz", param="val", ...)
# idosch requested the ability to "blacklist" parameters, jpirko's # suggestion was to use regular expressions in the mapper. This should be # easy to do, though it's not entirely "pretty" since regexes are not very # good at doing negative conditions... maybe we can think of something # better?
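As an example of the regex suggestion (and of why it is not pretty), a blacklist like "any driver except mlx ones" can be written with a negative lookahead; this only shows the shape such a requirement value could take:

import re

driver_req = r"^(?!mlx).*$"   # matches anything that does not start with "mlx"

for driver in ("ixgbe", "mlx4_en", "e1000e"):
    print("%s -> %s" % (driver, bool(re.match(driver_req, driver))))
# ixgbe -> True
# mlx4_en -> False
# e1000e -> True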
#class-wide definition of Task parameters, with type checking, #possibly even type conversions (e.g. string to int), using the same #*Type classes as with Module parameters mtu = IntType(default=1500)
def __init__(self, **kwargs): super(MyRecipe, self).__init__()
#do something with kwargs interface_driver = kwargs["driver"] self.reqs.m1.if1.driver = interface_driver #adjust instance specific requirements self.reqs.m3 = HostReq(...)
def test(self): self.matched.m1.run(Module) self.matched.m1.run("command")
# optionally we can add support for something like this, but at the moment # we think it could be confusing and/or complicated to do right #def test(m1, m2, m3, m4,...): # m1.run(Module) # m2.run("command") # m1.params.arch == "x86_64"
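The IntType parameter used above is not defined anywhere yet; a possible minimal version with a default value and string-to-int conversion might look like this (the exception name is made up):

class RecipeParamError(Exception):
    pass

class IntType(object):
    def __init__(self, default=None, mandatory=False):
        self.default = default
        self.mandatory = mandatory

    def construct(self, value=None):
        if value is None:
            return self.default
        try:
            return int(value)    # accepts both 1500 and "1500"
        except (TypeError, ValueError):
            raise RecipeParamError("expected an integer, got %r" % (value,))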
================================================
3. Running Recipes:
my_test_script.py: #!/bin/python from MyRecipes import MyRecipe from lnst import Controller
recipe_instance = MyRecipe(mtu="5000")
ctl = Controller(config="/etc/lnst-ctl.conf") ctl.run(recipe_instance)
chmod +x my_test_script.py ./my_test_script.py args
OR
lnst-ctl -d run MyRecipe.py -- mtu=8000 # looks for the NAME class in the NAME.py file (MyRecipe in this case) for which # the condition "isinstance(NAME, BaseRecipe)" must be True
# could also run for all classes in the file where "isinstance(x, BaseRecipe)" # is True, with the option to restrict to a specific recipe class... lnst-ctl # rewritten to do the same as manually running the recipe from its own python # script
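A rough sketch of what the lnst-ctl side could do - load the given file, find every BaseRecipe subclass in it and run it with the "-- key=value" arguments turned into recipe parameters; the imports and the argument handling are simplified assumptions:

import imp
import sys

from lnst import BaseRecipe, Controller   # import paths are assumptions

# e.g. "-- mtu=8000" on the lnst-ctl command line -> {"mtu": "8000"}
recipe_kwargs = dict(arg.split("=", 1) for arg in sys.argv[1:])

module = imp.load_source("user_recipe", "MyRecipe.py")
ctl = Controller(config="/etc/lnst-ctl.conf")

for name in dir(module):
    cls = getattr(module, name)
    if isinstance(cls, type) and issubclass(cls, BaseRecipe) and cls is not BaseRecipe:
        ctl.run(cls(**recipe_kwargs))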
Aliases lose meaning - they're parameters passed to the MyRecipe __init__. When using the lnst-ctl CLI, use "-- recipe_params"? This might not work for multiple recipes, though.
================================================
4. Tester facing API:
#!!!! breakpoints!!!
class Controller: # This is the class which serves as an entry point to using LNST in a # library-like way to run a controller - as in running a Recipe from its # own executable python script # It's partially based on the old NetTestController class
def __init__(self, debug=0, mapper=MachineMapper, pools=[], pool_checks=True): # defines the logging level self._debug = debug self._mac_pool = MacPool() # class controlling logging self._log_ctl = LoggingCtl(debug, ...) # dictionary of dynamically created networks (libvirt) self._network_bridges = {}
# a ConnectionHandler for communicating with slaves self._msg_dispatcher = MessageDispatcher(self._log_ctl)
# a Mapper class - handling the matching of requirements to the Hosts # available in pools, you can define your own class that supports the # API (specified later) to do your own matching API, the basic idea is # for the Mapper to get requirements in a certain format and the pools # in a certain format and to generate mappings between these two self._mapper = mapper()
# pool loading - from a config file, and optionally restricted from the # pools argument, functionality copied from the NetTestController, I'm # thinking this could change (pools default value is grabbed from # config, so that the user can override them completely?) select_pools = {} conf_pools = ctl_config.get_pools() if len(pools) > 0: for pool_name in pools: if pool_name in conf_pools: select_pools[pool_name] = conf_pools[pool_name] elif len(pools) == 1 and os.path.isdir(pool_name): select_pools = {"cmd_line_pool": pool_name} else: raise NetTestError("Pool %s does not exist!" % pool_name) else: select_pools = conf_pools
# a PoolManager, that loads the slave description XML files and checks # the availability of test hosts. Defines an API to access the available # pools usable by the Mapper to match against requirements. I'm thinking # this could also be made an argument of the __init__ method when the # API is defined, the users could then replace both the Mapper and the # PoolManager... although I don't think it would be used very much... self._pools = SlavePoolManager(select_pools, pool_checks)
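The Mapper/PoolManager APIs are still to be specified; purely to illustrate the "requirements in, pools in, mappings out" idea mentioned above, a replacement mapper could have roughly this shape (all names and data formats here are placeholders):

class FirstFitMapper(object):
    """Maps every requested host to the next free pool machine, ignoring
    all interface requirements - only useful to show the expected shape."""

    def set_requirements(self, reqs):
        self._reqs = reqs      # e.g. {"m1": {...}, "m2": {...}}

    def set_pools(self, pools):
        self._pools = pools    # e.g. {"pool1": {"machine1": {...}, ...}}

    def matches(self, **kwargs):
        machines = [(pool, m_id)
                    for pool, pool_machines in self._pools.items()
                    for m_id in pool_machines]
        if len(machines) < len(self._reqs):
            return   # no possible mapping
        # a single naive mapping: i-th requirement -> i-th machine
        yield dict(zip(self._reqs.keys(), machines))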
def run(self, recipe, **mapper_kwargs): # this method implements the main loop of the recipe execution, the # mapper_kwargs are arguments passed to the mapper to influence the # mapping process, such as to try all matches, or to enable virtual # matches and so on
for match in self._mapper.matches(**mapper_kwargs): matched = True # uses the current match to map the Machine objects from the # PoolManager to create Host objects for the Recipe self._map_match(match) self._print_match_description(match) try: recipe.test() except Exception as exc: logging.error("Recipe execution terminated by unexpected exception") raise finally: for machine in self._machines.values(): machine.restore_system_config() self._cleanup_slaves()
Host objects available in self.matched.selector_name:
class Host: #name can change #attributes:
# dynamically filled object of Host attributes such as architecture and # so on. Use example in test() would look like this: # if host.params.arch == "x86": # I separated this into the "params" object so I can overwrite its # __getattr__ method and return None/UnknownParam exception for unknown # parameters, and to avoid name conflicts with other attributes # they should also be iterable by: # for param in host.params: params = object()
!!! params sounds odd? other possibilities 'property', 'description'? !!! m1.property.arch == "x86" ? !!! m1.description.arch == "x86" ? !!! we've also discussed that it would probably be easiest to just have a !!! predefined static set of these attributes that are checked on !!! lnst-slave startup and are sent to the Controller on connection.
# dynamically filled object of NetDevice objects accessible directly as the # object attributes: # host.eth0.set_ip(...) # I separated this into the "ifaces" object to avoid name conflicts with # other attributes # creation of new NetDevices should be possible through simple assignment: # m1.team0 = TeamDevice(...) # assignment of an incompatible Type or to an existing Device object will # raise an exception # to deconfigure+remove call m1.team0.remove() __devs = object()
# device iterator for the Host object: # for dev in host:
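The assignment-based device creation above ("m1.team0 = TeamDevice(...)") could be implemented on the Host object along these lines; Device, HostError and _create_on_slave are names invented for the sketch:

class HostError(Exception):
    pass

class Device(object):
    def _create_on_slave(self, host, name):
        pass    # would send the creation request to the slave

class Host(object):
    def __init__(self):
        self.__dict__["_devs"] = {}   # name -> Device

    def __setattr__(self, name, value):
        if isinstance(value, Device):
            if name in self._devs:
                raise HostError("device '%s' already exists" % name)
            self._devs[name] = value
            value._create_on_slave(self, name)   # configure it remotely
        else:
            self.__dict__[name] = value

    def __getattr__(self, name):
        # only called when normal lookup fails -> look among the devices
        if name == "_devs":
            raise AttributeError(name)
        try:
            return self._devs[name]
        except KeyError:
            raise AttributeError(name)

    def __iter__(self):
        # "for dev in host:"
        return iter(self._devs.values())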
def run(what, bg=False, fail=False, timeout=60, path="", json=False, netns, desc) # will run "what" on the remote host # "what" is either a Module object, or a string command that will be # executed as a bash command # "bg" when True, runs "what" on background - the run() call # immediately returns, and "timeout" is ignored, the background # process can be controlled through the returned Job object # "fail" if True then the Job is expected to fail, and will be reported # as PASSed if it does # "timeout" in seconds, determines how long to block test execution for # before killing the Job. Only when running in foreground # "path" changes the current working directory to the specified path # before "what" is executed and changes back after execution is # finished. # IGNORE FOR NOW # "json" if True will attempt to parse the returned stdout of the Job # as json into a dictionary # "netns" Job will be run in the specified network namespace # "desc" is a description message that will show up in logs and the # result summary # Returns a Job object # !! Exceptions?!?!
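For illustration, the arguments described above could be combined in a test() like this (hosts, addresses and commands are arbitrary examples):

# foreground job, blocks for up to 60 seconds (the default timeout)
m1.run("ethtool -K eth0 gro off")

# Module job with a description that shows up in the logs/result summary
m1.run(lnst.modules.Ping(dst="192.168.1.2", count=100),
       desc="basic connectivity check")

# background job controlled through the returned Job object
netserver = m2.run("netserver -D", bg=True)
m1.run("netperf -H 192.168.1.2 -l 30", timeout=60)
netserver.kill()

# a job that is expected to fail and is reported as PASS if it does
m1.run("ping -c 3 240.0.0.1", fail=True)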
# !!!THINK ABOUT THESE... def __set(path, value) # copied from old API, provides a shortcut for "echo $value # >/proc/or/sys/path" # and returns the original value when the test is finished
def sysfs_set(path, value): # check if path starts with "/sys"? self.__set(path, value)
def procfs_set(path, value): self.__set("/proc/"+path, value)
def send_file(ctl_path="", slave_path="", recursive=False) # copies the specified file from the controller to the specified # destination path, if recursive == True and ctl_path refers to a # directory it copies the entire directory
def recv_file(recv_path="", ctl_path="", recursive=False)
def {enable, disable}_service(service) # copied from old API, enables or disables the specified service
def del_device(name) # removes the specified device, probably easier (more logical?) to do # this than "devs.name = None", and "del devs.name" would be unreliable # !!! REMOVE
class Device: #DeviceAPI, InterfaceAPI? name can change... # attributes:
# dynamically created Device attributes such as driver and so on. Use # example in test() would look like this: # if host.devs.eth0.driver == "ixgbe": # achieved through rewriting of the __getattr__ method of the Device class # should return None or throw UnknownParam exception for unknown parameters # this should directly mirror the Device objects that are managed by the # InterfaceManager on the Slave # !!! predefine the params, assignment will set them, sync on set - wait # !!! on the slave for the set to finish and return the current status of # !!! everything else as well # eg: driver = something mtu = something ips = [IpAddress, ...]
class Job: #ProcessAPI? name can change... # The way Jobs are launched is different from how we handled them with XML # recipes -> on Slaves, ALL jobs are run in a separate process ("in # background"), without a timeout, and they all have an id (previously # foreground commands had id == None) # # The distinction of foreground/background Jobs is handled by the Host's # "run" method -> for background Jobs, the method immediately returns with # the Job handle (instance of this class), for foreground Jobs, the method # calls the wait method of this class (default timeout) and if it times out, # it calls the kill method of this class (SIGKILL) # # For now, I'm just considering Module and "Shell command" Jobs, system # configs will probably be handled in a different way... # # Job results can be picked up at any time the execution is handed over to # the LNST library - the message carrying results from the slave will be # handled by the connection handler and sent to the corresponding Job # object. This is relevant for background Jobs only as foreground Jobs will # always be in a finished state after the Host run method returns
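Condensed into code, the Controller-side fg/bg handling described above could look roughly like this (meant as a Host method; start_job and the Machine object are placeholders):

def run(self, what, bg=False, timeout=60, **kwargs):
    # every Job is started "in background" on the slave and gets an id
    job = self._machine.start_job(what, **kwargs)

    if bg:
        return job            # the caller controls it through the Job API

    # foreground semantics are emulated here on the Controller
    job.wait(timeout=timeout)
    if not job.finished:
        job.kill()            # SIGKILL by default
    return job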
#attributes:
# True if the Job finished, False if it's still running in the background finished = bool
# contains the result data returned by the Job result = object
# contains the stdout and stderr generated by the job, None for Module Jobs stdout = "" stderr = ""
# simple True/False value indicating success/failure of the Job passed = bool
def wait(timeout=0): # for background jobs, will wait until the job finishes # "timeout" in seconds, determines how long to wait for. After the timeout # is reached nothing happens, the status of the job can be checked with the # "finished" attribute. If timeout=0, then wait forever.
def kill(signalnum=signal.SIGKILL): # sends the specified signal to the process of the Job running in # background # if the Job finishes while this method call is being executed, the # job just finishes normally and the signal isn't sent # "signalnum" the signal to be sent, default is SIGKILL
def __str__(self): # for easy print m1.run(what) return stdout+stderr # what about Modules?
================================================
5. Result summary proposal
Since I've changed how Job execution is handled, I've also written down a proposal to change how we log Recipe results - the RESULTS SUMMARY logs at the end of a recipe run. I haven't started working on it yet, I've just written an example on paper which I'm copying here. Any comments are appreciated.
RESULTS SUMMARY:
Host m1 Job 1 XYZ PASS/FAIL
    Formatted results: ...
Host m2 Job 1 XYZ started
Host m1 Job 3 XYZ PASS/FAIL
    Formatted results: ...
Host m2 Job 1 XYZ PASS/FAIL
    Formatted results: ...
Custom summary record.... (optional PASS/FAIL)
    ... optional additional data
    ... i still need to figure out how this will look like
The main difference from the old results summary is that Jobs have numerical ids that are unique per host, and you ALWAYS see the id (previously only background commands had ids). Since all Jobs "run in the background" this will make matching the "started" and "finished" logs easier. There also won't be any more "kill cmd"/"intr cmd" logs here, since these commands don't exist anymore.
Since "all Jobs are in background", in reality all of them generate a "started" and a "finished" log; however, if these appear in direct sequence after each other they get shortened to just the PASS/FAIL log. This will also be true for background commands if there were no results to report between their start and finish.
Mon, Feb 06, 2017 at 09:52:57AM CET, olichtne@redhat.com wrote:
Hi everyone,
I've got a new version of the API spec file.
what changed:
- I've renamed some stuff - Task -> Recipe (since the class handles both
requirements and the test execution, calling it task makes no sense to me
- commented out the "def test(m1, m2, m3...)" method of the BaseRecipe
class, based on upstream discussion this might be confusing, hard to implement and if we can always decide later to add it (removing would be more problematic)
- Added 'Controller' to the Tester facing API - provided by the LNST
library to enable a tester to use the LNST controller from his own executable script
- Host - added some discussion about the name "params" - removed 'tool' argument from run(), kept the 'path' argument because i didn't know if jpirko meant to remove them both or just the second one
- Job - expanded basic description - how all Jobs will be technically in background, and fg/bg handling will be on the Controller.
- added proposal for changing how the Result summary will look like
I still haven't started working on the Device API due to reworking how Hosts will be matched and allocated (this is related to properly creating the Device objects...)
Attaching the doc here:
- test modules
class BaseTestModule: def __init__(self, **kwargs): #by defaults loads the params into self.params - no checks pseudocode: for x in vars(self): if isinstance(x, Param): param_class = self.getattr(x) try: val = kwargs[x] except KeyError: if param_class.is_mandatory(): raise TestModuleError("Option x is mandatory") self.setattr(x.params, param_class.construct(val)) del kwargs[x] for x in kwargs.keys(): log.error("Undefined parameter x") if len(kwargs): raise TestModuleError("Undefined TestModule parameters")
#check mandatory parameters for name, param in self.params: if param.mandatory and not param.set: raise TestModuleException("Parameter {} is mandatory".format(name))
def run(): #needs to be over-ridden - throw an exception to notify the test developer
class MyTest(BaseTestModule): param = ParamType(mandatory=True) param2 = ParamType2() param3 = Multiparam(ParamType())
#optional __init__ #def __init__(self, **kwargs): #super(MyTest).__init__(kwargs) #additional tester defined checks
def run(): #do my test #parameters available in self.params
#all imports used by the test module need to happen HERE, reason is #that the object is instantiated on the Controller and THEN sent to the #slave -> stuff imported on the Controller is not available on the Slave
#in Task: import lnst #module lnst.tests will dynamically look for module classes in configured #locations, similar to how we do it now #or you can import BaseTestmodule and define your test module directly in your recipe
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
================================================
- Recipe:
Renamed Task to Recipe... since the class represents the "whole" LNST workflow of setting requirements and defining the test
class BaseRecipe(object): def __init__(self): #initialize instance specific requirements self.requirements = Requirements() for x in dir(self): val = getattr(self, x) if instance(val, Requirement): setattr(self.requirements, x, val)
# check types of parameters, the Param class hierarchy is the same as # the one used by BaseTestModule self.params = object() for x in dir(self): val = getattr(self, x) if instance(val, Param): setattr(self.params, x, val) # check mandatory parameters for name, param in self.params: if param.mandatory and not param.set: raise RecipeException("Parameter {} is mandatory".format(name))
def test(): raise Exception("Method test MUST be defined.")
class MyRecipe(lnst.BaseRecipe): #class-wide definition of requirements m1 = HostReq(arch="x86_64", ...) m1.if1 = IfaceReq(label="xyz", driver="mlx", ...)
#m1.params = IfaceReq() #m1.devs = IfaceReq() #raise Exception("param/devs name is a reserved keyword")
m2 = HostReq(param="val", ...) m2.if1 = IfaceReq(label="xyz", param="val", ...)
# idosch requested the ability to "blacklist" parameters, jpirko's # suggestion was to use regular expressions in the mapper. This should be # easy to do, though it's not entirely "pretty" since regexes are not very # good at doing negative conditions... maybe we can think of something # better?
#class-wide definition of Task parameters, with type checking, #possibly even type conversions (e.g. string to int), using the same #*Type classes as with Module parameters mtu = IntType(default=1500)
def __init__(self, **kwargs): super(self, lnst.BaseTask).__init__()
#do something with kwargs interface_driver = kwargs["driver"] self.reqs.m1.if1.driver = interface_driver #adjust instance specific requirements self.reqs.m3 = HostReq(...)
def test(self): self.matched.m1.run(Module) self.matched.m1.run("command")
# optionally we can add support for something like this, but at the moment # we think it could be confusing and/or complicated to do right #def test(m1, m2, m3, m4,...): # m1.run(Module) # m2.run("command") # m1.params.arch == "x86_64"
================================================
- Running Recipes:
my_test_script.py: #!/bin/python from MyRecipes import MyRecipe from lnst import Controller
recipe_instance = MyRecipe(mtu="5000")
ctl = Controller(config="/etc/lnst-ctl.conf") ctl.run(recipe_instance)
chmod +x my_test_script.py ./my_test_script.py args
OR
lnst-ctl -d run MyRecipe.py -- mtu=8000 # looks for NAME class in the NAME.py file (MyRecipe in this case for which # the condition "isinstance(NAME, BaseRecipe)" must be True
# could also run for all classes in the file where "isinstance(x, BaseRecipe)" # is True. with the option to restrict to specific task class... lnst-ctl # rewritten to do the same as manually running the task from it's own python # script
Aliases lose meaning - they're parameters passed to the MyTask __init__, when using the lnst-ctl CLI, use "-- task_params"?? might not work for multiple tasks,
================================================
- Tester facing API:
#!!!! breakpoints!!!
class Controller: # This is the class which serves as an entry point to using LNST in a # library-like way to run a controller - as in running a Recipe from it's # own executable python script" # It's partially based on the old NetTestController class
def __init__(self, debug=0, mapper=MachineMapper, pools=[], pool_checks=True): # defines the logging level self._debug = debug self._mac_pool = MacPool() # class controlling logging self._log_ctl = LoggingCtl(debug, ...) # dictionary of dynamically created networks (libvirt) self._network_bridges = {}
# a ConnectionHandler for communicating with slaves self._msg_dispatcher = MessageDispatcher(self._log_ctl) # a Mapper class - handling the matching of requirements to the Hosts # available in pools, you can define your own class that supports the # API (specified later) to do your own matching API, the basic idea is # for the Mapper to get requirements in a certain format and the pools # in a certain format and to generate mappings between these two self._mapper = mapper() # pool loading - from a config file, and optionaly restricted from the # pools argument, functionality copied from the NetTestController, I'm # thinking this could change (pools default value is grabbed from # config, so that the user can override them completely?) select_pools = {} conf_pools = ctl_config.get_pools() if len(pools) > 0: for pool_name in pools: if pool_name in conf_pools: select_pools[pool_name] = conf_pools[pool_name] elif len(pools) == 1 and os.path.isdir(pool_name): select_pools = {"cmd_line_pool": pool_name} else: raise NetTestError("Pool %s does not exist!" % pool_name) else: select_pools = conf_pools # a PoolManager, that loads the slave description XML files and checks
Don't we want to convert those files from xml as well? Those would be the last xmls, right? INI?
# the availability of test hosts. Defines an API to access the available # pools usable by the Mapper to match against requirements. I'm thinking # this could also be made an argument of the __init__ method when the # API is defined, the users could then replace both the Mapper and the # PoolManager... although I don't think it would be used very much... self._pools = SlavePoolManager(select_pools, pool_checks)
def run(self, recipe, **mapper_kwargs): # this method implements the main loop of the recipe execution, the # mapper_kwargs are arguments passed to the mapper to influence the # mapping process, such as to try all matches, or to enable virtual # matches and so on
for match in self._mapper.matches(**mapper_kwargs): matched = True # uses the current match to map the Machine objects from the # PoolManager to create Host objects for the Recipe self._map_match(match) self._print_match_description(match) try: recipe.test() except Exception as exc: logging.error("Recipe execution terminated by unexpected exception") raise finally: for machine in self._machines.values(): machine.restore_system_config() self._cleanup_slaves()
Host objects available in self.matched.selector_name:
class Host: #name can change #attributes:
# dynamically filled object of Host attributes such as architecture and # so on. Use example in test() would look like this: # if host.params.arch == "x86": # I separated this into the "params" object so I can overwrite its # __getattr__ method and return None/UnknownParam exception for unknown # parameters, and to avoid name conflicts with other attributes # they should also be iterable by: # for param in host.params: params = object()
!!! params sounds odd? other possibilities 'property', 'description'? !!! m1.property.arch == "x86" ? !!! m1.description.arch == "x86" ?
"property" or "prop" or "props" sounds most correct to me.
!!! we've also discussed that it would probably be easiest to just have a !!! predefined static set of these attributes that are checked on !!! lnst-slave startup and are sent to the Controller on connection.
Ack.
# dynamically filled object of NetDevice objects accessible directly as the # object attributes: # host.eth0.set_ip(...) # I separated this into the "ifaces" object to avoid name conflicts with # other attributes # creation of new NetDevices should be possible through simple assignement: # m1.team0 = TeamDevice(...) # assignement of an incompatible Type or to an existing Device object will # return an exception # to deconfigure+remove call m1.team0.remove() __devs = object()
# device iterator for the Host object: # for dev in host:
def run(what, bg=False, fail=False, timeout=60, path="", json=False, netns, desc) # will run "what" on the remote host # "what" is either a Module object, or a string command that will be # executed as a bash command # "bg" when True, runs "what" on background - the run() call # immediately returns, and "timeout" is ignored, the background # process can be controlled through the returned Job object # "fail" if True then the Job is expected to fail, and will be reported # as PASSed if it does # "timeout" in seconds, determines how long to block test execution for # before killing the Job. Only when running in foreground # "path" changes the current working directory to the specified path # before "what" is executed and changes back after execution is # finished. # IGNORE FOR NOW # "json" if True will attempt to parse the returned stdout of the Job # as json into a dictionary # "netns" Job will be run in the specified network namespace # "desc" is a description message that will show up in logs and the # result summary # Returns a Job object # !! Exceptions?!?!
# !!!THINK ABOUT THESE...
def __set(path, value) # copied from old API, provides a shortcut for "echo $value # >/proc/or/sys/path" # and returns the original value when the test is finished
def sysfs_set(path, value): # check if path starts with "/sys"? self.__set(path, value)
def procfs_set(path, value): self.__set("/proc/"+path, value)
def send_file(ctl_path="", slave_path="", recursive=False) # copies the specified file from the controller to the specified # destination path, if recursive == True and srcpath refers to a # directory it copies the entire directory
def recv_file(recv_path="", ctl_path="", recursive=False)
def {enable, disable}_service(service) # copied from old API, enables or disables the specified service
def del_device(name) # removes the specified device, probably easier (more logical?) to do # this then "devs.name = None" and "del devs.name" would be unreliable # !!! REMOVE
class Device: #DeviceAPI, InterfaceAPI? name can change... # attributes:
# dynamically created Device attributes such as driver and so on. Use # example in test() would look like this: # if host.devs.eth0.driver == "ixgbe": # achieved through rewriting of the __getattr__ method of the Device class # should return None or throw UnknownParam exception for unknown parameters # this should directly mirror the Device objects that are managed by the # InterfaceManager on the Slave # !!! predefine the params, assignment will set them, sync on set - wait # !!! on the slave for the set to finish and return the current status of # !!! everything else as well # eg: driver = something mtu = something ips = [IpAddress, ...]
class Job: #ProcessAPI? name can change... # The way Jobs are launched is different from how we handled them with XML # recipes -> on Slaves, ALL jobs are run in a separate process ("in # background"), without a timeout, and they all have an id (previously # foreground commands had id == None) # # The distinction of foreground/background Jobs is handled by the Hosts # "run" method -> for background Jobs, the method immediately returns with # the Job handle (instance of this class), for foreground Jobs, the method # calls the wait method of this class (default timeout) and if it times out, # it calls the kill method of this class (SIGKILL) # # For now, I'm just considering Module and "Shell command" Jobs, system # configs will probably be handled in a different way... # # Job results can be picked up at any time the execution is handed over to # the LNST library - the message carrying results from the slave will be # handled by the connection handler and sent to the correspoding Job # object. This is relevant for background Jobs only as foreground Jobs will # always be in a finished state after the Host run method returns
#attributes:
# True if the Job finished, False if it's still running in the background finished = bool
# contains the result data returned by the Job result = object
# contain the stdout and stderr generated by the job, None for Module Jobs stdout = "" stderr = ""
# simple True/False value indicating success/failure of the Job passed = bool
def wait(timeout=0): # for background jobs, will wait until the job finished # "timeout" in seconds, determines how long to wait for. After timeout # reached, nothing happens, status of the job can be checked with the # "finished" attribute. If timeout=0, then wait forever.
def kill(signalnum=signal.SIGKILL): # sends the specified signal to the process of the Job running in # background # in case the Job finished while this method call is executed then the # job just finishes normally and the signal isn't sent # "signalnum" the signal to be sent, default is SIGKILL
def __str__(self): # for easy print m1.run(what) return stdout+stderr # what about Modules?
================================================
- Result summary proposal
Since I've changed how Job execution is handled, I've also wrote down a proposal to change how we log Recipe results - the RESULTS SUMMARY logs at the end of a recipe run. I haven't started working on it yet, I've just wrote an example on paper which I'm copying here. Any comments are appreciated.
RESULTS SUMMARY: Host m1 Job 1 XYZ PASS/FAIL Formatted results: ... Host m2 Job 1 XYZ started Host m1 Job 3 XYZ PASS/FAIL Formatted results: ... Host m2 Job 1 XYZ PASS/FAIL Formatted results: ... Custom summary record.... (optional PASS/FAIL) ... optional additional data ... i still need to figure out how this will look like
The main difference to the old results summary is that Jobs have numerical ids that are unique per host, and you ALWAYS see the id (previously only background commands had ids). Since all Jobs "run in background" this will make matching "started" "finished" logs easier. There also won't be any more "kill cmd" "intr cmd" logs here since these commands don't exist anymore.
Since "all Jobs are in background" it means that in reality all of them generate a "started" and "finished" log, however, if these are in a direct sequence after each other they get shortened to just the PASS/FAIL log. This will also be true for background commands if there were no results to report between their start and finish.
Looks great to me. Please try to hunt down jbenc to review this. Thanks!
Mon, Feb 06, 2017 at 09:52:57AM CET, olichtne@redhat.com wrote:
Hi everyone,
I've got a new version of the API spec file.
ccing Petr.
what changed:
- I've renamed some stuff - Task -> Recipe (since the class handles both
requirements and the test execution, calling it task makes no sense to me
- commented out the "def test(m1, m2, m3...)" method of the BaseRecipe
class, based on upstream discussion this might be confusing, hard to implement and if we can always decide later to add it (removing would be more problematic)
- Added 'Controller' to the Tester facing API - provided by the LNST
library to enable a tester to use the LNST controller from his own executable script
- Host - added some discussion about the name "params" - removed 'tool' argument from run(), kept the 'path' argument because i didn't know if jpirko meant to remove them both or just the second one
- Job - expanded basic description - how all Jobs will be technically in background, and fg/bg handling will be on the Controller.
- added proposal for changing how the Result summary will look like
I still haven't started working on the Device API due to reworking how Hosts will be matched and allocated (this is related to properly creating the Device objects...)
Hi everyone,
as promised at the upstream meeting on Monday, I'm sending a new version of the spec file.
what changed:
* imports for TestModules no longer need to be inside the 'run()' method. This is because using the dill library for sending object instances doesn't work as expected. I had to move to sending the entire module source file to the slave, where it gets dynamically imported. This way I can send an object instance from the controller to the slave using just the normal cPickle module.
* added a comment from jpirko to the Job implementation -> think about supporting select() calls on background Jobs.
* added a section for the Device API. This includes an entire redesign of how Devices are handled by both the Controller and the Slave. The section describes the general approach of unifying the implementation for both the Controller and the Slave, and it describes the functionality of the base Device class that represents a simple hardware Ethernet device. I'm sure things are missing as I'm still working on it, so feel free to send comments about what you would like to see there.
As usual comments are encouraged and appreciated.
Attaching the doc here:
1. test modules
class BaseTestModule: def __init__(self, **kwargs): #by defaults loads the params into self.params - no checks pseudocode: for x in vars(self): if isinstance(x, Param): param_class = self.getattr(x) try: val = kwargs[x] except KeyError: if param_class.is_mandatory(): raise TestModuleError("Option x is mandatory") self.setattr(x.params, param_class.construct(val)) del kwargs[x] for x in kwargs.keys(): log.error("Undefined parameter x") if len(kwargs): raise TestModuleError("Undefined TestModule parameters")
#check mandatory parameters for name, param in self.params: if param.mandatory and not param.set: raise TestModuleException("Parameter {} is mandatory".format(name))
def run(): #needs to be over-ridden - throw an exception to notify the test developer
class MyTest(BaseTestModule): param = ParamType(mandatory=True) param2 = ParamType2() param3 = Multiparam(ParamType())
#optional __init__ #def __init__(self, **kwargs): #super(MyTest, self).__init__(**kwargs) #additional tester defined checks
def run(): #do my test #parameters available in self.params
#all imports used by the test module need to happen HERE, reason is #that the object is instantiated on the Controller and THEN sent to the #slave -> stuff imported on the Controller is not available on the Slave
#NOT ANYMORE!!! since the dill module can't send classes imported from a #module, just classes defined in the __main__ module, I had to discard #this option. This led me to just sending the whole module file (.py) #to the slave, dynamically importing it and then sending the module #instance. This can be done with the basic cPickle module and it also #means that we can do our imports at the top of the module instead of #in the run() method...
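Sketched in code, the "send the source file, import it on the slave, then unpickle the instance" flow could work roughly as follows; the helper names, the message format and the temporary path are all made up for the example:

import cPickle
import imp
import inspect
import os

# Controller side
def pack_test_module(module_instance):
    src_path = inspect.getsourcefile(type(module_instance))
    with open(src_path) as f:
        source = f.read()
    return {"name": type(module_instance).__module__,
            "source": source,
            "instance": cPickle.dumps(module_instance)}

# Slave side
def unpack_test_module(msg, tmp_dir="/tmp/lnst-modules"):
    if not os.path.isdir(tmp_dir):
        os.makedirs(tmp_dir)
    path = os.path.join(tmp_dir, msg["name"] + ".py")
    with open(path, "w") as f:
        f.write(msg["source"])
    # importing under the original module name lets cPickle resolve the class
    imp.load_source(msg["name"], path)
    return cPickle.loads(msg["instance"])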
#in Task: import lnst #module lnst.tests will dynamically look for module classes in configured #locations, similar to how we do it now #or you can import BaseTestModule and define your test module directly in your recipe
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
================================================
2. Recipe: Renamed Task to Recipe... since the class represents the "whole" LNST workflow of setting requirements and defining the test
class BaseRecipe(object): def __init__(self): #initialize instance specific requirements self.requirements = Requirements() for x in dir(self): val = getattr(self, x) if isinstance(val, Requirement): setattr(self.requirements, x, val)
# check types of parameters, the Param class hierarchy is the same as # the one used by BaseTestModule self.params = object() for x in dir(self): val = getattr(self, x) if isinstance(val, Param): setattr(self.params, x, val)
# check mandatory parameters for name, param in self.params: if param.mandatory and not param.set: raise RecipeException("Parameter {} is mandatory".format(name))
def test(): raise Exception("Method test MUST be defined.")
class MyRecipe(lnst.BaseRecipe): #class-wide definition of requirements m1 = HostReq(arch="x86_64", ...) m1.if1 = IfaceReq(label="xyz", driver="mlx", ...)
#m1.params = IfaceReq() #m1.devs = IfaceReq() #raise Exception("param/devs name is a reserved keyword")
m2 = HostReq(param="val", ...) m2.if1 = IfaceReq(label="xyz", param="val", ...)
# idosch requested the ability to "blacklist" parameters, jpirko's # suggestion was to use regular expressions in the mapper. This should be # easy to do, though it's not entirely "pretty" since regexes are not very # good at doing negative conditions... maybe we can think of something # better?
#class-wide definition of Task parameters, with type checking, #possibly even type conversions (e.g. string to int), using the same #*Type classes as with Module parameters mtu = IntType(default=1500)
def __init__(self, **kwargs): super(MyRecipe, self).__init__()
#do something with kwargs interface_driver = kwargs["driver"] self.reqs.m1.if1.driver = interface_driver #adjust instance specific requirements self.reqs.m3 = HostReq(...)
def test(self): self.matched.m1.run(Module) self.matched.m1.run("command")
# optionally we can add support for something like this, but at the moment # we think it could be confusing and/or complicated to do right #def test(m1, m2, m3, m4,...): # m1.run(Module) # m2.run("command") # m1.params.arch == "x86_64"
================================================
3. Running Recipes:
my_test_script.py: #!/bin/python from MyRecipes import MyRecipe from lnst import Controller
recipe_instance = MyRecipe(mtu="5000")
ctl = Controller(config="/etc/lnst-ctl.conf") ctl.run(recipe_instance)
chmod +x my_test_script.py ./my_test_script.py args
OR
lnst-ctl -d run MyRecipe.py -- mtu=8000 # looks for the NAME class in the NAME.py file (MyRecipe in this case) for which # the condition "isinstance(NAME, BaseRecipe)" must be True
# could also run for all classes in the file where "isinstance(x, BaseRecipe)" # is True, with the option to restrict to a specific recipe class... lnst-ctl # rewritten to do the same as manually running the recipe from its own python # script
Aliases lose meaning - they're parameters passed to the MyRecipe __init__. When using the lnst-ctl CLI, use "-- recipe_params"? This might not work for multiple recipes, though.
================================================
4. Tester facing API:
#!!!! breakpoints!!!
class Controller: # This is the class which serves as an entry point to using LNST in a # library-like way to run a controller - as in running a Recipe from its # own executable python script # It's partially based on the old NetTestController class
def __init__(self, debug=0, mapper=MachineMapper, pools=[], pool_checks=True): # defines the logging level self._debug = debug self._mac_pool = MacPool() # class controlling logging self._log_ctl = LoggingCtl(debug, ...) # dictionary of dynamically created networks (libvirt) self._network_bridges = {}
# a ConnectionHandler for communicating with slaves self._msg_dispatcher = MessageDispatcher(self._log_ctl)
# a Mapper class - handling the matching of requirements to the Hosts # available in pools, you can define your own class that supports the # API (specified later) to do your own matching API, the basic idea is # for the Mapper to get requirements in a certain format and the pools # in a certain format and to generate mappings between these two self._mapper = mapper()
# pool loading - from a config file, and optionally restricted from the # pools argument, functionality copied from the NetTestController, I'm # thinking this could change (pools default value is grabbed from # config, so that the user can override them completely?) select_pools = {} conf_pools = ctl_config.get_pools() if len(pools) > 0: for pool_name in pools: if pool_name in conf_pools: select_pools[pool_name] = conf_pools[pool_name] elif len(pools) == 1 and os.path.isdir(pool_name): select_pools = {"cmd_line_pool": pool_name} else: raise NetTestError("Pool %s does not exist!" % pool_name) else: select_pools = conf_pools
# a PoolManager, that loads the slave description XML files and checks # the availability of test hosts. Defines an API to access the available # pools usable by the Mapper to match against requirements. I'm thinking # this could also be made an argument of the __init__ method when the # API is defined, the users could then replace both the Mapper and the # PoolManager... although I don't think it would be used very much... self._pools = SlavePoolManager(select_pools, pool_checks)
    def run(self, recipe, **mapper_kwargs):
        # this method implements the main loop of the recipe execution; the
        # mapper_kwargs are arguments passed to the mapper to influence the
        # mapping process, such as to try all matches, or to enable virtual
        # matches and so on
        for match in self._mapper.matches(**mapper_kwargs):
            matched = True
            # uses the current match to map the Machine objects from the
            # PoolManager to create Host objects for the Recipe
            self._map_match(match)
            self._print_match_description(match)
            try:
                recipe.test()
            except Exception as exc:
                logging.error("Recipe execution terminated by unexpected exception")
                raise
            finally:
                for machine in self._machines.values():
                    machine.restore_system_config()
                self._cleanup_slaves()
Host objects available in self.matched.selector_name:
class Host:  # name can change
    # attributes:
    # dynamically filled object of Host attributes such as architecture and
    # so on. Use example in test() would look like this:
    #     if host.params.arch == "x86":
    # I separated this into the "params" object so I can overwrite its
    # __getattr__ method and return None/UnknownParam exception for unknown
    # parameters, and to avoid name conflicts with other attributes
    # they should also be iterable by:
    #     for param in host.params:
    params = object()

    !!! params sounds odd? other possibilities 'property', 'description'?
    !!!     m1.property.arch == "x86" ?
    !!!     m1.description.arch == "x86" ?
    !!! we've also discussed that it would probably be easiest to just have a
    !!! predefined static set of these attributes that are checked on
    !!! lnst-slave startup and are sent to the Controller on connection.
    # dynamically filled object of NetDevice objects accessible directly as
    # the object attributes:
    #     host.eth0.set_ip(...)
    # I separated this into the "ifaces" object to avoid name conflicts with
    # other attributes
    # creation of new NetDevices should be possible through simple assignment:
    #     m1.team0 = TeamDevice(...)
    # assignment of an incompatible type or to an existing Device object will
    # raise an exception
    # to deconfigure+remove, call m1.team0.remove()
    # (a short usage sketch follows the attribute list below)
    __devs = object()
    # device iterator for the Host object:
    #     for dev in host:
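    # A hedged illustration of the assignment-based device handling described
    # above; TeamDevice and its parameters are placeholders, not a finished
    # API:
    #
    #     m1.team0 = TeamDevice(slaves=[m1.eth0, m1.eth1])  # created on the Slave
    #     m1.team0.set_ip("192.168.10.1/24")                # hypothetical setter
    #
    #     for dev in m1:               # iterate all devices of the Host
    #         print dev.name
    #
    #     m1.team0.remove()            # deconfigure + remove the device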
    def run(what, bg=False, fail=False, timeout=60, path="", json=False,
            netns=None, desc=None):
        # will run "what" on the remote host
        # "what" is either a Module object, or a string command that will be
        #     executed as a bash command
        # "bg" when True, runs "what" in the background - the run() call
        #     immediately returns, and "timeout" is ignored; the background
        #     process can be controlled through the returned Job object
        # "fail" if True then the Job is expected to fail, and will be
        #     reported as PASSed if it does
        # "timeout" in seconds, determines how long to block test execution
        #     before killing the Job. Only applies when running in the
        #     foreground
        # "path" changes the current working directory to the specified path
        #     before "what" is executed and changes back after execution is
        #     finished. # IGNORE FOR NOW
        # "json" if True will attempt to parse the returned stdout of the Job
        #     as json into a dictionary
        # "netns" the Job will be run in the specified network namespace
        # "desc" is a description message that will show up in logs and the
        #     result summary
        # Returns a Job object
        # !! Exceptions?!?!
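    # A hedged usage sketch of run(); the Ping module mirrors the test module
    # example earlier in this file, the command string and addresses are only
    # illustrative:
    #
    #     ping = lnst.modules.Ping(dst="192.168.10.2", count=100, interval=0.1)
    #     job = m1.run(ping, bg=True)             # returns a Job handle immediately
    #     m1.run("ethtool -i eth0", timeout=30)   # foreground shell command
    #     job.wait(timeout=120)                   # wait for the background ping
    #     if not job.passed:
    #         print job.result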
    # !!!THINK ABOUT THESE...
    def __set(path, value):
        # copied from old API, provides a shortcut for "echo $value
        # >/proc/or/sys/path"
        # and returns the original value when the test is finished
    def sysfs_set(path, value):
        # check if path starts with "/sys"?
        self.__set(path, value)

    def procfs_set(path, value):
        self.__set("/proc/" + path, value)
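    # Hedged examples of the shortcut methods above; the paths are only
    # illustrative:
    #
    #     m1.sysfs_set("/sys/class/net/eth0/tx_queue_len", 2000)
    #     m1.procfs_set("sys/net/ipv4/ip_forward", 1)   # relative to /proc/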
    def send_file(ctl_path="", slave_path="", recursive=False):
        # copies the specified file from the controller to the specified
        # destination path on the slave; if recursive == True and ctl_path
        # refers to a directory it copies the entire directory
    def recv_file(recv_path="", ctl_path="", recursive=False):
        # presumably the reverse of send_file - copies a file (or, with
        # recursive == True, a directory) from the slave back to the controller
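    # Hedged usage of the file transfer helpers; the paths are illustrative
    # and recv_path is assumed to be the path on the slave:
    #
    #     m1.send_file(ctl_path="tools/setup.sh", slave_path="/tmp/setup.sh")
    #     m1.run("bash /tmp/setup.sh")
    #     m1.recv_file(recv_path="/tmp/setup.log", ctl_path="logs/m1-setup.log")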
def {enable, disable}_service(service) # copied from old API, enables or disables the specified service
    def del_device(name):
        # removes the specified device; probably easier (more logical?) to do
        # this than "devs.name = None", and "del devs.name" would be unreliable
        # !!! REMOVE
class Job:  # ProcessAPI? name can change...
    # The way Jobs are launched is different from how we handled them with XML
    # recipes -> on Slaves, ALL jobs are run in a separate process ("in the
    # background"), without a timeout, and they all have an id (previously
    # foreground commands had id == None)
    #
    # The distinction between foreground and background Jobs is handled by the
    # Host's "run" method -> for background Jobs, the method immediately
    # returns with the Job handle (an instance of this class); for foreground
    # Jobs, the method calls the wait method of this class (default timeout)
    # and if it times out, it calls the kill method of this class (SIGKILL)
    #
    # For now, I'm just considering Module and "shell command" Jobs; system
    # configs will probably be handled in a different way...
    #
    # Job results can be picked up at any time the execution is handed over to
    # the LNST library - the message carrying results from the slave will be
    # handled by the connection handler and sent to the corresponding Job
    # object. This is relevant for background Jobs only, as foreground Jobs
    # will always be in a finished state after the Host run method returns
#NOTE!!! simulate file descriptor??? so select() works
#attributes:
    # True if the Job finished, False if it's still running in the background
    finished = bool

    # contains the result data returned by the Job
    result = object

    # contain the stdout and stderr generated by the job, None for Module Jobs
    stdout = ""
    stderr = ""

    # simple True/False value indicating success/failure of the Job
    passed = bool
    def wait(timeout=0):
        # for background jobs, will wait until the job finishes
        # "timeout" in seconds, determines how long to wait. After the timeout
        # is reached nothing happens; the status of the job can be checked
        # with the "finished" attribute. If timeout=0, then wait forever.
    def kill(signalnum=signal.SIGKILL):
        # sends the specified signal to the process of the Job running in the
        # background
        # in case the Job finishes while this method call is being executed,
        # the job just finishes normally and the signal isn't sent
        # "signalnum" the signal to be sent, default is SIGKILL
    def __str__(self):
        # for easy print m1.run(what)
        return stdout + stderr
        # what about Modules?
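A hedged sketch of controlling a background Job through this API; the commands
and addresses are only illustrative:

    import signal

    tcpdump = m1.run("tcpdump -i eth0 -w /tmp/capture.pcap", bg=True)
    m2.run("ping -c 10 192.168.10.1")   # foreground traffic while tcpdump runs

    tcpdump.kill(signal.SIGINT)         # let tcpdump terminate cleanly
    tcpdump.wait(timeout=5)
    if tcpdump.finished and tcpdump.passed:
        print tcpdump.stdout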
================================================
5. Result summary proposal
Since I've changed how Job execution is handled, I've also written down a proposal to change how we log Recipe results - the RESULTS SUMMARY logs at the end of a recipe run. I haven't started working on it yet, I've just written an example on paper which I'm copying here. Any comments are appreciated.
RESULTS SUMMARY:
Host m1 Job 1   XYZ     PASS/FAIL
                        Formatted results:
                        ...
Host m2 Job 1   XYZ     started
Host m1 Job 3   XYZ     PASS/FAIL
                        Formatted results:
                        ...
Host m2 Job 1   XYZ     PASS/FAIL
                        Formatted results:
                        ...
Custom summary record.... (optional PASS/FAIL)
                        ... optional additional data
                        ... I still need to figure out how this will look
The main difference from the old results summary is that Jobs have numerical ids that are unique per host, and you ALWAYS see the id (previously only background commands had ids). Since all Jobs "run in the background", this will make matching the "started" and "finished" logs easier. There also won't be any more "kill cmd"/"intr cmd" logs here, since these commands don't exist anymore.
Since "all Jobs are in background" it means that in reality all of them generate a "started" and "finished" log, however, if these are in a direct sequence after each other they get shortened to just the PASS/FAIL log. This will also be true for background commands if there were no results to report between their start and finish.
================================================
6. Devices
I've removed the Device API description from the "Tester facing API" section and moved everything here. The reason is that this is both a description of the tester-facing API for Devices and a description of a complete rewrite of how LNST handles Devices.
The rough idea is to have a unified implementation of "Devices" for both the Controller and the Slave, since on the Controller we were always just calling "empty wrapper methods" that map to real methods on the Slave. This functionality is similar to what the patchset from Nogah Frankel achieved, but IMO takes it a step further, though for now it's only for Devices (I'm still thinking about the possibility of expanding this to the entire Slave instance).
First of all, when I say "unified implementation", it means that there is no implementation of any "Device" class present on the slave statically. All the implementation (creation/destruction, getters/setters, logic, everything) is present on the Controller, and it is sent to the Slave at the start of the recipe execution. This is kind of similar to how we've done resource syncing (test tools and test modules), but for Device classes/modules.
The downside of this is that all python import dependencies must be available not only on the Slave but on the Controller as well... Right now that just means pyroute2, which was already a dependency due to how we've wrapped socket.read in a Common module, but in the future that could mean other modules as well...
Everything related to Devices is located in the python package lnst.Devices; here's the directory structure:
lnst/Controller/        # the Controller package
    /Slave/             # the Slave package
    /Common/            # common package, present on both the Controller and
                        # Slave
    /RecipeCommon/      # common recipe modules, present only on Controller
    /Devices/           # Devices package, present only on Controller
        /__init__.py        # package wide definitions
        /Device.py          # defines the base Device class, representing a
                            # "basic ethernet" device, implements functionality
                            # common for all devices (up/down, netlink, etc...)
        /RemoteDevice.py    # defines a RemoteDevice class that wraps all
                            # Device derived classes and provides access to
                            # their instances on the Slave
                            # it also defines a remotedev_decorator decorator
        /SoftDevice.py      # defines a SoftDevice class that all software
                            # devices derive from; currently it doesn't do
                            # anything, it just exists for class hierarchy
                            # organization, but that might change
        /VirtualDevice.py   # defines a VirtualDevice class that derives from
                            # the RemoteDevice class, specifically bound to a
                            # Device class. Creation of a VirtualDevice object
                            # will cause the creation of a libvirt device that
                            # on the Slave will automatically be represented
                            # by a Device instance.
        /BridgeDevice.py    # and any number of other modules defining classes
                            # derived from the SoftDevice class; these need to
                            # define the create and destroy methods and will
                            # implement their own specific functionality (a
                            # small sketch of such a class follows at the end
                            # of this section).
With that, I'll describe the API of the base Device class, which, at this point, is the most complete one, since it's mostly copied over from the old Device class used on the Slave.
class Device:
    # attributes are predefined by the Class definition. If there's something
    # missing it can simply be added to the implementation, and since there's
    # only one implementation for both the Controller and the Slave, the
    # change will be automatically reflected on both.
    # Each specific class derived from the Device class can of course define
    # its own attributes... The recommended way is to either directly define
    # an attribute that can be accessed, or to define a method and decorate it
    # with the @property decorator if the value getter is more complicated.
    # current attributes:
    if_index = Int          # unique interface index used by the kernel
    ifi_type = Int          # type of device, @property reading from a netlink
                            # message
    name = string           # name of the device, @property reading from a
                            # netlink message
    hwaddr = string         # hw address of the device, @property reading from
                            # a netlink message
    state = 'UP' or 'DOWN'  # state of the device, @property reading from a
                            # netlink message
    driver = string         # implemented as @property reading from ethtool
    mtu = something         # implemented as @property reading from a netlink
                            # message
    ips = [IpAddress, ...]  # IpAddress objects should use the Common.Params
                            # IpParam class
    master = Device object  # @property checks IFLA_MASTER in a netlink
                            # message and searches for a Device derived object
                            # in the interface manager
    link_stats = dict       # @property parsed dictionary from an
                            # 'ip -s link show' output
    # RemoteDevice class implementation allows for iteration of all
    # attributes not starting with '_' or '__' on the device it represents,
    # e.g.:
    #     for name, val in self.matched.m1.eth0:
    #         print "%s = %s" % (name, val)
    #
    # would print all attributes associated with this Device
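    # A hedged usage sketch reading these attributes from a recipe's test();
    # the host/device names and the driver string are only illustrative:
    #
    #     dev = self.matched.m1.eth0
    #     if dev.driver == "ixgbe" and dev.state == 'UP':
    #         print "%s: hwaddr=%s mtu=%d" % (dev.name, dev.hwaddr, dev.mtu)
    #     for ip in dev.ips:
    #         print ip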
    # methods:
    def __init__(self, if_manager):
        # needs a reference to the interface manager (to optionally access
        # other devices when relevant, e.g. master/slave relationships)
        #
        # Remember, these objects are created by the Slave, so this isn't a
        # tester-facing method, but it's important to describe it in case you
        # want to create your own Device or Device-derived class.
    def create(self):
        # raises an Exception - you can't create hardware... this method is
        # here to be overridden by subclasses defining software devices
    def destroy(self):
        # returns True, does nothing - you also can't destroy hardware... but
        # since this method is called automatically during deconfiguration it
        # wouldn't be a good idea to raise an Exception.
        #
        # I think I might need to reconsider this and also raise an Exception,
        # if I can handle it more nicely in the implementation, which I'm not
        # sure about yet...
    def _set_devlink(self):
        # copied from old class
        # TODO figure out if devlink still works with these classes
    def _init_netlink(self, nl_msg):
        # initializes this object instance with data from the supplied netlink
        # message

    def _update_netlink(self, nl_msg):
        # updates object data based on the contents of the netlink message
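    # A rough sketch (not the actual implementation) of what these two methods
    # might do, assuming nl_msg is a pyroute2 ifinfmsg-style message and that
    # the @property getters read from the stored message:
    #
    #     def _init_netlink(self, nl_msg):
    #         self._if_index = nl_msg['index']
    #         self._nl_msg = nl_msg
    #
    #     def _update_netlink(self, nl_msg):
    #         if nl_msg['index'] == self._if_index:
    #             self._nl_msg = nl_msg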
def up(self): # sets link state UP
def down(self): # sets link state DOWN
def clear_ips(self): # clears all configured ip addresses of the device
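To illustrate the SoftDevice idea mentioned in the directory structure above,
here is a minimal sketch of what a derived class could look like. This is not
the actual LNST implementation - the class name, the use of pyroute2 and the
self.name attribute are all assumptions:

    from pyroute2 import IPRoute

    class DummyDevice(SoftDevice):
        def create(self):
            # create the kernel device; a dummy interface is the simplest case
            ipr = IPRoute()
            try:
                ipr.link("add", ifname=self.name, kind="dummy")
            finally:
                ipr.close()

        def destroy(self):
            # look the device up by name and delete it
            ipr = IPRoute()
            try:
                idx = ipr.link_lookup(ifname=self.name)[0]
                ipr.link("del", index=idx)
            finally:
                ipr.close()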
Tue, Nov 29, 2016 at 10:50:56AM CET, olichtne@redhat.com wrote:
Hi all,
for the past couple of weeks I've been going over the meeting recordings we've had wrt the new Python API of LNST. I've been collecting everything into a single file that I'm appending to this email. I'm sending it here so that everyone can join the discussion before the implementation itself begins. I'll warn you thougn... it's LONG :)
!!!NOTE it's not complete yet, I'm sending it now because we have an upstream meeting planned for later today, namely Device/Interface API is not complete.
I'm sorry I missed that meeting (I believe I did :))
The structure of the file is following:
- commented pseudo code of how Test Modules will look like - they'll be
instantiated on the Controller and send ad-hoc to the slave where they'll be executed --> no more synchronization on test start...
- commented pseudo code of how Tasks will look like, they'll define
both the network requirements and the test execution as well.
short rough idea of how the tests/recipes will be executed.
1st version of the API "specification"/documentation. Here I tried to
go through the current *API objects we currently have and make them more "Pythonic", thinking of how they'll be used from a Task. I tried writing it as class-method-attribute definitions with some documentation so hopefully it makes some sense... Like I've said before, Device/Interfaces are not complete so there's a lot missing there.
Please take a look and provide feedback. I'm sure there are other parts in addition to Device/Interface APIs that are missing something so I'll appreciate any help :).
Thanks for doing this! Will review the updated document in the follow-up patch.
================================================================================ new_api file:
- test modules
class BaseTestModule: def __init__(self, **kwargs): #by defaults loads the params into self.params - no checks pseudocode: for x in vars(self): if isinstance(x, BaseType): param_class = self.getattr(x) try: val = kwargs[x] except KeyError: if param_class.is_mandatory(): raise TestModuleError("Option x is mandatory") self.setattr(x.params, param_class.construct(val)) del kwargs[x] for x in kwargs.keys(): log.error("Undefined parameter x") if len(kwargs): raise TestModuleError("Undefined TestModule parameters")
def run(): #needs to be over-ridden - throw an exception to notify the test developer
class MyTest(BaseTestModule): param = ParamType() param2 = ParamType2() param3 = Multiparam(ParamType())
#optional __init__ #def __init__(self, **kwargs): #super(MyTest).__init__(kwargs) #additional tester defined checks
def run(): #do my test #parameters available in self.params
#in Task: import lnst #module lnst.modules will dynamically look for module classes in configured #locations, similar to how we do it now
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
================================================
- Tasks:
class BaseTask(object): def __init__(self): #initialize instance specific requirements self.requirements = Requirements() for x in dir(self): val = getattr(self, x) setattr(self.requirements, x, val)
def test(): raise Exception("Method test MUST be defined.")
class MyTask(lnst.BaseTask): #class-wide definition of requirements m1 = HostSel(param="val", ...) m1.if1 = IfaceSel(l2net="xyz", param="val", ...)
m2 = HostSel(param="val", ...) m2.if1 = IfaceSel(l2net="xyz", param="val", ...)
def __init__(self, **kwargs): super(self, lnst.BaseTask).__init__()
#do something with kwargs #adjust instance specific requirements self.requirements.m3 = HostSel(...)
def test(): self.matched.m1.run(Module) self.matched.m1.run("command") #or def test(m1, m2): m1.run(Module) m2.run("command")
================================================
- Running Tasks:
from MyTasks import MyTask import lnst
task_instance = MyTask(params)
lnst(args) lnst.run(task_instance)
OR
lnst-ctl -d run MyTask.py -- task_params # looks for NAME class in the NAME.py file (MyTask in this case for which # the condition "isinstance(NAME, BaseTask)" must be True
# could also run for all classes in the file where "isinstance(x, BaseTask)" is # True. with the option to restrict to specific task class (or just run the # first one?)... lnst-ctl rewritten to do the same as manually running the # task from it's own python script
First do the second option - easier since we have this already, then refactor the controller to create the lnst controller for the first option.
Aliases lose meaning - they're parameters passed to the MyTask __init__, when using the lnst-ctl CLI, use "-- task_params"?? might not work for multiple tasks,
================================================
- Tester facing API, inside the test() method:
Host objects available in self.matched.selector_name:
class Host: #HostAPI??? name can change #attributes:
# dynamically filled object of Host attributes such as architecture and # so on. Use example in test() would look like this: # if host.params.arch == "x86": # I separated this into the "params" object so I can overwrite its # __getattr__ method and return None/UnknownParam exception for unknown # parameters, and to avoid name conflicts with other attributes params = object()
# dynamically filled object of NetDevice objects accessible directly as the # object attributes: # host.ifaces.eth0.set_ip(...) # I separated this into the "ifaces" object to avoid name conflicts with # other attributes # creation of new NetDevices should be possible through simple assignement: # m1.devs.new_team0 = TeamDevice(...) # assignement of an incompatible Type or to an existing Device object will # return an exception # assignment of None? or del devs.new_team0 to deconfigure the device? devs = object()
def run(what, bg=False, fail=False, timeout=60, path="", json=False, netns) # will run "what" on the remote host # "what" is either a Module object, or a string command that will be # executed as a bash command # "bg" when True, runs "what" on background - the run() call # immediately returns, and "timeout" is ignored, the background # process can be controlled through the returned Job object # "fail" if True then the Job is expected to fail, and will be reported # as PASSed if it does # "timeout" in seconds, determines how long to block test execution for # before killing the Job. Only when running in foreground # "path" changes the current working directory to the specified path # before "what" is executed and changes back after execution is # finished. # "tool" changes the current working directory to the directory of a # speficied test_tool before "what" is executed and changes back # after execution is finished. # !!!!!!! this is from the current API and i'm not yet sure how we # !!!!!!! want to handle those... so for now I'll keep it # "json" if True will attempt to parse the returned stdout of the Job # as json into a dictionary # "netns" Job will be run in the specified network namespace # Returns a Job object
def config(option, value) # copied from old API, provides a shortcut for "echo $value # >/proc/or/sys/path" # and returns the original value when the test is finished
def sync_resources(srcpath="", dstpath="", recursive=False) # copies the specified file from the controller to the specified # destination path, if recursive == True and srcpath refers to a # directory it copies the entire directory
def {enable, disable}_service(service) # copied from old API, enables or disables the specified service
def add_{bond, bridge,...}(params) # this is how we can currently dynamically create net devices on the # hosts. Even with the new assignment-based approach this could still, # be usefull, though the method would need to be dynamically created to # avoid useless work when adding a new netdev type. Something like: # add_device("name", "Type", params) which would then do # self.devs.name = TypeDevice(params) ??
def del_device(name) # removes the specified device, probably easier (more logical?) to do # this then "devs.name = None" and "del devs.name" would be unreliable
class Device: #DeviceAPI, InterfaceAPI? name can change... # attributes:
# dynamically created Device attributes such as driver and so on. Use # example in test() would look like this: # if host.devs.eth0.driver == "ixgbe": # achieved through rewriting of the __getattr__ method of the Device class # should return None or throw UnknownParam exception for unknown parameters # this should directly mirror the Device objects that are managed by the # InterfaceManager on the Slave # eg: driver = something mtu = something ips = [IpAddress, ...]
class Job: #ProcessAPI? name can change... #attributes:
# True if the Job finished, False if it's still running in the background finished = bool
# contains the result data returned by the Job, None for bash commands result = object
# contain the stdout and stderr generated by the job, None for Module Jobs stdout = "" stderr = ""
# simple True/False value indicating success/failure of the Job passed = bool
def wait(timeout=0): # for background jobs, will wait until the job finished # "timeout" in seconds, determines how long to wait for. After timeout # reached, nothing happens, status of the job can be checked with the # "finished" attribute. If timeout=0, then wait forever.
def kill(signalnum=signal.SIGKILL): # sends the specified signal to the process of the Job running in # background # "signalnum" the signal to be sent