From: Ondrej Lichtner olichtne@redhat.com
What follows is a big patch set that is the result of my work to port the phase3/ovs_dpdk_pvp recipe to Python. The ported recipe supports almost everything the old recipe did (except for result evaluation), and the patch set includes a significant refactoring of the code.

Instead of working with a hackish ssh tunnel to manipulate the guest, the patch set introduces a new feature that allows the tester to connect to an LNST Slave during test execution. This significantly improves working with the guest, since we can now safely wrap the testpmd process into a test module and get a comfortable interface for it. It also removes the recipe's dependency on the python paramiko library.

Another significant improvement is a better way of wrapping the TRex generator into a test module that avoids the tmux session we used previously, so the recipe no longer has that dependency either.

Finally, the patch set adds the ability to synchronize arbitrary classes from the lnst.RecipeCommon package to any slave and use them there. This gives the tester a very nice interface for extending slave functionality. In the OvS_DPDK_PvP recipe it is used to interface with libvirt on the slave machine, which means the controller can now run on any machine and still work with libvirt on any other slave, instead of the confusing forced binding we had before.
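For illustration, a rough sketch of how this looks from a recipe's test() method with the series applied (the host name, guest address and the exact LibvirtControl interface are just examples here):

    host = self.matched.host1

    # synchronize an lnst.RecipeCommon class to the slave and get a proxy for it
    from lnst.RecipeCommon.LibvirtControl import LibvirtControl
    libvirt_ctl = host.init_class(LibvirtControl)

    # connect to another slave (e.g. the guest) while the test is already running
    guest = self.ctl.connect_host("192.168.122.2", timeout=60)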
On top of these new features, the patch set also includes other smaller features, a lot of bug fixes and refactoring of some classes.
-Ondrej
Ondrej Lichtner (40):
  lnst.Common.DeviceError: define the DeviceReadOnly exception
  lnst.Devices.RemoteDevice: add caching capability
  lnst.Slave.NetTestSlave: refactor names of dev_*attr methods
  lnst.Device.Device: improve cleanup data storage
  lnst.Device.Device: raise DeviceDeleted exception on netlink updates
  lnst.Device.Device: add bus_info property
  lnst.Slave.Job: catch all exceptions from Test Modules
  lnst.Slave.Job: cleanup kill only running jobs
  move lnst.Common.TestModule to lnst.Tests.BaseTestModule, add wait_for_interrupt
  lnst.Common.Parameters: allow deletions for the Parameters class
  lnst.Controller.MachineMapper: sort interfaces in machine descriptions
  lnst.Controller.Machine: add mapped boolean
  lnst.Controller.MessageDispatcher: refactor wait_* methods
  lnst.Controller.Machine: expose init_connection as public method
  lnst.Controller.SlavePoolManager: enable machines without interfaces
  lnst.Controller.Machine: split set_recipe into prepare_machine and start_recipe
  lnst.Controller.Machine: move VirtualDevice cleanup to Controller
  lnst.Controller.Machine: refactor sending classes to Slaves
  lnst.Slave.NetTestSlave: track dynamic classes by module name as well
  lnst.Slave.NetTestSlave: create the dynamic RecipeCommon module
  lnst.Slave.NetTestSlave: support objects from dynamically received classes
  add lnst.Controller.SlaveObject
  add lnst.Controller.RecipeControl
  lnst.Controller.Host: expose the map_device api to the tester
  lnst.Controller.Requirements: add RecipeParam class
  lnst.Controller.RunSummaryFormatter: change format for list items
  lnst.Controller.Machine: small refactoring
  lnst.Slave.InterfaceManager: fix deleted device handling
  lnst.RecipeCommon.PerfResult: fix standard deviation calculation
  lnst.RecipeCommon.PerfResult: override std_deviation of PerfInterval and add string descrition
  lnst.RecipeCommon.Ping: add parameters to PingConf
  lnst.RecipeCommon.Perf: minor refactoring
  setup.py: use setuptools instead of distutils and improve package management
  add lnst.RecipeCommon.LibvirtControl
  add lnst.Tests.TRex
  add lnst.Tests.TestPMD
  add lnst.RecipeCommon.TRexMeasurementTool
  add lnst.Recipes.ENRT.OvS_DPDK_PvP
  lnst.Recipes.ENRT.BaseEnrtRecipe: fix indentation
  lnst.Slave.InterfaceManager: disable bulk mode after device creation
 lnst/Common/DeviceError.py | 3 +
 lnst/Common/Parameters.py | 3 +
 lnst/Controller/Controller.py | 41 +-
 lnst/Controller/Host.py | 8 +-
 lnst/Controller/Job.py | 2 +-
 lnst/Controller/Machine.py | 159 +++---
 lnst/Controller/MachineMapper.py | 2 +-
 lnst/Controller/MessageDispatcher.py | 116 +++--
 lnst/Controller/Recipe.py | 13 +-
 lnst/Controller/RecipeControl.py | 64 +++
 lnst/Controller/Requirements.py | 47 ++-
 lnst/Controller/RunSummaryFormatter.py | 14 +-
 lnst/Controller/SlaveObject.py | 41 ++
 lnst/Controller/SlavePoolManager.py | 7 +-
 lnst/Controller/__init__.py | 2 +-
 lnst/Devices/Device.py | 48 ++-
 lnst/Devices/RemoteDevice.py | 36 +-
 lnst/RecipeCommon/LibvirtControl.py | 41 ++
 lnst/RecipeCommon/Perf.py | 90 ++--
 lnst/RecipeCommon/PerfResult.py | 10 +-
 lnst/RecipeCommon/Ping.py | 35 +-
 lnst/RecipeCommon/TRexMeasurementTool.py | 87 ++++
 lnst/Recipes/ENRT/BaseEnrtRecipe.py | 14 +-
 lnst/Recipes/ENRT/OvS_DPDK_PvP.py | 399 ++++++++++++++++++
 lnst/Slave/InterfaceManager.py | 14 +-
 lnst/Slave/Job.py | 6 +-
 lnst/Slave/NetTestSlave.py | 81 +++-
 .../TestModule.py => Tests/BaseTestModule.py} | 20 +
 lnst/Tests/Iperf.py | 2 +-
 lnst/Tests/Netperf.py | 3 +-
 lnst/Tests/Ping.py | 2 +-
 lnst/Tests/TRex.py | 159 +++++++
 lnst/Tests/TestPMD.py | 47 +++
 setup.py | 6 +-
 34 files changed, 1356 insertions(+), 266 deletions(-)
 create mode 100644 lnst/Controller/RecipeControl.py
 create mode 100644 lnst/Controller/SlaveObject.py
 create mode 100644 lnst/RecipeCommon/LibvirtControl.py
 create mode 100644 lnst/RecipeCommon/TRexMeasurementTool.py
 create mode 100644 lnst/Recipes/ENRT/OvS_DPDK_PvP.py
 rename lnst/{Common/TestModule.py => Tests/BaseTestModule.py} (85%)
 create mode 100644 lnst/Tests/TRex.py
 create mode 100644 lnst/Tests/TestPMD.py
From: Ondrej Lichtner olichtne@redhat.com
This exception will be raised by the RemoteDevice class when the instance has been switched into the caching read-only mode and the user attempts to call a method or assign a value to an attribute.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/DeviceError.py | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/lnst/Common/DeviceError.py b/lnst/Common/DeviceError.py index b9fde59..c89541e 100644 --- a/lnst/Common/DeviceError.py +++ b/lnst/Common/DeviceError.py @@ -27,5 +27,8 @@ class DeviceNotFound(DeviceError): class DeviceConfigError(DeviceError): pass
+class DeviceReadOnly(DeviceError): + pass + class DeviceConfigValueError(DeviceConfigError): pass
From: Ondrej Lichtner olichtne@redhat.com
RemoteDevice objects, which act on the Controller as proxies for the real Device objects on Slaves, can now be switched into a cached read-only mode.

In this mode, the RemoteDevice object stores all the property values of the Slave Device locally on the Controller. When the tester accesses a property, the cached value is returned and no proxy call to the Slave happens.

Method calls and setting new values on a property are disabled, since this is a read-only mode. Unfortunately this means that methods that don't change the device state (e.g. ips_filter) are also inaccessible. On the other hand, it means that the cached RemoteDevice object can be used even if the Device doesn't exist on the Slave anymore.
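For illustration, the intended usage from the Controller side looks roughly like this (the host and device names are examples):

    dev = host.eth0                     # RemoteDevice proxy of a Device on a Slave

    dev.enable_readonly_cache()         # snapshot all property values locally
    mac = dev.hwaddr                    # answered from the cache, no RPC to the Slave
    # dev.ips_filter(...) or dev.mtu = 1280 would now raise DeviceReadOnly

    dev.update_readonly_cache()         # refresh the snapshot
    dev.disable_readonly_cache()        # back to live proxy calls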
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Devices/RemoteDevice.py | 32 ++++++++++++++++++++++++++++++-- 1 file changed, 30 insertions(+), 2 deletions(-)
diff --git a/lnst/Devices/RemoteDevice.py b/lnst/Devices/RemoteDevice.py index ddd47f1..8a3f0cf 100644 --- a/lnst/Devices/RemoteDevice.py +++ b/lnst/Devices/RemoteDevice.py @@ -13,7 +13,7 @@ olichtne@redhat.com (Ondrej Lichtner)
from copy import deepcopy from lnst.Devices.Device import Device -from lnst.Common.DeviceError import DeviceDeleted +from lnst.Common.DeviceError import DeviceDeleted, DeviceReadOnly
def remotedev_decorator(cls): def func(*args, **kwargs): @@ -36,6 +36,10 @@ class RemoteDevice(object): self._machine = None self.ifindex = None self.deleted = False + + self._cache = {} + self._cached = False + self._inited = True
def __deepcopy__(self, memo): @@ -49,6 +53,20 @@ class RemoteDevice(object): newone._inited = deepcopy(self._inited, memo) return newone
+ def enable_readonly_cache(self): + self._cache = {} + for name, val in self: + self._cache[name] = val + self._cached = True + + def disable_readonly_cache(self): + self._cache = {} + self._cached = False + + def update_readonly_cache(self): + self.disable_readonly_cache() + self.enable_readonly_cache() + @property def _dev_cls(self): return self.__dev_cls @@ -85,16 +103,22 @@ class RemoteDevice(object):
attr = getattr(self._dev_cls, name)
- if self.deleted: + if self.deleted and not self._cached: raise DeviceDeleted("This device was deleted on the slave and does not exist anymore.")
if callable(attr): + if self._cached: + raise DeviceReadOnly("Can't call methods when in ReadOnly cache mode.") + def dev_method(*args, **kwargs): return self._machine.rpc_call("dev_method", self.ifindex, name, args, kwargs, netns=self.netns) return dev_method else: + if self._cached: + return self._cache[name] + return self._machine.rpc_call("dev_attr", self.ifindex, name, netns=self.netns)
@@ -104,6 +128,10 @@ class RemoteDevice(object):
try: getattr(self._dev_cls, name) + + if self._cached: + raise DeviceReadOnly("Can't set attributes when in ReadOnly cache mode.") + return self._machine.rpc_call("dev_set_attr", self.ifindex, name, value, netns=self.netns) except AttributeError:
From: Ondrej Lichtner olichtne@redhat.com
getattr and setattr are commonly used so we should be consistent with these names.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Devices/RemoteDevice.py | 4 ++-- lnst/Slave/NetTestSlave.py | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/lnst/Devices/RemoteDevice.py b/lnst/Devices/RemoteDevice.py index 8a3f0cf..00d937b 100644 --- a/lnst/Devices/RemoteDevice.py +++ b/lnst/Devices/RemoteDevice.py @@ -119,7 +119,7 @@ class RemoteDevice(object): if self._cached: return self._cache[name]
- return self._machine.rpc_call("dev_attr", self.ifindex, name, + return self._machine.rpc_call("dev_getattr", self.ifindex, name, netns=self.netns)
def __setattr__(self, name, value): @@ -132,7 +132,7 @@ class RemoteDevice(object): if self._cached: raise DeviceReadOnly("Can't set attributes when in ReadOnly cache mode.")
- return self._machine.rpc_call("dev_set_attr", self.ifindex, name, value, + return self._machine.rpc_call("dev_setattr", self.ifindex, name, value, netns=self.netns) except AttributeError: return super(RemoteDevice, self).__setattr__(name, value) diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py index 795ec1d..1bb8f23 100644 --- a/lnst/Slave/NetTestSlave.py +++ b/lnst/Slave/NetTestSlave.py @@ -174,11 +174,11 @@ class SlaveMethods:
return method(*args, **kwargs)
- def dev_attr(self, ifindex, name): + def dev_getattr(self, ifindex, name): dev = self._if_manager.get_device(ifindex) return getattr(dev, name)
- def dev_set_attr(self, ifindex, name, value): + def dev_setattr(self, ifindex, name, value): dev = self._if_manager.get_device(ifindex) return setattr(dev, name, value)
From: Ondrej Lichtner olichtne@redhat.com
The storing of cleanup data is now triggered from the NetTestSlave instead of the Device itself. This makes sure that the Device is returned to the state it was in at the start of the recipe instead of at its creation. This matters for virtually created devices, which can sometimes be created with duplicated default values (e.g. name) that conflict during deconfiguration. It also helps the general sanity of recipe runs: if someone made manual changes between recipes while the slave was running, the old deconfiguration would have reverted them.

Finally, instead of storing the cleanup variables individually, store them in a single dictionary. This should also simplify the implementation of the cleanup method and make it easier to add more attributes when necessary.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Devices/Device.py | 37 ++++++++++++++++++++++++------------- lnst/Slave/NetTestSlave.py | 6 ++++++ 2 files changed, 30 insertions(+), 13 deletions(-)
diff --git a/lnst/Devices/Device.py b/lnst/Devices/Device.py index 5ee3a17..3782c95 100644 --- a/lnst/Devices/Device.py +++ b/lnst/Devices/Device.py @@ -62,6 +62,8 @@ class Device(object): self._nl_update = {} self._bulk_enabled = False
+ self._cleanup_data = None + def _set_nl_attr(self, msg, value, name): msg[name] = value
@@ -178,7 +180,6 @@ class Device(object): self.ifindex = nl_msg['index']
self._nl_msg = nl_msg - self._store_cleanup_data()
def _update_netlink(self, nl_msg): if self.ifindex != nl_msg['index']: @@ -255,22 +256,32 @@ class Device(object): for egress_pref in egress_prefs: exec_cmd("tc filter del dev %s pref %s" % (self.name, egress_pref))
- def _store_cleanup_data(self): + def store_cleanup_data(self): """Stores initial configuration for later cleanup""" - self._orig_mtu = self.mtu - self._orig_name = self.name - self._orig_hwaddr = self.hwaddr + if self._cleanup_data: + logging.debug("Previous cleanup data present, possible deconfigration failure in the past?") + + self._cleanup_data = { + "mtu": self.mtu, + "name": self.name, + "hwaddr": self.hwaddr}
- def _restore_original_data(self): + def restore_original_data(self): """Restores initial configuration from stored values""" - if self.mtu != self._orig_mtu: - self.mtu = self._orig_mtu + if not self._cleanup_data: + logging.debug("No cleanup data present") + return + + if self.mtu != self._cleanup_data["mtu"]: + self.mtu = self._cleanup_data["mtu"] + + if self.name != self._cleanup_data["name"]: + self.name = self._cleanup_data["name"]
- if self.name != self._orig_name: - self.name = self._orig_name + if self.hwaddr != self._cleanup_data["hwaddr"]: + self.hwaddr = self._cleanup_data["hwaddr"]
- if self.hwaddr != self._orig_hwaddr: - self.hwaddr = self._orig_hwaddr + self._cleanup_data = None
def _create(self): """Creates a new netdevice of the corresponding type @@ -302,7 +313,7 @@ class Device(object): self.ip_flush() self._clear_tc_qdisc() self._clear_tc_filters() - self._restore_original_data() + self.restore_original_data()
@property def link_header_type(self): diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py index 1bb8f23..d99fc54 100644 --- a/lnst/Slave/NetTestSlave.py +++ b/lnst/Slave/NetTestSlave.py @@ -129,6 +129,12 @@ class SlaveMethods: logging.warning("Usage of NM is disabled!") logging.warning("=============================================")
+ for device in self._if_manager.get_devices(): + try: + device.store_cleanup_data() + except DeviceDisabled: + pass + return True
def bye(self):
From: Ondrej Lichtner olichtne@redhat.com
If the Device was deleted, the _update_netlink method should raise an exception. This was probably just forgotten previously...
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Devices/Device.py | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/lnst/Devices/Device.py b/lnst/Devices/Device.py index 3782c95..80abdcf 100644 --- a/lnst/Devices/Device.py +++ b/lnst/Devices/Device.py @@ -182,6 +182,9 @@ class Device(object): self._nl_msg = nl_msg
def _update_netlink(self, nl_msg): + if getattr(self, "_deleted"): + raise DeviceDeleted("Device was deleted.") + if self.ifindex != nl_msg['index']: msg = "ifindex of netlink message (%s) doesn't match "\ "the device's (%s)." % (nl_msg['index'], self.ifindex)
From: Ondrej Lichtner olichtne@redhat.com
Returns the bus info (pci address) of the specific Device as reported by ethtool.
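For illustration, the kind of value the property returns and how a DPDK-oriented recipe might use it (the device name, example address and the driverctl command are illustrative, not taken from this series):

    pci_addr = host.eth0.bus_info     # e.g. "0000:04:00.0", or "" on failure
    host.run("driverctl set-override {} vfio-pci".format(pci_addr))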
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Devices/Device.py | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/lnst/Devices/Device.py b/lnst/Devices/Device.py index 80abdcf..20cddcf 100644 --- a/lnst/Devices/Device.py +++ b/lnst/Devices/Device.py @@ -469,6 +469,14 @@ class Device(object): """ return self._nl_msg.get_attr("IFLA_STATS64")
+ @property + def bus_info(self): + try: + return ethtool.get_businfo(self.name) + except IOError as e: + log_exc_traceback() + return "" + def _clear_ips(self): self._ip_addrs = []
From: Ondrej Lichtner olichtne@redhat.com
If a Test Module crashes from any exception, it should be logged and the main slave process notified regardless of the exception type.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/Job.py | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/lnst/Slave/Job.py b/lnst/Slave/Job.py index 26c5493..a953b0d 100644 --- a/lnst/Slave/Job.py +++ b/lnst/Slave/Job.py @@ -17,7 +17,6 @@ import signal import logging import multiprocessing from lnst.Common.JobError import JobError -from lnst.Common.TestModule import TestModuleError from lnst.Common.ExecCmd import exec_cmd, ExecCmdFail from lnst.Common.ConnectionHandler import send_data from lnst.Common.Logs import log_exc_traceback @@ -257,7 +256,7 @@ class ModuleJob(GenericJob): try: self._result["passed"] = self._what["module"].run() self._result["res_data"] = self._what["module"]._get_res_data() - except TestModuleError as e: + except Exception as e: log_exc_traceback() self._result["passed"] = False self._result["type"] = "module_exception"
From: Ondrej Lichtner olichtne@redhat.com
No need to kill finished jobs, this just spams the debug logs with useless messages.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/Job.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lnst/Slave/Job.py b/lnst/Slave/Job.py index a953b0d..8790487 100644 --- a/lnst/Slave/Job.py +++ b/lnst/Slave/Job.py @@ -48,7 +48,8 @@ class JobContext(object):
def _kill_all_jobs(self): for id in self._dict: - self._dict[id].kill(sig=signal.SIGKILL) + if not self._dict[id]._finished: + self._dict[id].kill(sig=signal.SIGKILL)
def cleanup(self): logging.debug("Cleaning up leftover processes.")
From: Ondrej Lichtner olichtne@redhat.com
The BaseTestModule class should be part of the lnst.Tests package, which gets dynamically sent to the slave when required.
I also added the wait_for_interrupt method to the BaseTestModule class. This is functionality that we've had in the old TestModule classes but didn't yet implement for the new Python recipes.
Even though this isn't any sort of tester API or public code, I think it's a piece of code that is going to be reused a lot in many of our test modules, so it makes sense to add it to the base class.
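For illustration, a minimal sketch of how a test module can use the new method (the module itself is hypothetical, only wait_for_interrupt comes from this patch):

    from lnst.Tests.BaseTestModule import BaseTestModule

    class BackgroundTool(BaseTestModule):
        def run(self):
            # ... start a long-running process here ...

            # block until the controller interrupts the job (SIGINT)
            self.wait_for_interrupt()

            # ... collect results and clean up ...
            return True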
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/Job.py | 2 +-
 lnst/Controller/Machine.py | 2 +-
 lnst/Controller/MessageDispatcher.py | 2 +-
 lnst/Slave/NetTestSlave.py | 8 ++++++--
 .../TestModule.py => Tests/BaseTestModule.py} | 20 +++++++++++++++++++
 lnst/Tests/Iperf.py | 2 +-
 lnst/Tests/Netperf.py | 3 +--
 lnst/Tests/Ping.py | 2 +-
 8 files changed, 32 insertions(+), 9 deletions(-)
 rename lnst/{Common/TestModule.py => Tests/BaseTestModule.py} (85%)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py index 4416d9a..f1feae6 100644 --- a/lnst/Controller/Job.py +++ b/lnst/Controller/Job.py @@ -14,7 +14,7 @@ olichtne@redhat.com (Ondrej Lichtner) import logging import signal from lnst.Common.JobError import JobError -from lnst.Common.TestModule import BaseTestModule +from lnst.Tests.BaseTestModule import BaseTestModule from lnst.Controller.RecipeResults import ResultLevel
class Job(object): diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index f51798c..11055c8 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -17,7 +17,6 @@ import sys import signal from lnst.Common.Utils import sha256sum from lnst.Common.Utils import check_process_running -from lnst.Common.TestModule import BaseTestModule from lnst.Common.Version import lnst_version from lnst.Controller.Common import ControllerError from lnst.Controller.CtlSecSocket import CtlSecSocket @@ -26,6 +25,7 @@ from lnst.Devices import device_classes from lnst.Devices.Device import Device from lnst.Devices.RemoteDevice import RemoteDevice from lnst.Devices.VirtualDevice import VirtualDevice +from lnst.Tests.BaseTestModule import BaseTestModule
# conditional support for libvirt if check_process_running("libvirtd"): diff --git a/lnst/Controller/MessageDispatcher.py b/lnst/Controller/MessageDispatcher.py index abd3b21..6b12d68 100644 --- a/lnst/Controller/MessageDispatcher.py +++ b/lnst/Controller/MessageDispatcher.py @@ -18,11 +18,11 @@ import logging import copy from lnst.Common.ConnectionHandler import send_data from lnst.Common.ConnectionHandler import ConnectionHandler -from lnst.Common.TestModule import BaseTestModule from lnst.Common.Parameters import Parameters, DeviceParam from lnst.Common.DeviceRef import DeviceRef from lnst.Controller.Common import ControllerError from lnst.Devices.RemoteDevice import RemoteDevice +from lnst.Tests.BaseTestModule import BaseTestModule
def deviceref_to_remote_device(machine, obj): if isinstance(obj, DeviceRef): diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py index d99fc54..f98db6c 100644 --- a/lnst/Slave/NetTestSlave.py +++ b/lnst/Slave/NetTestSlave.py @@ -41,7 +41,6 @@ from lnst.Common.DeviceRef import DeviceRef from lnst.Common.LnstError import LnstError from lnst.Common.DeviceError import DeviceDeleted, DeviceDisabled from lnst.Common.DeviceError import DeviceConfigValueError -from lnst.Common.TestModule import BaseTestModule from lnst.Common.Parameters import Parameters, DeviceParam from lnst.Common.IpAddress import ipaddress from lnst.Common.Version import lnst_version @@ -889,6 +888,11 @@ def device_to_deviceref(obj): return obj
def deviceref_to_device(if_manager, obj): + try: + from lnst.Tests.BaseTestModule import BaseTestModule + except: + BaseTestModule = None + if isinstance(obj, DeviceRef): dev = if_manager.get_device(obj.ifindex) return dev @@ -911,7 +915,7 @@ def deviceref_to_device(if_manager, obj): for param_name, param in obj: setattr(obj, param_name, deviceref_to_device(if_manager, param)) return obj - elif isinstance(obj, BaseTestModule): + elif BaseTestModule is not None and isinstance(obj, BaseTestModule): obj.params = deviceref_to_device(if_manager, obj.params) return obj else: diff --git a/lnst/Common/TestModule.py b/lnst/Tests/BaseTestModule.py similarity index 85% rename from lnst/Common/TestModule.py rename to lnst/Tests/BaseTestModule.py index 37d6856..d96c1a4 100644 --- a/lnst/Common/TestModule.py +++ b/lnst/Tests/BaseTestModule.py @@ -10,14 +10,22 @@ __author__ = """ olichtne@redhat.com (Ondrej Lichtner) """
+import time import copy +import signal from lnst.Common.Parameters import Parameters, Param from lnst.Common.LnstError import LnstError
+from lnst.Common.Logs import log_exc_traceback + class TestModuleError(LnstError): """Exception used by BaseTestModule and derived classes""" pass
+class InterruptException(TestModuleError): + """Exception used to handle SIGINT waiting""" + pass + class BaseTestModule(object): """Base class for test modules
@@ -79,5 +87,17 @@ class BaseTestModule(object): def run(self): raise NotImplementedError("Method 'run' MUST be defined")
+ def wait_for_interrupt(self): + def handler(signum, frame): + raise InterruptException() + + try: + old_handler = signal.signal(signal.SIGINT, handler) + signal.pause() + except InterruptException: + pass + finally: + signal.signal(signal.SIGINT, old_handler) + def _get_res_data(self): return self._res_data diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py index ec85f3b..213ee04 100644 --- a/lnst/Tests/Iperf.py +++ b/lnst/Tests/Iperf.py @@ -5,10 +5,10 @@ import signal import time import subprocess import json -from lnst.Common.TestModule import BaseTestModule, TestModuleError from lnst.Common.Parameters import IntParam, IpParam, StrParam, Param, BoolParam from lnst.Common.Parameters import HostnameParam from lnst.Common.Utils import is_installed +from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError
class IperfBase(BaseTestModule): def run(self): diff --git a/lnst/Tests/Netperf.py b/lnst/Tests/Netperf.py index 70d4b6d..9e9dd5e 100644 --- a/lnst/Tests/Netperf.py +++ b/lnst/Tests/Netperf.py @@ -5,10 +5,9 @@ import signal import time import subprocess from lnst.Common.Parameters import IntParam, IpParam, StrParam, Param -from lnst.Common.TestModule import BaseTestModule, TestModuleError from lnst.Common.ShellProcess import ShellProcess from lnst.Common.Utils import is_installed, std_deviation - +from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError
class Netserver(BaseTestModule): bind = IpParam(mandatory=True) diff --git a/lnst/Tests/Ping.py b/lnst/Tests/Ping.py index aacad05..039cb7e 100644 --- a/lnst/Tests/Ping.py +++ b/lnst/Tests/Ping.py @@ -2,9 +2,9 @@ import re import logging import subprocess from lnst.Common.Parameters import IntParam, FloatParam, IpParam, DeviceOrIpParam -from lnst.Common.TestModule import BaseTestModule, TestModuleError from lnst.Common.ExecCmd import exec_cmd from lnst.Common.Utils import is_installed +from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError
class Ping(BaseTestModule): """Port of old IcmpPing test modules"""
From: Ondrej Lichtner olichtne@redhat.com
The Parameters class serves as a container for instances of the various Param objects. It should support deletions when required.
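A minimal sketch of what this enables (the parameter name is just an example):

    from lnst.Common.Parameters import Parameters

    params = Parameters()
    params.duration = 60        # stored in the internal _attrs dictionary
    del params.duration         # now supported through __delattr__
    assert "duration" not in params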
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/Parameters.py | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/lnst/Common/Parameters.py b/lnst/Common/Parameters.py index 66380d2..7cfc451 100644 --- a/lnst/Common/Parameters.py +++ b/lnst/Common/Parameters.py @@ -132,6 +132,9 @@ class Parameters(object): else: self._attrs[name] = val
+ def __delattr__(self, name): + del self._attrs[name] + def __contains__(self, name): return name in self._attrs
From: Ondrej Lichtner olichtne@redhat.com
When formatting the match description we sort the machines; we should also sort the interfaces to improve readability a little...
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/MachineMapper.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lnst/Controller/MachineMapper.py b/lnst/Controller/MachineMapper.py index 13488eb..2bc6ade 100644 --- a/lnst/Controller/MachineMapper.py +++ b/lnst/Controller/MachineMapper.py @@ -23,7 +23,7 @@ def format_match_description(match): output.append(" Setup is using virtual machines.") for m_id, m in sorted(match["machines"].iteritems()): output.append(" host "{}" uses "{}"".format(m_id, m["target"])) - for if_id, match in m["interfaces"].iteritems(): + for if_id, match in sorted(m["interfaces"].iteritems()): pool_id = match["target"] output.append(" interface "{}" matched to "{}"". format(if_id, pool_id))
From: Ondrej Lichtner olichtne@redhat.com
This boolean attribute signifies whether the specific Machine object is mapped and in use by the currently executed recipe. The message dispatcher uses it to decide whether to ignore a disconnect of a slave that is not used by the recipe.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Controller.py | 2 ++ lnst/Controller/Machine.py | 7 +++++++ 2 files changed, 9 insertions(+)
diff --git a/lnst/Controller/Controller.py b/lnst/Controller/Controller.py index f306319..00b15d9 100644 --- a/lnst/Controller/Controller.py +++ b/lnst/Controller/Controller.py @@ -155,6 +155,7 @@ class Controller(object):
machine.set_id(m_id) self._prepare_machine(machine, recipe) + machine.set_mapped(True)
for if_id, i in m["interfaces"].items(): host._map_device(if_id, i) @@ -183,6 +184,7 @@ class Controller(object): machine.cleanup() #clean-up slave logger self._log_ctl.remove_slave(m_id) + machine.set_mapped(False)
self._machines.clear()
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index 11055c8..1265265 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -49,6 +49,7 @@ class Machine(object): libvirt_domain=None, rpcport=None, security=None): self._id = m_id self._hostname = hostname + self._mapped = False self._ctl_config = ctl_config self._slave_desc = None self._connection = None @@ -97,6 +98,12 @@ class Machine(object): def get_id(self): return self._id
+ def set_mapped(self, new_value): + self._mapped = new_value + + def get_mapped(self): + return self._mapped + def get_configuration(self): configuration = {} configuration["id"] = self._id
From: Ondrej Lichtner olichtne@redhat.com
This commit refactors the wait_for_condition method to improve maintainability and readability. It also removes the unused wait_for_job_finish method and unifies the handling of machine disconnects. Timeout handling is now part of this method instead of the Machine class, where it didn't make as much sense...

If a slave that isn't mapped (and is therefore not used by the currently running recipe) disconnects, the information is logged but the application no longer dies with an exception.

This includes a refactoring of the wait_for_* methods of the Machine class, which now just delegate this functionality to the message dispatcher.
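The resulting calling convention, sketched for illustration (the job object and the timeout value are example stand-ins):

    def condition():
        return job.finished

    # True when the condition was met, False if the timeout (in seconds) expired;
    # messages from the slaves keep being processed while waiting
    passed = self._msg_dispatcher.wait_for_condition(condition, timeout=30)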
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Machine.py | 67 +++++----------- lnst/Controller/MessageDispatcher.py | 114 +++++++++++++++++---------- 2 files changed, 90 insertions(+), 91 deletions(-)
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index 1265265..32d933f 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -330,10 +330,6 @@ class Machine(object): self.cleanup_devices() raise
- def _timeout_handler(self, signum, frame): - msg = "Timeout expired on machine %s" % self.get_id() - raise MachineError(msg) - def _get_base_classes(self, cls): new_bases = [cls] + list(cls.__bases__) bases = [] @@ -379,59 +375,34 @@ class Machine(object): return res
def wait_for_job(self, job, timeout): - res = True if job.id not in self._jobs: raise MachineError("No job '%s' running on Machine %s" % (job.id, self._id))
- prev_handler = signal.signal(signal.SIGALRM, self._timeout_handler) - signal.alarm(timeout) - - try: - if timeout > 0: - logging.info("Waiting for Job %d on Host %s for %d seconds." % - (job.id, self._id, timeout)) - elif timeout == 0: - logging.info("Waiting for Job %d on Host %s." % - (job.id, self._id)) + if timeout > 0: + logging.info("Waiting for Job %d on Host %s for %d seconds." % + (job.id, self._id, timeout)) + elif timeout == 0: + logging.info("Waiting for Job %d on Host %s." % + (job.id, self._id))
- def condition(): - return job.finished + def condition(): + return job.finished
- self._msg_dispatcher.wait_for_condition(condition) - except MachineError as exc: - logging.error(str(exc)) - res = False - - signal.alarm(0) - signal.signal(signal.SIGALRM, prev_handler) - - return res + return self._msg_dispatcher.wait_for_condition(condition, timeout)
def wait_for_tmp_devices(self, timeout): - res = False - prev_handler = signal.signal(signal.SIGALRM, self._timeout_handler) - signal.alarm(timeout) + if timeout > 0: + logging.info("Waiting for Device creation Host %s for %d seconds." % + (self._id, timeout)) + elif timeout == 0: + logging.info("Waiting for Device creation on Host %s." % + (self._id))
- try: - if timeout > 0: - logging.info("Waiting for Device creation Host %s for %d seconds." % - (self._id, timeout)) - elif timeout == 0: - logging.info("Waiting for Device creation on Host %s." % - (self._id)) - - def condition(): - return len(self._tmp_device_database) <= 0 - - self._msg_dispatcher.wait_for_condition(condition) - except MachineError as exc: - logging.error(str(exc)) - res = False - - signal.alarm(0) - signal.signal(signal.SIGALRM, prev_handler) - return res + def condition(): + return len(self._tmp_device_database) <= 0 + + return self._msg_dispatcher.wait_for_condition(condition, timeout)
def job_finished(self, msg): job_id = msg["job_id"] diff --git a/lnst/Controller/MessageDispatcher.py b/lnst/Controller/MessageDispatcher.py index 6b12d68..222bc80 100644 --- a/lnst/Controller/MessageDispatcher.py +++ b/lnst/Controller/MessageDispatcher.py @@ -16,6 +16,7 @@ olichtne@redhat.com (Ondrej Lichtner)
import logging import copy +import signal from lnst.Common.ConnectionHandler import send_data from lnst.Common.ConnectionHandler import ConnectionHandler from lnst.Common.Parameters import Parameters, DeviceParam @@ -81,6 +82,13 @@ def remote_device_to_deviceref(obj): class ConnectionError(ControllerError): pass
+class WaitTimeoutError(ControllerError): + pass + +def _timeout_handler(signum, frame): + msg = "Timeout expired" + raise WaitTimeoutError(msg) + class MessageDispatcher(ConnectionHandler): def __init__(self, log_ctl): super(MessageDispatcher, self).__init__() @@ -104,9 +112,6 @@ class MessageDispatcher(ConnectionHandler): connected_slaves = self._connection_mapping.keys()
messages = self.check_connections() - - remaining_slaves = self._connection_mapping.keys() - for msg in messages: if msg[1]["type"] == "result" and msg[0] == machine: if result is not None: @@ -117,61 +122,69 @@ class MessageDispatcher(ConnectionHandler): else: self._process_message(msg)
+ remaining_slaves = self._connection_mapping.keys() if connected_slaves != remaining_slaves: - disconnected_slaves = set(connected_slaves) -\ - set(remaining_slaves) - msg = "Slaves " + str(list(disconnected_slaves)) + \ - " disconnected from the controller." - raise ConnectionError(msg) + self._handle_disconnects(set(connected_slaves)- + set(remaining_slaves))
if result is not None: return deviceref_to_remote_device(machine, result["result"])
- def wait_for_job_finish(self, job): - def condition_check(): - return job.finished - self.wait_for_condition(condition_check) - return True - - def wait_for_condition(self, condition_check): - wait = True - while wait: - connected_slaves = self._connection_mapping.keys() - - messages = self.check_connections(timeout=1) - - remaining_slaves = self._connection_mapping.keys() - - for msg in messages: - self._process_message(msg) - wait = wait and not condition_check() - - wait = wait and not condition_check() - - if connected_slaves != remaining_slaves: - disconnected_slaves = set(connected_slaves) -\ - set(remaining_slaves) - msg = "Slaves " + str(list(disconnected_slaves)) + \ - " disconnected from the controller." - raise ConnectionError(msg) - return True + def wait_for_condition(self, condition_check, timeout=0): + res = True + prev_handler = signal.signal(signal.SIGALRM, _timeout_handler) + + def condition_wrapper(): + res = condition_check() + if res: + signal.alarm(0) + signal.signal(signal.SIGALRM, prev_handler) + logging.debug("Condition passed, disabling timeout alarm") + return res + + try: + signal.alarm(timeout) + + wait = True + while wait: + connected_slaves = self._connection_mapping.keys() + messages = self.check_connections(timeout=1) + for msg in messages: + try: + self._process_message(msg) + wait = wait and not condition_wrapper() + except WaitTimeoutError as exc: + logging.error("Waiting for condition timed out!") + res = False + wait = False + + wait = wait and not condition_wrapper() + + remaining_slaves = self._connection_mapping.keys() + if connected_slaves != remaining_slaves: + self._handle_disconnects(set(connected_slaves)- + set(remaining_slaves)) + except WaitTimeoutError as exc: + logging.error("Waiting for condition timed out!") + res = False + finally: + signal.alarm(0) + signal.signal(signal.SIGALRM, prev_handler) + + return res
def handle_messages(self): connected_slaves = self._connection_mapping.keys()
messages = self.check_connections()
- remaining_slaves = self._connection_mapping.keys() - for msg in messages: self._process_message(msg)
+ remaining_slaves = self._connection_mapping.keys() if connected_slaves != remaining_slaves: - disconnected_slaves = set(connected_slaves) -\ - set(remaining_slaves) - msg = "Slaves " + str(list(disconnected_slaves)) + \ - " disconnected from the controller." - raise ConnectionError(msg) + self._handle_disconnects(set(connected_slaves)- + set(remaining_slaves)) return True
def _process_message(self, message): @@ -196,6 +209,21 @@ class MessageDispatcher(ConnectionHandler): msg = "Unknown message type: %s" % message[1]["type"] raise ConnectionError(msg)
+ def _handle_disconnects(self, disconnected_slaves): + disconnected_slaves = set(disconnected_slaves) + for slave in list(disconnected_slaves): + if not slave.get_mapped(): + logging.warn("Slave {} soft-disconnected from the " + "controller.".format(slave.get_id())) + disconnected_slaves.remove(slave) + + if len(disconnected_slaves) > 0: + disconnected_names = [x.get_id() + for x in disconnected_slaves] + msg = "Slaves " + str(list(disconnected_names)) + \ + " hard-disconnected from the controller." + raise ConnectionError(msg) + def disconnect_slave(self, machine): soc = self.get_connection(machine) self.remove_connection(soc)
From: Ondrej Lichtner olichtne@redhat.com
The init_connection method is now public and the Machine object no longer initializes its own connection.

Instead, the connection is explicitly initialized by the SlavePoolManager after the Machine object is created. This will later be used to expose an API that lets the tester connect to slave machines during test execution.
The method also gained a timeout parameter to limit the socket creation duration when relevant.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Machine.py | 7 +++---- lnst/Controller/SlavePoolManager.py | 1 + 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index 32d933f..bdfbc1e 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -87,8 +87,6 @@ class Machine(object):
self._initns = None
- self._init_connection() - def set_id(self, new_id): self._id = new_id
@@ -201,7 +199,7 @@ class Machine(object):
return self._msg_dispatcher.send_message(self, msg)
- def _init_connection(self): + def init_connection(self, timeout=None): """ Initialize the slave connection
This will connect to the Slave, get it's description (should be @@ -212,7 +210,8 @@ class Machine(object): m_id = self._id
logging.info("Connecting to RPC on machine %s (%s)", m_id, hostname) - connection = CtlSecSocket(socket.create_connection((hostname, port))) + connection = CtlSecSocket(socket.create_connection((hostname, port), + timeout)) connection.handshake(self._security)
self._msg_dispatcher.add_slave(self, connection) diff --git a/lnst/Controller/SlavePoolManager.py b/lnst/Controller/SlavePoolManager.py index ea1ade9..eb6ae52 100644 --- a/lnst/Controller/SlavePoolManager.py +++ b/lnst/Controller/SlavePoolManager.py @@ -74,6 +74,7 @@ class SlavePoolManager(object): pool[m_id] = Machine(m_id, hostname, self._msg_dispatcher, ctl_config, libvirt_domain, rpc_port, m_spec["security"]) + pool[m_id].init_connection() #TODO check if all described devices are available
logging.info("Finished loading pools.")
From: Ondrej Lichtner olichtne@redhat.com
Previously we only allowed virtual machines to have no interfaces, since they would later receive dynamically created devices. However, we now want testers to be able to connect to slaves during test execution and to manipulate NIC device mapping manually, so this restriction is no longer useful.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/SlavePoolManager.py | 6 ------ 1 file changed, 6 deletions(-)
diff --git a/lnst/Controller/SlavePoolManager.py b/lnst/Controller/SlavePoolManager.py index eb6ae52..e8ff4a6 100644 --- a/lnst/Controller/SlavePoolManager.py +++ b/lnst/Controller/SlavePoolManager.py @@ -262,12 +262,6 @@ class SlavePoolManager(object): raise PoolManagerError(msg, iface)
machine_spec["interfaces"][if_id] = iface_spec - else: - if "libvirt_domain" not in machine_spec["params"]: - msg = "Machine '%s' has no testing interfaces. " \ - "This setup is supported only for virtual slaves." \ - % m_id - raise PoolManagerError(msg, machine_xml_data)
machine_spec["security"] = machine_xml_data["security"]
From: Ondrej Lichtner olichtne@redhat.com
Split the old set_recipe slave method into two smaller methods, prepare_machine and start_recipe. Together they have the same end result, but they need to be callable separately for slaves that a tester connects to during test execution.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Controller.py | 8 +++++--- lnst/Controller/Machine.py | 18 +++++++++--------- lnst/Slave/NetTestSlave.py | 4 +++- 3 files changed, 17 insertions(+), 13 deletions(-)
diff --git a/lnst/Controller/Controller.py b/lnst/Controller/Controller.py index 00b15d9..d1ccf61 100644 --- a/lnst/Controller/Controller.py +++ b/lnst/Controller/Controller.py @@ -154,8 +154,8 @@ class Controller(object): host = getattr(self._hosts, m_id)
machine.set_id(m_id) - self._prepare_machine(machine, recipe) machine.set_mapped(True) + self._prepare_machine(machine)
for if_id, i in m["interfaces"].items(): host._map_device(if_id, i) @@ -169,12 +169,14 @@ class Controller(object): setattr(host, name, new_virt_dev) new_virt_dev._enable()
- def _prepare_machine(self, machine, recipe): + machine.start_recipe(recipe) + + def _prepare_machine(self, machine): self._log_ctl.add_slave(machine.get_id()) machine.set_mac_pool(self._mac_pool) machine.set_network_bridges(self._network_bridges)
- machine.set_recipe(recipe) + machine.prepare_machine()
def _cleanup_slaves(self): if self._machines == None: diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index bdfbc1e..042d8e2 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -240,18 +240,13 @@ class Machine(object):
self._slave_desc = slave_desc
- def set_recipe(self, recipe): - """ Reserves the machine for the specified recipe - - Also sends Device classes from the controller and initializes the - InterfaceManager on the Slave and builds the device database. - """ - self._recipe = recipe - recipe_name = recipe.__class__.__name__ - self.rpc_call("set_recipe", recipe_name) + def prepare_machine(self): + self.rpc_call("prepare_machine") self._send_device_classes() self.rpc_call("init_if_manager")
+ self._device_database = {} + devices = self.rpc_call("get_devices") for ifindex, dev in devices.items(): remote_dev = RemoteDevice(Device) @@ -261,6 +256,11 @@ class Machine(object):
self._device_database[ifindex] = remote_dev
+ def start_recipe(self, recipe): + self._recipe = recipe + recipe_name = recipe.__class__.__name__ + self.rpc_call("start_recipe", recipe_name) + def _send_device_classes(self): classes = [] for cls_name, cls in device_classes: diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py index f98db6c..75c194b 100644 --- a/lnst/Slave/NetTestSlave.py +++ b/lnst/Slave/NetTestSlave.py @@ -108,13 +108,15 @@ class SlaveMethods:
return ("hello", slave_desc)
- def set_recipe(self, recipe_name): + def prepare_machine(self): self.machine_cleanup() self.restore_nm_option()
self._cache.del_old_entries() self.reset_file_transfers() + return True
+ def start_recipe(self, recipe_name): date = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M:%S") self._log_ctl.set_recipe(recipe_name, expand=date) sleep(1)
From: Ondrej Lichtner olichtne@redhat.com
VirtualDevices are created by the Controller so they should also be cleaned up by the Controller.
Also added exception handling to the Controller's machine cleanup method - the exception must be logged and should be reported as a recipe failure, but it shouldn't stop the cleanup of the other slave machines.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Controller.py | 17 +++++++++++++---- lnst/Controller/Machine.py | 5 ----- 2 files changed, 13 insertions(+), 9 deletions(-)
diff --git a/lnst/Controller/Controller.py b/lnst/Controller/Controller.py index d1ccf61..cdce799 100644 --- a/lnst/Controller/Controller.py +++ b/lnst/Controller/Controller.py @@ -183,10 +183,19 @@ class Controller(object): return
for m_id, machine in self._machines.iteritems(): - machine.cleanup() - #clean-up slave logger - self._log_ctl.remove_slave(m_id) - machine.set_mapped(False) + try: + machine.cleanup() + except: + #TODO report errors during deconfiguration as FAIL!! + log_exc_traceback() + finally: + for dev in machine._device_database.values(): + if isinstance(dev, VirtualDevice): + dev._destroy() + + #clean-up slave logger + self._log_ctl.remove_slave(m_id) + machine.set_mapped(False)
self._machines.clear()
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index 042d8e2..a604ede 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -295,11 +295,6 @@ class Machine(object): self.rpc_call("destroy_devices", netns=netns) self.rpc_call("destroy_devices")
- for dev in self._device_database.values(): - if isinstance(dev, VirtualDevice): - dev._destroy() - self._device_database = {} - def cleanup(self): """ Clean the machine up
From: Ondrej Lichtner olichtne@redhat.com
Unified the different methods that implemented their own "send to slave" code to use a common send_class method that does it for them.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Machine.py | 41 ++++++++++++++------------------------ 1 file changed, 15 insertions(+), 26 deletions(-)
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index a604ede..e1204e1 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -24,8 +24,6 @@ from lnst.Controller.RecipeResults import JobStartResult, JobFinishResult from lnst.Devices import device_classes from lnst.Devices.Device import Device from lnst.Devices.RemoteDevice import RemoteDevice -from lnst.Devices.VirtualDevice import VirtualDevice -from lnst.Tests.BaseTestModule import BaseTestModule
# conditional support for libvirt if check_process_running("libvirtd"): @@ -262,14 +260,23 @@ class Machine(object): self.rpc_call("start_recipe", recipe_name)
def _send_device_classes(self): - classes = [] for cls_name, cls in device_classes: - classes.extend(reversed(self._get_base_classes(cls))) + self.send_class(cls)
- for cls in classes: - if cls is object: - continue + for cls_name, cls in device_classes: + module_name = cls.__module__ + self.rpc_call("map_device_class", cls_name, module_name) + + def send_class(self, cls): + classes = [cls] + classes.extend(self._get_base_classes(cls)) + + for cls in reversed(classes): module_name = cls.__module__ + + if module_name == "__builtin__": + continue + module = sys.modules[module_name] filename = module.__file__
@@ -279,10 +286,6 @@ class Machine(object): res_hash = self.sync_resource(module_name, filename) self.rpc_call("load_cached_module", module_name, res_hash)
- for cls_name, cls in device_classes: - module_name = cls.__module__ - self.rpc_call("map_device_class", cls_name, module_name) - def is_git_version(self, version): try: int(version) @@ -342,21 +345,7 @@ class Machine(object): self._jobs[job.id] = job
if job._type == "module": - classes = [job._what] - classes.extend(self._get_base_classes(job._what.__class__)) - - for cls in reversed(classes): - if cls is object or cls is BaseTestModule: - continue - m_name = cls.__module__ - m = sys.modules[m_name] - filename = m.__file__ - if filename[-3:] == "pyc": - filename = filename[:-1] - - res_hash = self.sync_resource(m_name, filename) - - self.rpc_call("load_cached_module", m_name, res_hash) + self.send_class(job._what.__class__)
logging.info("Host %s executing job %d: %s" % (self._id, job.id, str(job)))
From: Ondrej Lichtner olichtne@redhat.com
To prepare for tracking multiple types of classes dynamically received from the controller, the _dynamic_classes dictionary should track them with their full path names (including module name) to prevent conflicts.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/NetTestSlave.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py index 75c194b..efacd58 100644 --- a/lnst/Slave/NetTestSlave.py +++ b/lnst/Slave/NetTestSlave.py @@ -152,7 +152,7 @@ class SlaveMethods: module = self._dynamic_modules[module_name] cls = getattr(module, cls_name)
- self._dynamic_classes[cls_name] = cls + self._dynamic_classes["{}.{}".format(module_name, cls_name)] = cls
setattr(Devices, cls_name, cls)
From: Ondrej Lichtner olichtne@redhat.com
Add a second dynamic module (Devices being the first one) that can accept classes sent from the Controller. I'm guessing this will eventually be refactored so that we're not limited to just Devices and RecipeCommon, but for now I'd like to keep it limited to these two.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/NetTestSlave.py | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py index efacd58..03c94f3 100644 --- a/lnst/Slave/NetTestSlave.py +++ b/lnst/Slave/NetTestSlave.py @@ -63,6 +63,11 @@ Tests.__path__ = ["lnst.Tests"]
sys.modules["lnst.Tests"] = Tests
+RecipeCommon = types.ModuleType("RecipeCommon") +RecipeCommon.__path__ = ["lnst.RecipeCommon"] + +sys.modules["lnst.RecipeCommon"] = RecipeCommon + class SlaveMethods: ''' Exported xmlrpc methods
From: Ondrej Lichtner olichtne@redhat.com
This adds the API to instantiate objects of any class dynamically received from the controller and then access their attributes and methods.

For now this can only be used for RecipeCommon classes, since that's the only module (other than Devices) the Slave can receive classes for. But this might get extended later.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/NetTestSlave.py | 52 ++++++++++++++++++++++++++++++++++++-- 1 file changed, 50 insertions(+), 2 deletions(-)
diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py index 03c94f3..f41091c 100644 --- a/lnst/Slave/NetTestSlave.py +++ b/lnst/Slave/NetTestSlave.py @@ -93,6 +93,7 @@ class SlaveMethods:
self._dynamic_modules = {} self._dynamic_classes = {} + self._dynamic_objects = {}
self._bkp_nm_opt_val = slave_config.get_option("environment", "use_nm")
@@ -169,6 +170,16 @@ class SlaveMethods: module = imp.load_source(module_name, module_path) self._dynamic_modules[module_name] = module
+ def init_cls(self, cls_name, module_name, args, kwargs): + module = self._dynamic_modules[module_name] + cls = getattr(module, cls_name) + + self._dynamic_classes["{}.{}".format(module_name, cls_name)] = cls + + new_obj = cls(*args, **kwargs) + self._dynamic_objects[id(new_obj)] = new_obj + return id(new_obj) + def init_if_manager(self): self._if_manager = InterfaceManager(self._server_handler) for cls_name in dir(Devices): @@ -180,6 +191,37 @@ class SlaveMethods: self._server_handler.set_if_manager(self._if_manager) return True
+ def obj_method(self, obj_ref, name, args, kwargs): + try: + obj = self._dynamic_objects[obj_ref] + method = getattr(obj, name) + return method(*args, **kwargs) + except LnstError: + raise + except Exception as exc: + log_exc_traceback() + raise LnstError(exc) + + def obj_getattr(self, obj_ref, name): + try: + obj = self._dynamic_objects[obj_ref] + return getattr(obj, name) + except LnstError: + raise + except Exception as exc: + log_exc_traceback() + raise LnstError(exc) + + def obj_setattr(self, obj_ref, name, value): + try: + obj = self._dynamic_objects[obj_ref] + return setattr(obj, name, value) + except LnstError: + raise + except Exception as exc: + log_exc_traceback() + raise LnstError(exc) + def dev_method(self, ifindex, name, args, kwargs): dev = self._if_manager.get_device(ifindex) method = getattr(dev, name) @@ -403,12 +445,18 @@ class SlaveMethods: self.del_namespace(netns) self._net_namespaces = {}
- for cls_name, cls in self._dynamic_classes.items(): - delattr(Devices, cls_name) + for obj_id, obj in self._dynamic_objects.items(): + del obj + + for cls_name in dir(Devices): + cls = getattr(Devices, cls_name) + if isclass(cls): + delattr(Devices, cls_name)
for module_name, module in self._dynamic_modules.items(): del sys.modules[module_name]
+ self._dynamic_objects = {} self._dynamic_classes = {} self._dynamic_modules = {} self._if_manager = None
From: Ondrej Lichtner olichtne@redhat.com
This is a universal class that serves as a proxy for any object dynamically instantiated on the slave (from a class received from the Controller).
The commit also adds the init_remote_class method to lnst.Controller.Machine and init_class to lnst.Controller.Host to expose this functionality to the tester.
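For illustration, roughly how the tester-facing side can be used (the helper class, its members and constructor arguments are hypothetical; only init_class and the proxy behaviour come from this patch):

    # hypothetical helper class placed under lnst/RecipeCommon/
    from lnst.RecipeCommon.MyHelper import MyHelper

    helper = host.init_class(MyHelper, "ctor-arg", option=True)  # instantiated on the slave
    helper.some_attribute = 5        # proxied through the "obj_setattr" rpc
    value = helper.some_attribute    # proxied through "obj_getattr"
    result = helper.do_work(10)      # proxied through "obj_method"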
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Host.py | 5 +++++ lnst/Controller/Machine.py | 8 +++++++ lnst/Controller/SlaveObject.py | 41 ++++++++++++++++++++++++++++++++++ 3 files changed, 54 insertions(+) create mode 100644 lnst/Controller/SlaveObject.py
diff --git a/lnst/Controller/Host.py b/lnst/Controller/Host.py index 9267081..f455fa9 100644 --- a/lnst/Controller/Host.py +++ b/lnst/Controller/Host.py @@ -76,3 +76,8 @@ class Host(Namespace): return False else: return True + + def init_class(self, cls, *args, **kwargs): + self._machine.send_class(cls) + + return self._machine.init_remote_class(cls, *args, **kwargs) diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index e1204e1..2d58e67 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -21,6 +21,7 @@ from lnst.Common.Version import lnst_version from lnst.Controller.Common import ControllerError from lnst.Controller.CtlSecSocket import CtlSecSocket from lnst.Controller.RecipeResults import JobStartResult, JobFinishResult +from lnst.Controller.SlaveObject import SlaveObject from lnst.Devices import device_classes from lnst.Devices.Device import Device from lnst.Devices.RemoteDevice import RemoteDevice @@ -506,6 +507,13 @@ class Machine(object): "file", remote_path, res_name) return digest
+ def init_remote_class(self, cls, *args, **kwargs): + module_name = cls.__module__ + cls_name = cls.__name__ + obj_ref = self.rpc_call("init_cls", cls_name, module_name, args, kwargs) + + return SlaveObject(self, cls, obj_ref) + # def enable_nm(self): # return self._rpc_call("enable_nm")
diff --git a/lnst/Controller/SlaveObject.py b/lnst/Controller/SlaveObject.py new file mode 100644 index 0000000..7576c04 --- /dev/null +++ b/lnst/Controller/SlaveObject.py @@ -0,0 +1,41 @@ +""" +TODO + +Copyright 2018 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +class SlaveObject(object): + def __init__(self, machine, cls, obj_ref): + self._inited = False + self.__cls = cls + self.__obj_ref = obj_ref + self.__machine = machine + + self._inited = True + + def __getattr__(self, name): + if name == "_inited": + return super(SlaveObject, self).__getattribute__(name) + + attr = getattr(self.__cls, name) + + if callable(attr): + def obj_method(*args, **kwargs): + return self.__machine.rpc_call("obj_method", self.__obj_ref, + name, args, kwargs) + return obj_method + else: + return self.__machine.rpc_call("obj_getattr", self.__obj_ref, name) + + def __setattr__(self, name, value): + if name == "_inited" or not self._inited: + return super(SlaveObject, self).__setattr__(name, value) + + return self._machine.rpc_call("obj_setattr", self.__obj_ref, name, + value)
From: Ondrej Lichtner olichtne@redhat.com
Split the Recipe.ctl tester API off from the lnst.Controller.Controller class into its own dedicated RecipeControl class. This should hopefully be more readable and maintainable, and it avoids accidentally exposing something we didn't want to.

Currently the API provides two wait* methods, access to the Hosts iterator, and the ability to initialize a connection to a slave during test execution, which is a new feature.
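For illustration, this is roughly how a recipe can now attach a slave (e.g. the guest) in the middle of a test (the address, timeout and security settings are example values):

    # inside a recipe's test() method
    guest = self.ctl.connect_host("192.168.122.2", timeout=60,
                                  security={"auth_type": "none"})
    guest.run("uname -r")    # the returned Host object behaves like a matched one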
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Controller.py | 14 ++----- lnst/Controller/Recipe.py | 2 +- lnst/Controller/RecipeControl.py | 64 ++++++++++++++++++++++++++++++++ 3 files changed, 68 insertions(+), 12 deletions(-) create mode 100644 lnst/Controller/RecipeControl.py
diff --git a/lnst/Controller/Controller.py b/lnst/Controller/Controller.py index cdce799..e087ef3 100644 --- a/lnst/Controller/Controller.py +++ b/lnst/Controller/Controller.py @@ -30,6 +30,7 @@ from lnst.Controller.MachineMapper import MachineMapper from lnst.Controller.MachineMapper import format_match_description from lnst.Controller.Host import Hosts, Host from lnst.Controller.Recipe import BaseRecipe, RecipeRun +from lnst.Controller.RecipeControl import RecipeControl
class Controller(object): """The LNST Controller class @@ -106,7 +107,8 @@ class Controller(object): if not isinstance(recipe, BaseRecipe): raise ControllerError("recipe argument must be a BaseRecipe instance.")
- recipe._set_ctl(self) + recipe_ctl = RecipeControl(self, recipe) + recipe._set_ctl(recipe_ctl)
req = recipe.req
@@ -132,16 +134,6 @@ class Controller(object): finally: self._cleanup_slaves()
- def wait(self, sec): - finish_time = time.time() + sec - logging.info("Suspending recipe execution for {} seconds, " - "messages from slaves will still be processed.". - format(sec)) - - def condition(): - return time.time() > finish_time - - self._msg_dispatcher.wait_for_condition(condition)
def _map_match(self, match, requested, recipe): self._machines = {} diff --git a/lnst/Controller/Recipe.py b/lnst/Controller/Recipe.py index 0f3a54e..ddd8016 100644 --- a/lnst/Controller/Recipe.py +++ b/lnst/Controller/Recipe.py @@ -118,7 +118,7 @@ class BaseRecipe(object): def matched(self): if self.ctl is None: return None - return self.ctl._hosts + return self.ctl.hosts
def test(self): """Method to be implemented by the Tester""" diff --git a/lnst/Controller/RecipeControl.py b/lnst/Controller/RecipeControl.py new file mode 100644 index 0000000..b92b978 --- /dev/null +++ b/lnst/Controller/RecipeControl.py @@ -0,0 +1,64 @@ +import time +import socket +import logging +from lnst.Common.Logs import log_exc_traceback +from lnst.Common.SecureSocket import SecSocketException +from lnst.Controller.Machine import Machine +from lnst.Controller.Host import Host + +class RecipeControl(object): + def __init__(self, controller, recipe): + self._controller = controller + self._recipe = recipe + + @property + def hosts(self): + return self._controller._hosts + + def wait(self, sec): + finish_time = time.time() + sec + logging.info("Suspending recipe execution for {} seconds, " + "messages from slaves will still be processed.". + format(sec)) + + def condition(): + return time.time() > finish_time + + msg_dispatcher = self._controller._msg_dispatcher + msg_dispatcher.wait_for_condition(condition) + + def wait_for_condition(self, condition): + #TODO add descriptions to conditions? + logging.info("Suspending recipe execution until condition is true") + + msg_dispatcher = self._controller._msg_dispatcher + msg_dispatcher.wait_for_condition(condition) + + def connect_host(self, hostname, timeout=60, port=None, machine_id=None, + security=None): + ctl_config = self._controller._config + msg_dispatcher = self._controller._msg_dispatcher + + if security is None: + security = {"auth_type": "none"} + + if machine_id is None: + machine_id = hostname + + m = Machine(machine_id, hostname, msg_dispatcher, + ctl_config, None, port, security) + + def condition(): + try: + m.init_connection(timeout=1) + return True + except: + log_exc_traceback() + return False + + msg_dispatcher.wait_for_condition(condition, timeout) + + self._controller._prepare_machine(m) + m.start_recipe(self._recipe) + host = Host(m) + return host
From: Ondrej Lichtner olichtne@redhat.com
Now that the tester can connect to slaves during test execution, it should also be possible to map devices manually.
This is just a work-in-progress method though, and its precise parameters will likely change to provide a much better experience than searching just by MAC address.
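A rough usage sketch, matching how the OvS_DPDK_PvP recipe later in this series uses the method; the MAC address is a placeholder:

    # guest is a Host returned by self.ctl.connect_host() during the test
    guest.map_device("eth0", {"hwaddr": "52:54:00:12:34:56"})

    # the mapped device is then accessible as an attribute of the Host
    guest_nic = guest.eth0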
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Controller.py | 2 +- lnst/Controller/Host.py | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/lnst/Controller/Controller.py b/lnst/Controller/Controller.py index e087ef3..3e5473f 100644 --- a/lnst/Controller/Controller.py +++ b/lnst/Controller/Controller.py @@ -150,7 +150,7 @@ class Controller(object): self._prepare_machine(machine)
for if_id, i in m["interfaces"].items(): - host._map_device(if_id, i) + host.map_device(if_id, i)
if match["virtual"]: req_host = getattr(requested, m_id) diff --git a/lnst/Controller/Host.py b/lnst/Controller/Host.py index f455fa9..a77b8f6 100644 --- a/lnst/Controller/Host.py +++ b/lnst/Controller/Host.py @@ -55,7 +55,8 @@ class Host(Namespace): ret.append(x) return ret
- def _map_device(self, dev_id, how): + def map_device(self, dev_id, how): + #TODO if this is supposed to be public it should be better than dict["hwaddr"]!!!! hwaddr = how["hwaddr"] dev = self._machine.get_dev_by_hwaddr(hwaddr) if dev:
From: Ondrej Lichtner olichtne@redhat.com
The RecipeParam class is a Param class that points (by name) to a different Param object.

This can be used to parametrize the Requirements of a Recipe using its parameters, e.g. a driver value.
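For illustration, a sketch of the intended use, mirroring the requirements of the OvS_DPDK_PvP recipe added later in this series:

    from lnst.Controller import BaseRecipe, HostReq, DeviceReq, RecipeParam
    from lnst.Common.Parameters import StrParam

    class ExampleRecipe(BaseRecipe):
        driver = StrParam(mandatory=True)

        m1 = HostReq()
        # the driver value of the DeviceReq is resolved from the recipe's
        # "driver" parameter when the requirements are re-initialized
        m1.eth0 = DeviceReq(label="net1", driver=RecipeParam("driver"))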
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Recipe.py | 11 +++++--- lnst/Controller/Requirements.py | 47 ++++++++++++++++++++++++++++++++- lnst/Controller/__init__.py | 2 +- 3 files changed, 55 insertions(+), 5 deletions(-)
diff --git a/lnst/Controller/Recipe.py b/lnst/Controller/Recipe.py index ddd8016..080dd46 100644 --- a/lnst/Controller/Recipe.py +++ b/lnst/Controller/Recipe.py @@ -87,9 +87,7 @@ class BaseRecipe(object): self.params = Parameters() for attr in dir(self): val = getattr(self, attr) - if isinstance(val, HostReq): - setattr(self.req, attr, copy.deepcopy(val)) - elif isinstance(val, Param): + if isinstance(val, Param): if attr in kwargs: param_val = kwargs.pop(attr) param_val = val.type_check(param_val) @@ -103,6 +101,13 @@ class BaseRecipe(object): raise RecipeError("Parameter {} is mandatory" .format(attr))
+ for attr in dir(self): + val = getattr(self, attr) + if isinstance(val, HostReq): + new_val = copy.deepcopy(val) + new_val.reinit_with_params(self.params) + setattr(self.req, attr, new_val) + if len(kwargs): for key in kwargs.keys(): raise RecipeError("Unknown parameter {}".format(key)) diff --git a/lnst/Controller/Requirements.py b/lnst/Controller/Requirements.py index 1d2ea6f..1337df0 100644 --- a/lnst/Controller/Requirements.py +++ b/lnst/Controller/Requirements.py @@ -20,12 +20,18 @@ __author__ = """ olichtne@redhat.com (Ondrej Lichtner) """
+import copy from lnst.Common.LnstError import LnstError -from lnst.Common.Parameters import Parameters +from lnst.Common.Parameters import Parameters, Param
class RequirementError(LnstError): pass
+class RecipeParam(Param): + def __init__(self, name, mandatory=False, **kwargs): + self.name = name + super(RecipeParam, self).__init__(mandatory, **kwargs) + class HostReq(object): """Specifies a Slave machine requirement
@@ -48,6 +54,27 @@ class HostReq(object): raise RequirementError("'params' is a reserved keyword.") setattr(self.params, name, val)
+ def reinit_with_params(self, recipe_params): + for name, val in self.params: + if isinstance(val, RecipeParam): + if val.name in recipe_params: + new_val = getattr(recipe_params, val.name) + setattr(self.params, name, new_val) + else: + try: + new_val = copy.deepcopy(val.default) + setattr(self.params, name, new_val) + except AttributeError: + if val.mandatory: + raise RequirementError( + "Recipe parameter {} is mandatory for Recipe Requirements parameter {}" + .format(val.name, name)) + else: + delattr(self.params, name) + + for name, dev_req in self: + dev_req.reinit_with_params(recipe_params) + def __iter__(self): for x in dir(self): val = getattr(self, x) @@ -86,6 +113,24 @@ class DeviceReq(object): raise RequirementError("'params' is a reserved keyword.") setattr(self.params, name, val)
+ def reinit_with_params(self, recipe_params): + for name, val in self.params: + if isinstance(val, RecipeParam): + if val.name in recipe_params: + new_val = getattr(recipe_params, val.name) + setattr(self.params, name, new_val) + else: + try: + new_val = copy.deepcopy(val.default) + setattr(self.params, name, new_val) + except AttributeError: + if val.mandatory: + raise RequirementError( + "Recipe parameter {} is mandatory for Recipe Requirements parameter {}" + .format(val.name, name)) + else: + delattr(self.params, name) + def _to_dict(self): res = {'network': self.label, 'params': self.params._to_dict()} diff --git a/lnst/Controller/__init__.py b/lnst/Controller/__init__.py index 2dbb60f..106bb91 100644 --- a/lnst/Controller/__init__.py +++ b/lnst/Controller/__init__.py @@ -1,4 +1,4 @@ from lnst.Controller.Controller import Controller from lnst.Controller.Recipe import BaseRecipe -from lnst.Controller.Requirements import HostReq, DeviceReq +from lnst.Controller.Requirements import HostReq, DeviceReq, RecipeParam from lnst.Controller.NetNamespace import NetNamespace
Mon, Aug 20, 2018 at 01:27:48PM CEST, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
RecipeParam class is a Param class that points (by name) to a different Param object.
This can be used to parametrize the Requirements of a Recipe using it's parameters, e.g. driver value.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
lnst/Controller/Recipe.py | 11 +++++--- lnst/Controller/Requirements.py | 47 ++++++++++++++++++++++++++++++++- lnst/Controller/__init__.py | 2 +- 3 files changed, 55 insertions(+), 5 deletions(-)
diff --git a/lnst/Controller/Recipe.py b/lnst/Controller/Recipe.py index ddd8016..080dd46 100644 --- a/lnst/Controller/Recipe.py +++ b/lnst/Controller/Recipe.py @@ -87,9 +87,7 @@ class BaseRecipe(object): self.params = Parameters() for attr in dir(self): val = getattr(self, attr)
if isinstance(val, HostReq):
setattr(self.req, attr, copy.deepcopy(val))
elif isinstance(val, Param):
if isinstance(val, Param): if attr in kwargs: param_val = kwargs.pop(attr) param_val = val.type_check(param_val)
@@ -103,6 +101,13 @@ class BaseRecipe(object): raise RecipeError("Parameter {} is mandatory" .format(attr))
for attr in dir(self):
val = getattr(self, attr)
if isinstance(val, HostReq):
new_val = copy.deepcopy(val)
new_val.reinit_with_params(self.params)
setattr(self.req, attr, new_val)
if len(kwargs): for key in kwargs.keys(): raise RecipeError("Unknown parameter {}".format(key))
diff --git a/lnst/Controller/Requirements.py b/lnst/Controller/Requirements.py index 1d2ea6f..1337df0 100644 --- a/lnst/Controller/Requirements.py +++ b/lnst/Controller/Requirements.py @@ -20,12 +20,18 @@ __author__ = """ olichtne@redhat.com (Ondrej Lichtner) """
+import copy from lnst.Common.LnstError import LnstError -from lnst.Common.Parameters import Parameters +from lnst.Common.Parameters import Parameters, Param
class RequirementError(LnstError): pass
+class RecipeParam(Param):
- def __init__(self, name, mandatory=False, **kwargs):
self.name = name
super(RecipeParam, self).__init__(mandatory, **kwargs)
class HostReq(object): """Specifies a Slave machine requirement
@@ -48,6 +54,27 @@ class HostReq(object): raise RequirementError("'params' is a reserved keyword.") setattr(self.params, name, val)
- def reinit_with_params(self, recipe_params):
for name, val in self.params:
if isinstance(val, RecipeParam):
if val.name in recipe_params:
new_val = getattr(recipe_params, val.name)
setattr(self.params, name, new_val)
else:
try:
new_val = copy.deepcopy(val.default)
setattr(self.params, name, new_val)
except AttributeError:
if val.mandatory:
raise RequirementError(
"Recipe parameter {} is mandatory for Recipe Requirements parameter {}"
.format(val.name, name))
else:
delattr(self.params, name)
for name, dev_req in self:
dev_req.reinit_with_params(recipe_params)
This looks the same as the code below. Could we have a GenericReq class implementing reinit_with_params() that DeviceReq/HostReq would inherit from?
def __iter__(self): for x in dir(self): val = getattr(self, x)
@@ -86,6 +113,24 @@ class DeviceReq(object): raise RequirementError("'params' is a reserved keyword.") setattr(self.params, name, val)
- def reinit_with_params(self, recipe_params):
for name, val in self.params:
if isinstance(val, RecipeParam):
if val.name in recipe_params:
new_val = getattr(recipe_params, val.name)
setattr(self.params, name, new_val)
else:
try:
new_val = copy.deepcopy(val.default)
setattr(self.params, name, new_val)
except AttributeError:
if val.mandatory:
raise RequirementError(
"Recipe parameter {} is mandatory for Recipe Requirements parameter {}"
.format(val.name, name))
else:
delattr(self.params, name)
- def _to_dict(self): res = {'network': self.label, 'params': self.params._to_dict()}
diff --git a/lnst/Controller/__init__.py b/lnst/Controller/__init__.py index 2dbb60f..106bb91 100644 --- a/lnst/Controller/__init__.py +++ b/lnst/Controller/__init__.py @@ -1,4 +1,4 @@ from lnst.Controller.Controller import Controller from lnst.Controller.Recipe import BaseRecipe -from lnst.Controller.Requirements import HostReq, DeviceReq +from lnst.Controller.Requirements import HostReq, DeviceReq, RecipeParam from lnst.Controller.NetNamespace import NetNamespace -- 2.17.0
From: Ondrej Lichtner olichtne@redhat.com
In case a list item formats to a single line, don't emit it as an indented line below the "item N:" header, but instead append it to the end of the current line.

This should shorten the output and improve readability for result objects that produce many short items in a list, e.g. performance results that include many small samples.
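For example, a list of single-line items that previously rendered roughly as (values are illustrative):

    item 0:
        1500 pkts in 1.0 seconds

now renders as:

    item 0: 1500 pkts in 1.0 seconds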
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/RunSummaryFormatter.py | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py index 182b103..888b897 100644 --- a/lnst/Controller/RunSummaryFormatter.py +++ b/lnst/Controller/RunSummaryFormatter.py @@ -56,9 +56,17 @@ class RunSummaryFormatter(object): output.extend(nest_res) elif isinstance(data, list): for i, v in enumerate(data): - output.append("{pref}item {i}:".format(pref=level*prefix, - i=i)) - output.extend(self._format_data(v, level=level+1)) + formatted_v = self._format_data(v, level=level+1) + + if len(formatted_v) == 1: + output.append("{pref}item {i}: {value}".format( + pref=level*prefix, + i=i, + value=formatted_v[0].lstrip())) + else: + output.append("{pref}item {i}:".format( + pref=level*prefix, i=i)) + output.extend(formatted_v) else: for line in str(data).split('\n'): output.append("{pref}{val}".format(pref=level*prefix,
From: Ondrej Lichtner olichtne@redhat.com
Just a small edit for indentation and removing unused variables.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Machine.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index 2d58e67..6bc82b5 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -55,7 +55,7 @@ class Machine(object): self._system_config = {} self._security = security self._security["identity"] = ctl_config.get_option("security", - "identity") + "identity") self._security["privkey"] = ctl_config.get_option("security", "privkey")
@@ -166,14 +166,14 @@ class Machine(object):
def dev_db_get_name(self, dev_name): #TODO move these to Slave to optimize quering for each device - for ifindex, dev in self._device_database.iteritems(): + for dev in self._device_database.values(): if dev.get_name() == dev_name: return dev return None
def get_dev_by_hwaddr(self, hwaddr): #TODO move these to Slave to optimize quering for each device - for ifindex, dev in self._device_database.iteritems(): + for dev in self._device_database.values(): if dev.hwaddr == hwaddr: return dev return None
From: Ondrej Lichtner olichtne@redhat.com
This commit introduces two fixes for handling situations when a Device gets deleted. The first is a small workaround for the double removal of a deleted device from the tracking dictionary. The second adds exception handling for calling an update on a Device object that refers to a device already marked as deleted. This can sometimes happen when netlink messages get reordered, so it's important to handle the exception.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/InterfaceManager.py | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/lnst/Slave/InterfaceManager.py b/lnst/Slave/InterfaceManager.py index 721434a..3801bdb 100644 --- a/lnst/Slave/InterfaceManager.py +++ b/lnst/Slave/InterfaceManager.py @@ -21,7 +21,7 @@ from lnst.Common.NetUtils import normalize_hwaddr from lnst.Common.NetUtils import scan_netdevs from lnst.Common.ExecCmd import exec_cmd from lnst.Common.ConnectionHandler import recv_data -from lnst.Common.DeviceError import DeviceNotFound, DeviceConfigError +from lnst.Common.DeviceError import DeviceNotFound, DeviceConfigError, DeviceDeleted from lnst.Common.InterfaceManagerError import InterfaceManagerError from lnst.Slave.DevlinkManager import DevlinkManager from pyroute2 import IPRSocket @@ -123,6 +123,12 @@ class InterfaceManager(object): for addr_msg in dev['ip_addrs']: self._devices[dev['index']]._update_netlink(addr_msg) for i in devices_to_remove: + if i not in self._devices: + #TODO + #this is a workaround fix for when the device to remove was + #already removed indirectly by the previous update loop + #the fix works for now but should be refactored at some point + continue dev_name = self._devices[i].name logging.debug("Deleting Device with ifindex %d, name %s because "\ "it doesn't exist anymore." % (i, dev_name)) @@ -153,7 +159,10 @@ class InterfaceManager(object): def _handle_netlink_msg(self, msg): if msg['header']['type'] in [RTM_NEWLINK, RTM_NEWADDR, RTM_DELADDR]: if msg['index'] in self._devices: - self._devices[msg['index']]._update_netlink(msg) + try: + self._devices[msg['index']]._update_netlink(msg) + except DeviceDeleted: + return elif msg['header']['type'] == RTM_NEWLINK: dev = self._device_classes["Device"](self) dev._init_netlink(msg)
From: Ondrej Lichtner olichtne@redhat.com
In our use cases we're interested in calculating the standard deviation of the individual averages of the one-level-lower measurements, not the standard deviation of their absolute values. This fixes the issue.
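A small numeric illustration of the difference; std_deviation is stood in by statistics.pstdev here and the values are made up:

    from statistics import pstdev as std_deviation  # stand-in for the PerfResult helper

    # two one-level-lower measurements (e.g. two streams), each given as
    # (value, duration) intervals; both streams average 20 units per second
    stream_a = [(10, 1), (30, 1)]
    stream_b = [(20, 1), (20, 1)]

    # old behaviour: deviation of the absolute interval values -> ~7.07
    old = std_deviation([value for stream in (stream_a, stream_b)
                         for value, _ in stream])

    # new behaviour: deviation of the per-stream averages -> 0.0
    new = std_deviation([sum(v for v, _ in s) / sum(d for _, d in s)
                         for s in (stream_a, stream_b)])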
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/RecipeCommon/PerfResult.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lnst/RecipeCommon/PerfResult.py b/lnst/RecipeCommon/PerfResult.py index a8d2245..9afac16 100644 --- a/lnst/RecipeCommon/PerfResult.py +++ b/lnst/RecipeCommon/PerfResult.py @@ -8,7 +8,7 @@ class PerfStatMixin(object):
@property def std_deviation(self): - return std_deviation([i.value for i in self]) + return std_deviation([i.average for i in self])
class PerfInterval(PerfStatMixin): def __init__(self, value, duration, unit):
From: Ondrej Lichtner olichtne@redhat.com
The standard deviation of a single interval measurement is 0 since it's the smallest measurement precision we have.
This also adds a __str__ method to pretty print the PerfInterval objects.
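A quick sketch of the resulting behaviour, using the constructor as shown in the diff below; the values are illustrative:

    from lnst.RecipeCommon.PerfResult import PerfInterval

    interval = PerfInterval(1500, 1.0, "pkts")
    print(interval)                # "1500 pkts in 1.0 seconds"
    print(interval.std_deviation)  # 0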
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/RecipeCommon/PerfResult.py | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/lnst/RecipeCommon/PerfResult.py b/lnst/RecipeCommon/PerfResult.py index 9afac16..f48fd0a 100644 --- a/lnst/RecipeCommon/PerfResult.py +++ b/lnst/RecipeCommon/PerfResult.py @@ -28,6 +28,14 @@ class PerfInterval(PerfStatMixin): def unit(self): return self._unit
+ @property + def std_deviation(self): + return 0 + + def __str__(self): + return "{} {} in {} seconds".format( + self.value, self.unit, self.duration) + class PerfList(list): _sub_type = None
From: Ondrej Lichtner olichtne@redhat.com
Adding the count, interval and size parameters to the PingConf class. The Ping test module has default values defined for these, but the tester should have the ability to override them when using the common PingTestAndEvaluate class.
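For illustration, a PingConf constructed with the new parameters inside a recipe deriving from PingTestAndEvaluate; the endpoints and values are placeholders:

    ping_config = PingConf(client=m1, client_bind=m1.eth0.ips[0],
                           destination=m2, destination_address=m2.eth0.ips[0],
                           count=100, interval=0.2, size=1400)

    # parameters left unset stay None and the Ping module defaults apply
    result = self.ping_test(ping_config)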
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/RecipeCommon/Ping.py | 35 ++++++++++++++++++++++++++++++++--- 1 file changed, 32 insertions(+), 3 deletions(-)
diff --git a/lnst/RecipeCommon/Ping.py b/lnst/RecipeCommon/Ping.py index 0f9f800..7599b66 100644 --- a/lnst/RecipeCommon/Ping.py +++ b/lnst/RecipeCommon/Ping.py @@ -4,11 +4,15 @@ from lnst.Tests import Ping class PingConf(object): def __init__(self, client, client_bind, - destination, destination_address): + destination, destination_address, + count=None, interval=None, size=None): self._client = client self._client_bind = client_bind self._destination = destination self._destination_address = destination_address + self._count = count + self._interval = interval + self._size = size
@property def client(self): @@ -26,13 +30,24 @@ class PingConf(object): def destination_address(self): return self._destination_address
+ @property + def count(self): + return self._count + + @property + def interval(self): + return self._interval + + @property + def size(self): + return self._size + class PingTestAndEvaluate(BaseRecipe): def ping_test(self, ping_config): client = ping_config.client destination = ping_config.destination
- ping = Ping(dst = ping_config.destination_address, - interface = ping_config.client_bind) + ping = Ping(self._generate_ping_kwargs(ping_config))
ping_job = client.run(ping) return ping_job.result @@ -43,3 +58,17 @@ class PingTestAndEvaluate(BaseRecipe): self.add_result(True, "Ping succesful", results) else: self.add_result(False, "Ping unsuccesful", results) + + def _generate_ping_kwargs(self, ping_config): + kwargs = dict(dst=ping_config.destination_address, + interface=ping_config.client_bind) + + if ping_config.count: + kwargs["count"] = ping_config.count + + if ping_config.interval: + kwargs["interval"] = ping_config.interval + + if ping_config.size: + kwargs["size"] = ping_config.size + return kwargs
From: Ondrej Lichtner olichtne@redhat.com
Renaming the client and server to generator and receiver to better illustrate the traffic flow used for performance measurement.
In case the measurement tool doesn't return measurements (either for the generator or the receiver), the MultiRunPerf class would complain about incompatible types (None). This is fixed by ignoring the None results.

Minor adjustment to the formatting of the reported result objects: added the size of the deviation relative to the average value, in percent.
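With the change, the report lines look roughly like this (values are illustrative):

    Generator measured throughput: 1000.0 +-50.0(5.0%) pkts per second
    Receiver measured throughput: 980.0 +-49.0(5.0%) pkts per second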
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/RecipeCommon/Perf.py | 90 +++++++++++++++-------------- lnst/Recipes/ENRT/BaseEnrtRecipe.py | 8 +-- 2 files changed, 52 insertions(+), 46 deletions(-)
diff --git a/lnst/RecipeCommon/Perf.py b/lnst/RecipeCommon/Perf.py index 49fa81f..97aa0f1 100644 --- a/lnst/RecipeCommon/Perf.py +++ b/lnst/RecipeCommon/Perf.py @@ -4,18 +4,18 @@ from lnst.RecipeCommon.PerfResult import MultiRunPerf class PerfConf(object): def __init__(self, perf_tool, - client, client_bind, - server, server_bind, test_type, + generator, generator_bind, + receiver, receiver_bind, msg_size, duration, iterations, streams): self._perf_tool = perf_tool - self._client = client - self._client_bind = client_bind - self._server = server - self._server_bind = server_bind - self._test_type = test_type
+ self._generator = generator + self._generator_bind = generator_bind + self._receiver = receiver + self._receiver_bind = receiver_bind + self._msg_size = msg_size self._duration = duration self._iterations = iterations @@ -26,20 +26,20 @@ class PerfConf(object): return self._perf_tool
@property - def client(self): - return self._client + def generator(self): + return self._generator
@property - def client_bind(self): - return self._client_bind + def generator_bind(self): + return self._generator_bind
@property - def server(self): - return self._server + def receiver(self): + return self._receiver
@property - def server_bind(self): - return self._server_bind + def receiver_bind(self): + return self._receiver_bind
@property def test_type(self): @@ -68,15 +68,17 @@ class PerfMeasurementTool(object):
class PerfTestAndEvaluate(BaseRecipe): def perf_test(self, perf_conf): - client_measurements = MultiRunPerf() - server_measurements = MultiRunPerf() + generator_measurements = MultiRunPerf() + receiver_measurements = MultiRunPerf() for i in range(perf_conf.iterations): - client, server = perf_conf.perf_tool.perf_measure(perf_conf) + tx, rx = perf_conf.perf_tool.perf_measure(perf_conf)
- client_measurements.append(client) - server_measurements.append(server) + if tx: + generator_measurements.append(tx) + if rx: + receiver_measurements.append(rx)
- return client_measurements, server_measurements + return generator_measurements, receiver_measurements
def perf_evaluate_and_report(self, perf_conf, results, baseline): self.perf_evaluate(perf_conf, results, baseline) @@ -84,31 +86,35 @@ class PerfTestAndEvaluate(BaseRecipe): self.perf_report(perf_conf, results, baseline)
def perf_evaluate(self, perf_conf, results, baseline): - client, server = results + generator, receiver = results
- if client.average > 0: - self.add_result(True, "Client reported non-zero throughput") + if generator.average > 0: + self.add_result(True, "Generator reported non-zero throughput") else: - self.add_result(False, "Client reported zero throughput") + self.add_result(False, "Generator reported zero throughput")
- if server.average > 0: - self.add_result(True, "Server reported non-zero throughput") + if receiver.average > 0: + self.add_result(True, "Receiver reported non-zero throughput") else: - self.add_result(False, "Server reported zero throughput") + self.add_result(False, "Receiver reported zero throughput")
def perf_report(self, perf_conf, results, baseline): - client, server = results - - self.add_result(True, - "Client measured throughput: {tput} +-{deviation} {unit} per second" - .format(tput=client.average, - deviation=client.std_deviation, - unit=client.unit), - data = client) - self.add_result(True, - "Server measured throughput: {tput} +-{deviation} {unit} per second" - .format(tput=server.average, - deviation=server.std_deviation, - unit=server.unit), - data = server) + generator, receiver = results + + self.add_result( + True, + "Generator measured throughput: {tput} +-{deviation}({percentage:.2}%) {unit} per second" + .format(tput=generator.average, + deviation=generator.std_deviation, + percentage=(generator.std_deviation/generator.average) * 100, + unit=generator.unit), + data = generator) + self.add_result( + True, + "Receiver measured throughput: {tput} +-{deviation}({percentage:.2}%) {unit} per second" + .format(tput=receiver.average, + deviation=receiver.std_deviation, + percentage=(receiver.std_deviation/receiver.average) * 100, + unit=receiver.unit), + data = receiver) diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py index fdad50e..ea9459e 100644 --- a/lnst/Recipes/ENRT/BaseEnrtRecipe.py +++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py @@ -186,11 +186,11 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfTestAndEvaluate):
for perf_test in self.params.perf_tests: yield PerfConf(perf_tool = self.params.perf_tool, - client = client_netns, - client_bind = client_bind, - server = server_netns, - server_bind = server_bind, test_type = perf_test, + generator = client_netns, + generator_bind = client_bind, + receiver = server_netns, + receiver_bind = server_bind, msg_size = self.params.perf_msg_size, duration = self.params.perf_duration, iterations = self.params.perf_iterations,
From: Ondrej Lichtner olichtne@redhat.com
There's now a better way to track the packages that should be installed: the find_packages function from the setuptools package, which is completely automatic.
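The relevant part of setup.py now looks roughly like this, trimmed to the packaging bits:

    from setuptools import setup, find_packages

    # packages no longer need to be listed by hand; find_packages() walks the
    # source tree and picks up all lnst.* subpackages automatically
    setup(name="lnst",
          packages=find_packages(),
          scripts=SCRIPTS,        # SCRIPTS and DATA_FILES are defined earlier
          data_files=DATA_FILES)  # in setup.py and are unchanged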
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- setup.py | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/setup.py b/setup.py index 872d2c1..3a2c30e 100755 --- a/setup.py +++ b/setup.py @@ -22,7 +22,7 @@ import re import gzip import os from time import gmtime, strftime -from distutils.core import setup +from setuptools import setup, find_packages from lnst.Common.Version import lnst_version
def process_template(template_path, values): @@ -104,8 +104,6 @@ For detailed description of the architecture of LNST please refer to project website https://fedorahosted.org/lnst. """
-PACKAGES = ["lnst", "lnst.Common", "lnst.Controller", "lnst.Slave", - "lnst.RecipeCommon", "lnst.Recipes", "lnst.Devices", "lnst.Tests" ] SCRIPTS = ["lnst-ctl", "lnst-slave", "lnst-pool-wizard"]
RECIPE_FILES = [] @@ -192,6 +190,6 @@ setup(name="lnst", long_description=LONG_DESC, platforms=["linux"], license=["GNU GPLv2"], - packages=PACKAGES, + packages=find_packages(), scripts=SCRIPTS, data_files=DATA_FILES)
From: Ondrej Lichtner olichtne@redhat.com
This is a very basic proxy class for the libvirt library. It's intended to be instantiated dynamically on the slave to allow manipulating its guests during test execution.
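A sketch of the intended usage from a recipe, mirroring how the OvS_DPDK_PvP recipe later in the series drives it; "host" is the slave running libvirt and the guest name is a placeholder:

    from lnst.RecipeCommon.LibvirtControl import LibvirtControl

    # instantiate the proxy on the slave via the dynamic class synchronization
    virtctl = host.init_class(LibvirtControl)

    # shut the guest down and wait until libvirt reports it as stopped
    virtctl.vm_shutdown("guest1")
    self.ctl.wait_for_condition(lambda: not virtctl.is_vm_running("guest1"))

    # grab the domain XML, optionally modify it, and boot a transient guest
    xml = virtctl.vm_XMLDesc("guest1")
    virtctl.createXML(xml)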
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/RecipeCommon/LibvirtControl.py | 41 +++++++++++++++++++++++++++++ 1 file changed, 41 insertions(+) create mode 100644 lnst/RecipeCommon/LibvirtControl.py
diff --git a/lnst/RecipeCommon/LibvirtControl.py b/lnst/RecipeCommon/LibvirtControl.py new file mode 100644 index 0000000..09ded16 --- /dev/null +++ b/lnst/RecipeCommon/LibvirtControl.py @@ -0,0 +1,41 @@ +import logging +import libvirt +from libvirt import libvirtError +from lnst.Common.LnstError import LnstError +from lnst.Common.Logs import log_exc_traceback + +class LibvirtControl(object): + def __init__(self): + self._libvirt_conn = libvirt.open(None) + + def createXML(self, xml, flags=0): + try: + self._libvirt_conn.createXML(xml, flags) + except: + log_exc_traceback() + + def vm_start(self, name): + vm = self._libvirt_conn.lookupByName(name) + vm.create() + + def vm_shutdown(self, name): + vm = self._libvirt_conn.lookupByName(name) + try: + vm.shutdown() + except: + log_exc_traceback() + + def vm_destroy(self, name): + vm = self._libvirt_conn.lookupByName(name) + try: + vm.destroy() + except: + log_exc_traceback() + + def vm_XMLDesc(self, name): + vm = self._libvirt_conn.lookupByName(name) + return vm.XMLDesc() + + def is_vm_running(self, name): + vm = self._libvirt_conn.lookupByName(name) + return vm.isActive()
From: Ondrej Lichtner olichtne@redhat.com
This module implements two test module classes - TRexClient and TRexServer. TRexServer can be used to start a generator server process in the background. The client can then be used to control that server process to generate traffic and measure its performance.
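A rough sketch of how the two modules are meant to be combined, mirroring the TRexMeasurementTool patch below; the directory, the flows list and the timings are placeholders:

    import signal
    from lnst.Tests.TRex import TRexServer, TRexClient

    # start the TRex server process in the background on the generator host
    server_job = generator.run(
        TRexServer(trex_dir="/opt/trex", flows=flows, cores=["2", "3", "4"]),
        bg=True)

    # drive the server and collect per-second statistics for the duration
    client_job = generator.run(
        TRexClient(trex_dir="/opt/trex", ports=range(len(flows)), flows=flows,
                   duration=60, msg_size=64))

    # the server module waits for an interrupt, so stop it with SIGINT
    server_job.kill(signal.SIGINT)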
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/TRex.py | 159 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 159 insertions(+) create mode 100644 lnst/Tests/TRex.py
diff --git a/lnst/Tests/TRex.py b/lnst/Tests/TRex.py new file mode 100644 index 0000000..aa65231 --- /dev/null +++ b/lnst/Tests/TRex.py @@ -0,0 +1,159 @@ +import os +import sys +import yaml +import time +import logging +import subprocess +import tempfile +import signal +from lnst.Common.Parameters import Param, StrParam, IntParam, FloatParam +from lnst.Common.Parameters import IpParam, DeviceOrIpParam +from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError + +class TRexCommon(BaseTestModule): + trex_dir = StrParam(mandatory=True) + +class TRexClient(TRexCommon): + #make Int List + ports = Param(mandatory=True) + + flows = Param(mandatory=True) + + duration = IntParam(mandatory=True) + warmup_time = IntParam(default=5) + + msg_size = IntParam(default=64) + + server_hostname = StrParam(default="localhost") + trex_stl_path = 'trex_client/interactive' + + def runtime_estimate(self): + _duration_overhead = 5 + return (self.params.duration + + self.params.warmup_time + + _duration_overhead) + + def run(self): + sys.path.insert(0, os.path.join(self.params.trex_dir, + self.trex_stl_path)) + + from trex.stl import api as trex_api + + try: + return self._run(trex_api) + except trex_api.TRexError as e: + #TRex errors aren't picklable so we wrap them like this + raise TestModuleError(str(e)) + + def _run(self, trex_api): + client = trex_api.STLClient(server=self.params.server_hostname) + client.connect() + + self._res_data = {} + + try: + client.acquire(ports=self.params.ports, force=True) + except: + self._res_data["msg"] = "Failed to acquire ports" + return False + + try: + client.reset(ports=self.params.ports) + except: + client.release(ports=self.params.ports) + self._res_data["msg"] = "Failed to reset ports" + return False + + for i, (src, dst) in enumerate(self.params.flows): + L2 = trex_api.Ether( + src=str(src["mac_addr"]), + dst=str(dst["mac_addr"])) + L3 = trex_api.IP( + src=str(src["ip_addr"]), + dst=str(dst["ip_addr"])) + L4 = trex_api.UDP() + base_pkt = L2/L3/L4 + + pad = max(0, self.params.msg_size - len(base_pkt)) * 'x' + packet = base_pkt/pad + + trex_packet = trex_api.STLPktBuilder(pkt=packet) + + trex_stream = trex_api.STLStream( + packet=trex_packet, + mode=trex_api.STLTXCont(percentage=100)) + + port = self.params.ports[i] + client.add_streams(trex_stream, ports=[port]) + + client.set_port_attr(ports=self.params.ports, promiscuous=True) + + + measurements = [] + + client.start(ports=self.params.ports) + + time.sleep(self.params.warmup_time) + + client.clear_stats(ports=self.params.ports) + self._res_data["start_time"] = time.time() + + for i in range(self.params.duration): + time.sleep(1) + measurements.append(dict(timestamp=time.time(), + measurement=client.get_stats( + ports=self.params.ports, + sync_now=True))) + + client.stop(ports=self.params.ports) + client.release(ports=self.params.ports) + + self._res_data["data"] = measurements + return True + +class TRexServer(TRexCommon): + #TODO make ListParam + flows = Param(mandatory=True) + + cores = Param(mandatory=True) + + def run(self): + trex_server_conf = [{'port_limit': len(self.params.flows), + 'version': 2, + 'interfaces': [], + 'platform': { + 'dual_if': [{ + 'socket': 0, + 'threads': self.params.cores}], + 'latency_thread_id': 0, + 'master_thread_id': 1}, + 'port_info': []}] + + for src, dst in self.params.flows: + short_pci_addr = src["pci_addr"].partition(':')[2] + trex_server_conf[0]['interfaces'].append(short_pci_addr) + trex_server_conf[0]['port_info'].append( + {'src_mac': str(src["mac_addr"]), + 
'dest_mac': str(dst["mac_addr"])}) + + with tempfile.NamedTemporaryFile() as cfg_file: + yaml.dump(trex_server_conf, cfg_file) + cfg_file.flush() + os.fsync(cfg_file.file.fileno()) + + os.chdir(self.params.trex_dir) + server = subprocess.Popen( + [os.path.join(self.params.trex_dir, "t-rex-64"), + "--cfg", cfg_file.name, "-i"], + stdin=open('/dev/null'), stdout=open('/dev/null','w'), + stderr=subprocess.PIPE, close_fds=True) + + self.wait_for_interrupt() + + server.send_signal(signal.SIGINT) + out, err = server.communicate() + if err: + logging.error(err) + return False + + return True
From: Ondrej Lichtner olichtne@redhat.com
The TestPMD test module can be used to start the testpmd application that ships with DPDK as a background process.
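A usage sketch, matching how the OvS_DPDK_PvP recipe below starts it on the guest; the coremask and the nic lists are placeholders:

    import signal
    from lnst.Tests.TestPMD import TestPMD

    testpmd_job = guest.run(
        TestPMD(coremask="0xe",
                nics=[nic.bus_info for nic in guest_nics],
                peer_macs=[nic.hwaddr for nic in generator_nics]),
        bg=True)

    # ... run the traffic measurement here ...

    # interrupting the background job makes the module send "stop"/"quit" to
    # testpmd and return its stdout/stderr in the result data
    testpmd_job.kill(signal.SIGINT)
    testpmd_job.wait()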
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/TestPMD.py | 47 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 47 insertions(+) create mode 100644 lnst/Tests/TestPMD.py
diff --git a/lnst/Tests/TestPMD.py b/lnst/Tests/TestPMD.py new file mode 100644 index 0000000..dd26c88 --- /dev/null +++ b/lnst/Tests/TestPMD.py @@ -0,0 +1,47 @@ +import logging +import subprocess +import signal +from lnst.Common.Parameters import Param, StrParam, IntParam, FloatParam +from lnst.Common.Parameters import IpParam, DeviceOrIpParam +from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError + +class TestPMD(BaseTestModule): + coremask = StrParam(mandatory=True) + + #TODO make ListParam + nics = Param(mandatory=True) + peer_macs = Param(mandatory=True) + + def format_command(self): + testpmd_args = ["testpmd", + "-c", self.params.coremask, + "-n", "4", "--socket-mem", "1024,0"] + for nic in self.params.nics: + testpmd_args.extend(["-w", nic]) + + testpmd_args.extend(["--", "-i", "--forward-mode", "mac"]) + + for i, mac in enumerate(self.params.peer_macs): + testpmd_args.extend(["--eth-peer", "{},{}".format(i, mac)]) + + return " ".join(testpmd_args) + + + def run(self): + cmd = self.format_command() + process = subprocess.Popen(cmd, shell=True, + stdin=subprocess.PIPE, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + close_fds=True) + + process.stdin.write("start tx_first\n") + + self.wait_for_interrupt() + + process.stdin.write("stop\n") + process.stdin.write("quit\n") + + out, err = process.communicate() + self._res_data = {"stdout": out, "stderr": err} + return True
From: Ondrej Lichtner olichtne@redhat.com
Implements the PerfMeasurementTool API using the TRexServer and TRexClient test modules for test execution and PerfConf for the configuration of the performance test.
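For illustration, how the tool plugs into the PerfConf/PerfTestAndEvaluate machinery, loosely following the recipe below; the hosts, nic lists and values are placeholders:

    from lnst.RecipeCommon.Perf import PerfConf
    from lnst.RecipeCommon.TRexMeasurementTool import TRexMeasurementTool

    perf_conf = PerfConf(perf_tool=TRexMeasurementTool("/opt/trex"),
                         test_type="pvp_loop_rate",
                         generator=generator_host,
                         generator_bind=generator_nics,
                         receiver=dut_host,
                         receiver_bind=dut_nics,
                         msg_size=64, duration=60, iterations=5, streams=1)

    # inside a PerfTestAndEvaluate based recipe:
    result = self.perf_test(perf_conf)
    self.perf_evaluate_and_report(perf_conf, result, baseline=None)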
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/RecipeCommon/TRexMeasurementTool.py | 87 ++++++++++++++++++++++++ 1 file changed, 87 insertions(+) create mode 100644 lnst/RecipeCommon/TRexMeasurementTool.py
diff --git a/lnst/RecipeCommon/TRexMeasurementTool.py b/lnst/RecipeCommon/TRexMeasurementTool.py new file mode 100644 index 0000000..96abdc2 --- /dev/null +++ b/lnst/RecipeCommon/TRexMeasurementTool.py @@ -0,0 +1,87 @@ +import time +import signal +import logging +from lnst.Common.IpAddress import ipaddress +from lnst.Controller.Recipe import RecipeError +from lnst.Controller.RecipeResults import ResultLevel +from lnst.RecipeCommon.Perf import PerfConf, PerfMeasurementTool +from lnst.RecipeCommon.PerfResult import PerfInterval, StreamPerf +from lnst.RecipeCommon.PerfResult import MultiStreamPerf + +from lnst.Tests.TRex import TRexServer, TRexClient + +class TRexMeasurementTool(PerfMeasurementTool): + def __init__(self, trex_dir): + self._trex_dir = trex_dir + + def perf_measure(self, perf_conf): + generator = perf_conf.generator + + flows = [] + for src, dst in zip(perf_conf.generator_bind, perf_conf.receiver_bind): + flows.append(( + dict(mac_addr=src.hwaddr, + pci_addr=src.bus_info, + ip_addr=src.ips[0]), + dict(mac_addr=dst.hwaddr, + pci_addr=dst.bus_info, + ip_addr=dst.ips[0]))) + + try: + server = generator.run( + TRexServer( + trex_dir=self._trex_dir, + flows=flows, + cores=["2", "3", "4"]), + bg=True) + + #wait for server to start up + #TODO better options?? + time.sleep(5) + + test = TRexClient( + trex_dir=self._trex_dir, + ports=range(len(flows)), + flows=flows, + duration=perf_conf.duration, + msg_size=perf_conf.msg_size) + client = generator.run( + test, + timeout=test.runtime_estimate()) + finally: + server.kill(signal.SIGINT) + if not server.wait(5): + server.kill(signal.SIGKILL) + + client_result = None + if client.passed: + tx_result = MultiStreamPerf() + rx_result = MultiStreamPerf() + for port in range(len(flows)): + tx_stream = StreamPerf() + rx_stream = StreamPerf() + + prev_time = client.result["start_time"] + prev_tx_val = 0 + prev_rx_val = 0 + for i in client.result["data"]: + time_delta = i["timestamp"] - prev_time + tx_delta = i["measurement"][port]["opackets"] - prev_tx_val + rx_delta = i["measurement"][port]["ipackets"] - prev_rx_val + tx_stream.append(PerfInterval( + tx_delta, + time_delta, + "pkts")) + rx_stream.append(PerfInterval( + rx_delta, + time_delta, + "pkts")) + + prev_time = i["timestamp"] + prev_tx_val = i["measurement"][port]["opackets"] + prev_rx_val = i["measurement"][port]["ipackets"] + + tx_result.append(tx_stream) + rx_result.append(rx_stream) + + return tx_result, rx_result
From: Ondrej Lichtner olichtne@redhat.com
This recipe is ported from the old recipes/regression_tests/phase3/ovs_dpdk_pvp.xml recipe. The recipe is also redesigned and refactored to better fit the design of our ENRT recipes. I expect the recipe will be further improved later, but for now it should be functional and a good base for further refactoring when needed.
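A rough sketch of running the recipe from a controller script; the parameter values are illustrative and the Controller.run() entry point is assumed from the existing Controller API:

    from lnst.Controller import Controller
    from lnst.Recipes.ENRT.OvS_DPDK_PvP import OvSDPDKPvPRecipe

    ctl = Controller()
    recipe = OvSDPDKPvPRecipe(
        driver="ixgbe",            # resolved by the RecipeParam requirements
        trex_dir="/opt/trex",
        guest_name="guest1",
        guest_cpus="1,2,3",
        guest_emulatorpin_cpu="4",
        guest_dpdk_cores="0xe",
        host1_dpdk_cores="0xe",
        host2_pmd_cores="0x30",
        host2_l_cores="0xc0")
    ctl.run(recipe)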
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Recipes/ENRT/OvS_DPDK_PvP.py | 399 ++++++++++++++++++++++++++++++ 1 file changed, 399 insertions(+) create mode 100644 lnst/Recipes/ENRT/OvS_DPDK_PvP.py
diff --git a/lnst/Recipes/ENRT/OvS_DPDK_PvP.py b/lnst/Recipes/ENRT/OvS_DPDK_PvP.py new file mode 100644 index 0000000..1f963f6 --- /dev/null +++ b/lnst/Recipes/ENRT/OvS_DPDK_PvP.py @@ -0,0 +1,399 @@ +import logging +import time +import signal +import xml.etree.ElementTree as ET + +from lnst.Controller import HostReq, DeviceReq, RecipeParam +from lnst.Common.Logs import log_exc_traceback +from lnst.Common.Parameters import Param, IntParam, StrParam, BoolParam +from lnst.Common.IpAddress import ipaddress +from lnst.RecipeCommon.Ping import PingTestAndEvaluate, PingConf +from lnst.Tests import Ping +from lnst.Tests.TestPMD import TestPMD +from lnst.RecipeCommon.Perf import PerfTestAndEvaluate, PerfConf +from lnst.RecipeCommon.TRexMeasurementTool import TRexMeasurementTool + +from lnst.RecipeCommon.LibvirtControl import LibvirtControl + +from lnst.Recipes.ENRT.BaseEnrtRecipe import EnrtConfiguration + +class PvPTestConf(object): + class HostConf(object): + def __init__(self): + self.host = None + self.nics = [] + + class DUTConf(HostConf): + def __init__(self): + super(PvPTestConf.DUTConf, self).__init__() + self.trex_path = "" + self.dpdk_ports = None + self.vm_ports = None + + class GuestConf(HostConf): + def __init__(self): + super(PvPTestConf.GuestConf, self).__init__() + self.name = "" + self.virtctl = None + self.testpmd = None + self.vhost_nics = None + + def __init__(self): + self.generator = self.HostConf() + self.dut = self.DUTConf() + self.guest = self.GuestConf() + +class OvSDPDKPvPRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): + m1 = HostReq() + m1.eth0 = DeviceReq(label="net1", driver=RecipeParam("driver")) + m1.eth1 = DeviceReq(label="net1", driver=RecipeParam("driver")) + + m2 = HostReq(has_guest="True") + m2.eth0 = DeviceReq(label="net1", driver=RecipeParam("driver")) + m2.eth1 = DeviceReq(label="net1", driver=RecipeParam("driver")) + + driver = StrParam(mandatory=True) + + trex_dir = StrParam(mandatory=True) + + guest_name = StrParam(mandatory=True) + guest_cpus = StrParam(mandatory=True) + guest_emulatorpin_cpu = StrParam(mandatory=True) + guest_dpdk_cores = StrParam(mandatory=True) + guest_mem_size = IntParam(default=16777216) + + host1_dpdk_cores = StrParam(mandatory=True) + host2_pmd_cores = StrParam(mandatory=True) + host2_l_cores = StrParam(mandatory=True) + nr_hugepages = IntParam(default=2048) + socket_mem = IntParam(default=2048) + + dev_intr_cpu = IntParam(default=0) + + + perf_duration = IntParam(default=60) + perf_iterations = IntParam(default=5) + perf_msg_size = IntParam(default=64) + + #doesn't do anything for now... 
+ perf_streams = IntParam(default=1) + + perf_usr_comment = StrParam(default="") + + def test(self): + self.check_dependencies() + self.warmup() + self.pvp_test() + + def check_dependencies(self): + pass + + def warmup(self): + try: + self.warmup_configuration() + self.warmup_pings() + finally: + self.warmup_deconfiguration() + + def warmup_configuration(self): + m1, m2 = self.matched.m1, self.matched.m2 + m1.eth0.ip_add(ipaddress("192.168.1.1/24")) + m1.eth1.ip_add(ipaddress("192.168.1.3/24")) + + m2.eth0.ip_add(ipaddress("192.168.1.2/24")) + m2.eth1.ip_add(ipaddress("192.168.1.4/24")) + + def warmup_pings(self): + m1, m2 = self.matched.m1, self.matched.m2 + + jobs = [] + jobs.append(m1.run(Ping(interface=m1.eth0.ips[0], dst=m2.eth0.ips[0]), bg=True)) + jobs.append(m1.run(Ping(interface=m1.eth1.ips[0], dst=m2.eth1.ips[0]), bg=True)) + jobs.append(m2.run(Ping(interface=m2.eth0.ips[0], dst=m1.eth0.ips[0]), bg=True)) + jobs.append(m2.run(Ping(interface=m2.eth1.ips[0], dst=m1.eth1.ips[0]), bg=True)) + + for job in jobs: + job.wait() + #TODO eval + + def warmup_deconfiguration(self): + m1, m2 = self.matched.m1, self.matched.m2 + m1.eth0.ip_flush() + m1.eth1.ip_flush() + + m2.eth0.ip_flush() + m2.eth1.ip_flush() + + def pvp_test(self): + try: + config = PvPTestConf() + self.test_wide_configuration(config) + + perf_config = self.generate_perf_config(config) + result = self.perf_test(perf_config) + self.perf_evaluate_and_report(perf_config, result, baseline=None) + finally: + self.test_wide_deconfiguration(config) + + def test_wide_configuration(self, config): + config.generator.host = self.matched.m1 + config.generator.nics.append(self.matched.m1.eth0) + config.generator.nics.append(self.matched.m1.eth1) + self.matched.m1.eth0.ip_add(ipaddress("192.168.1.1/24")) + self.matched.m1.eth1.ip_add(ipaddress("192.168.1.3/24")) + self.base_dpdk_configuration(config.generator) + + config.dut.host = self.matched.m2 + config.dut.nics.append(self.matched.m2.eth0) + config.dut.nics.append(self.matched.m2.eth1) + self.matched.m2.eth0.ip_add(ipaddress("192.168.1.2/24")) + self.matched.m2.eth1.ip_add(ipaddress("192.168.1.4/24")) + self.base_dpdk_configuration(config.dut) + self.ovs_dpdk_bridge_configuration(config.dut) + + self.init_guest_virtctl(config.dut, config.guest) + self.shutdown_guest(config.guest) + self.configure_guest_xml(config.dut, config.guest) + + self.ovs_dpdk_bridge_vm_configuration(config.dut, config.guest) + self.ovs_dpdk_bridge_flow_configuration(config.dut) + + guest = self.create_guest(config.dut, config.guest) + self.guest_vfio_modprobe(config.guest) + self.base_dpdk_configuration(config.guest) + + config.guest.testpmd = guest.run( + TestPMD( + coremask=self.params.guest_dpdk_cores, + nics=[nic.bus_info for nic in config.guest.nics], + peer_macs=[nic.hwaddr for nic in config.generator.nics]), + bg=True) + + time.sleep(5) + return config + + def generate_perf_config(self, config): + conf = PerfConf( + perf_tool = TRexMeasurementTool(self.params.trex_dir), + test_type = "pvp_loop_rate", + generator = config.generator.host, + generator_bind = config.generator.nics, + receiver = config.dut.host, + receiver_bind = config.dut.nics, + msg_size = self.params.perf_msg_size, + duration = self.params.perf_duration, + iterations = self.params.perf_iterations, + streams = self.params.perf_streams) + return conf + + def test_wide_deconfiguration(self, config): + try: + self.guest_deconfigure(config.guest) + except: + log_exc_traceback() + + try: + config.dut.host.run("ovs-ofctl del-flows br0") + for 
vm_port, port_id in config.dut.vm_ports: + config.dut.host.run("ovs-vsctl del-port br0 {}".format(vm_port)) + for dpdk_port, port_id in config.dut.dpdk_ports: + config.dut.host.run("ovs-vsctl del-port br0 {}".format(dpdk_port)) + config.dut.host.run("ovs-vsctl del-br br0") + config.dut.host.run("service openvswitch restart") + + self.base_dpdk_deconfiguration(config.dut) + except: + log_exc_traceback() + + try: + #returning the guest to the original running state + self.shutdown_guest(config.guest) + config.guest.virtctl.vm_start(config.guest.name) + except: + log_exc_traceback() + + try: + for nic in config.generator.nics: + config.generator.host.run( + "driverctl unset-override {}".format(nic.bus_info)) + + config.generator.host.run("service irqbalance start") + except: + log_exc_traceback() + + def base_dpdk_configuration(self, dpdk_host_cfg): + host = dpdk_host_cfg.host + + for nic in dpdk_host_cfg.nics: + nic.enable_readonly_cache() + + #TODO service should be a host method + host.run("service irqbalance stop") + + # this will pin all irqs to cpu #0 + self._pin_irqs(host, 0) + host.run("echo -n {} /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages" + .format(self.params.nr_hugepages)) + + host.run("modprobe vfio-pci") + for nic in dpdk_host_cfg.nics: + host.run("driverctl set-override {} vfio-pci".format(nic.bus_info)) + + def base_dpdk_deconfiguration(self, dpdk_host_cfg): + host = dpdk_host_cfg.host + #TODO service should be a host method + host.run("service irqbalance start") + for nic in dpdk_host_cfg.nics: + job = host.run("driverctl unset-override {}".format(nic.bus_info), + bg=True) + if isinstance(dpdk_host_cfg, PvPTestConf.DUTConf): + host.run("systemctl restart openvswitch") + + if not job.wait(10): + job.kill() + + def ovs_dpdk_bridge_configuration(self, host_conf): + host = host_conf.host + host.run("systemctl enable openvswitch") + host.run("systemctl start openvswitch") + host.run("ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true") + host.run("ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem={}" + .format(self.params.socket_mem)) + host.run("ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask={}" + .format(self.params.host2_pmd_cores)) + host.run("ovs-vsctl --no-wait set Open_vSwitch . 
other_config:dpdk-lcore-mask={}" + .format(self.params.host2_l_cores)) + host.run("systemctl restart openvswitch") + + host.run("systemctl restart openvswitch") + + #TODO use an actual OvS Device object + #TODO config.dut.nics.append(CachedRemoteDevice(m2.ovs)) + host.run("ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev") + + host_conf.dpdk_ports = [] + for i, nic in enumerate(host_conf.nics): + host.run("ovs-vsctl add-port br0 dpdk{i} -- " + "set interface dpdk{i} type=dpdk ofport_request=1{i} " + "options:dpdk-devargs={pci_addr}".format( + i=i, pci_addr=nic.bus_info)) + host_conf.dpdk_ports.append( + ("dpdk{}".format(i), "1{}".format(i))) + + def init_guest_virtctl(self, host_conf, guest_conf): + host = host_conf.host + + guest_conf.name = self.params.guest_name + guest_conf.virtctl = host.init_class(LibvirtControl) + + def shutdown_guest(self, guest_conf): + virtctl = guest_conf.virtctl + virtctl.vm_shutdown(guest_conf.name) + self.ctl.wait_for_condition(lambda: + not virtctl.is_vm_running(guest_conf.name)) + + def configure_guest_xml(self, host_conf, guest_conf): + virtctl = guest_conf.virtctl + guest_xml = ET.fromstring(virtctl.vm_XMLDesc(guest_conf.name)) + guest_conf.libvirt_xml = guest_xml + + guest_conf.vhost_nics = [] + vhosts = guest_conf.vhost_nics + for i, nic in enumerate(host_conf.nics): + path = self._xml_add_vhostuser_dev( + guest_xml, "vhost_nic{i}".format(i=i), nic.hwaddr) + vhosts.append((path, nic.hwaddr)) + + cpu = guest_xml.find("cpu") + numa = ET.SubElement(cpu, 'numa') + ET.SubElement(numa, 'cell', id='0', cpus='0', + memory=str(self.params.guest_mem_size), unit='KiB', + memAccess='shared') + + cputune = ET.SubElement(guest_xml, "cputune") + for i, cpu_id in enumerate(self.params.guest_cpus.split(',')): + ET.SubElement(cputune, "vcpupin", vcpu=str(i), cpuset=str(cpu_id)) + + ET.SubElement(cputune, + "emulatorpin", + cpuset=str(self.params.guest_emulatorpin_cpu)) + + return guest_xml + + def ovs_dpdk_bridge_vm_configuration(self, host_conf, guest_conf): + host = host_conf.host + host_conf.vm_ports = [] + for i, nic in enumerate(guest_conf.vhost_nics): + host.run( + "ovs-vsctl add-port br0 guest_nic{i} -- " + "set interface guest_nic{i} type=dpdkvhostuserclient " + "ofport_request=2{i} " + "options:vhost-server-path={path}".format( + i=i, path=nic[0])) + host_conf.vm_ports.append( + ("guest_nic{}".format(i), "2{}".format(i))) + + def ovs_dpdk_bridge_flow_configuration(self, host_conf): + host = host_conf.host + host.run("ovs-ofctl del-flows br0") + for dpdk_port, vm_port in zip(host_conf.dpdk_ports, host_conf.vm_ports): + host.run("ovs-ofctl add-flow br0 in_port={},action={}" + .format(dpdk_port[1], vm_port[1])) + host.run("ovs-ofctl add-flow br0 in_port={},action={}" + .format(vm_port[1], dpdk_port[1])) + + def create_guest(self, host_conf, guest_conf): + host = host_conf.host + virtctl = guest_conf.virtctl + guest_xml = guest_conf.libvirt_xml + + virtctl.createXML(ET.tostring(guest_xml)) + + guest_ip_job = host.run("gethostip -d {}".format(guest_conf.name)) + guest_ip = guest_ip_job.stdout.strip() + + guest = self.ctl.connect_host(guest_ip, timeout=60) + guest_conf.host = guest + + for i, nic in enumerate(guest_conf.vhost_nics): + guest.map_device("eth{}".format(i), dict(hwaddr=nic[1])) + device = getattr(guest, "eth{}".format(i)) + guest_conf.nics.append(device) + return guest + + def guest_vfio_modprobe(self, guest_conf): + guest = guest_conf.host + guest.run("modprobe -r vfio_iommu_type1") + guest.run("modprobe -r vfio") + guest.run("modprobe vfio 
enable_unsafe_noiommu_mode=1") + guest.run("modprobe vfio-pci") + + def guest_deconfigure(self, guest_conf): + guest = guest_conf.host + if not guest: + return + + testpmd = guest_conf.testpmd + if testpmd and not testpmd.finished: + testpmd.kill(signal.SIGINT) + testpmd.wait() + + self.base_dpdk_deconfiguration(guest_conf) + + def _xml_add_vhostuser_dev(self, guest_xml, name, mac_addr): + vhost_server_path = "/tmp/{}".format(name) + devices = guest_xml.find("devices") + + interface = ET.SubElement(devices, 'interface', type='vhostuser') + ET.SubElement(interface, 'mac', address=str(mac_addr)) + ET.SubElement(interface, 'model', type='virtio') + ET.SubElement(interface, 'source', type='unix', + path=vhost_server_path, mode='server') + return vhost_server_path + + def _pin_irqs(self, host, cpu): + mask = 1 << cpu + host.run("MASK={:x}; " + "for i in `ls -d /proc/irq/[0-9]*` ; " + "do echo $MASK > ${{i}}/smp_affinity ; " + "done".format(cpu))
From: Ondrej Lichtner olichtne@redhat.com
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Recipes/ENRT/BaseEnrtRecipe.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py index ea9459e..bf08735 100644 --- a/lnst/Recipes/ENRT/BaseEnrtRecipe.py +++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py @@ -165,9 +165,9 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): server_bind = server_nic.ips_filter(family=family)[0]
yield PingConf(client = client_netns, - client_bind = client_bind, - destination = server_netns, - destination_address = server_bind) + client_bind = client_bind, + destination = server_netns, + destination_address = server_bind)
def generate_perf_configurations(self, main_config, sub_config): client_nic = main_config.endpoint1
From: Ondrej Lichtner olichtne@redhat.com
The default for all devices should be that bulk mode is disabled; however, SoftDevices need it enabled while they're being created. After the creation is done it should be disabled again.
Thanks to Christos who noticed the issue.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/InterfaceManager.py | 1 + 1 file changed, 1 insertion(+)
diff --git a/lnst/Slave/InterfaceManager.py b/lnst/Slave/InterfaceManager.py index 3801bdb..b5d0091 100644 --- a/lnst/Slave/InterfaceManager.py +++ b/lnst/Slave/InterfaceManager.py @@ -244,6 +244,7 @@ class InterfaceManager(object): except KeyError as e: raise DeviceConfigError("%s is a mandatory argument" % e) device._create() + device._bulk_enabled = False
devs = scan_netdevs() for dev in devs: