[DISCUSSION] python version requirement
by Ondrej Lichtner
Hi all,
since we've moved to python3, which is actively developed and whose
versions move through various long term/short term support cycles, we
should also adapt LNST to this cycle and regularly revisit the minimal
version of python that LNST requires.
TL;DR: The main questions I'm asking are:
* how do we _implement_ a python version requirement?
* how do we _upgrade_/_migrate_ in the future?
* how do we _document_ a python version requirement?
* which version do we want to use _now_?
More context:
I think at the moment we have a "soft" requirement for python3.6. Soft
because:
* we probably haven't tested on anything older
* it isn't explicitly configured/documented anywhere
At the same time, there are now at least two reasons to start thinking
about moving to python3.8:
* I remember Perry asking about an f-string feature introduced in 3.8
* while working with Adrian on the TRex refactoring I started thinking
about a feature for the lnst.Tests package that I've had in mind for a
while, which requires a python3.7 feature
* python3.8 is the current version on Fedora 32 and is available in
RHEL8 (via dnf install python38), while python3.7 was skipped
The lnst.Tests feature I'm thinking of is "lazy" and "dynamic" loading
of BaseTestModule-derived modules. For example, at the moment, if a
Recipe imports any module from lnst.Tests (e.g. lnst.Tests.Ping), the
entire package is parsed and "loaded", which means that the python
environment will also parse and load lnst.Tests.TRex. As a result, a
basic hello world recipe that simply calls Ping will indirectly pull in
the load-time dependencies of TRex.
The "lazy" and "dynamic" loading of test modules would ensure that when
a recipe calls:
from lnst.Tests import Ping
only the Ping module is parsed, loaded and imported, and nothing else.
The dynamic aspect means we could also extend the set of test modules
exported by the lnst.Tests package via the lnst-ctl config file, for
example with user/tester-implemented test modules that are not tracked
in the main lnst repository.
I wrote a rough patch to experiment with this:
---
diff --git a/lnst/Tests/__init__.py b/lnst/Tests/__init__.py
index f7c6c90..a39b6f4 100644
--- a/lnst/Tests/__init__.py
+++ b/lnst/Tests/__init__.py
@@ -12,8 +12,26 @@
olichtne(a)redhat.com (Ondrej Lichtner)
"""
-from lnst.Tests.Ping import Ping
-from lnst.Tests.PacketAssert import PacketAssert
-from lnst.Tests.Iperf import IperfClient, IperfServer
+# from lnst.Tests.Ping import Ping
+# from lnst.Tests.PacketAssert import PacketAssert
+# from lnst.Tests.Iperf import IperfClient, IperfServer
+import importlib
+
+lazy_load_modules = {
+ "Ping": "lnst.Tests.Ping",
+ "PacketAssert": "lnst.Tests.PacketAssert",
+ "IperfClient": "lnst.Tests.Iperf",
+ "IperfServer": "lnst.Tests.Iperf",
+}
+
+
+def __getattr__(name):
+ if name not in lazy_load_modules:
+ raise ImportError("Cannot import {}".format(name))
+ mod = importlib.import_module(lazy_load_modules[name])
+ globals()[name] = getattr(mod, name)
+ return globals()[name]
+
+
+# #TODO add support for test classes from lnst-ctl.conf
-#TODO add support for test classes from lnst-ctl.conf
---
However, this requires the ability to define __getattr__ for a module,
which was introduced as a python3.7 feature via PEP562 [0].
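As an aside, a minimal self-contained sketch of the PEP562 mechanism
(outside of LNST, with made up package/attribute names) could look like
the following; note that PEP562 recommends raising AttributeError for
unknown names, which python turns into an ImportError for
"from package import name" statements:

# mypkg/__init__.py - hypothetical package using PEP562 lazy imports
import importlib

_lazy_exports = {
    # attribute name -> module that actually defines it
    "Ping": "mypkg.ping",
    "Stats": "mypkg.stats",
}

def __getattr__(name):
    # called by python3.7+ only when the normal attribute lookup fails
    if name not in _lazy_exports:
        raise AttributeError(
            "module {!r} has no attribute {!r}".format(__name__, name)
        )
    module = importlib.import_module(_lazy_exports[name])
    attr = getattr(module, name)
    globals()[name] = attr  # cache it so __getattr__ runs only once per name
    return attr

def __dir__():
    # keep dir() and tab completion aware of the lazily exported names
    return sorted(list(globals()) + list(_lazy_exports))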
-Ondrej
[0] https://www.python.org/dev/peps/pep-0562/
2 years, 5 months
[PATCH] Tests.PacketAssert: continuously read tcpdump stdout
by olichtne@redhat.com
From: Ondrej Lichtner <olichtne(a)redhat.com>
When PacketAssert is used for longer or larger streams, the pipe buffer
used for tcpdump's stdout can fill up completely and tcpdump gets stuck
in the write syscall. When the SIGINT then comes from the recipe, this
syscall is interrupted and tcpdump quits with:
tcpdump: Unable to write output: Interrupted system call
The fix in this patch is in two parts:
* use the '-U' argument, which makes the tcpdump output packet-buffered -
fflush is called after each packet is analyzed and printed.
* rewrite the wait_for_interrupt usage with custom SIGINT handling that
continuously reads lines from stdout and stderr so that the buffer is
never full
Eventually, similar functionality may be useful for other Test modules
as well; at that point this should be refactored.
Signed-off-by: Ondrej Lichtner <olichtne(a)redhat.com>
---
lnst/Tests/PacketAssert.py | 32 ++++++++++++++++++++++++++------
1 file changed, 26 insertions(+), 6 deletions(-)
diff --git a/lnst/Tests/PacketAssert.py b/lnst/Tests/PacketAssert.py
index 5fd7abfd..9b623b0b 100644
--- a/lnst/Tests/PacketAssert.py
+++ b/lnst/Tests/PacketAssert.py
@@ -1,6 +1,8 @@
import re
import logging
import subprocess
+import signal
+from select import select
from lnst.Common.Parameters import (
StrParam,
ListParam,
@@ -10,10 +12,13 @@
)
from lnst.Devices.Device import Device
from lnst.Common.Utils import is_installed
-from lnst.Tests.BaseTestModule import BaseTestModule
+from lnst.Tests.BaseTestModule import BaseTestModule, InterruptException
from lnst.Common.LnstError import LnstError
+def interrupt_handler(signum, frame):
+ raise InterruptException()
+
class PacketAssert(BaseTestModule):
interface = DeviceParam(mandatory=True)
p_filter = StrParam(default="")
@@ -33,7 +38,7 @@ def _compose_cmd(self):
cmd += " -p"
iface = self.params.interface.name
filt = self.params.p_filter
- cmd += ' -nn -i %s "%s"' % (iface, filt)
+ cmd += ' -nn -U -i %s "%s"' % (iface, filt)
return cmd
@@ -63,12 +68,27 @@ def run(self):
close_fds=True,
)
+ stdout = bytearray()
+ stderr = bytearray()
try:
- self.wait_for_interrupt()
- except:
- raise LnstError("Could not handle interrupt properly.")
+ old_handler = signal.signal(signal.SIGINT, interrupt_handler)
+ # for longer or larger streams tcpdump can fill the entire io
+ # buffer for the stdout/stderr files, because of that we need to
+ # read the files continuously
+ while True:
+ rl, wl, xl = select([packet_assert_process.stdout, packet_assert_process.stderr], [], [])
+ if packet_assert_process.stdout in rl:
+ stdout += packet_assert_process.stdout.readline()
+ if packet_assert_process.stderr in rl:
+ stderr += packet_assert_process.stderr.readline()
+ except InterruptException:
+ pass
+ finally:
+ signal.signal(signal.SIGINT, old_handler)
- stdout, stderr = packet_assert_process.communicate()
+ packet_assert_process.wait()
+ stdout += packet_assert_process.stdout.read()
+ stderr += packet_assert_process.stderr.read()
stdout = stdout.decode()
stderr = stderr.decode()
--
2.31.0
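As a rough illustration of what the shared helper mentioned in the
commit message could eventually look like, here is a hedged sketch (the
drain_until_interrupt name and the locally defined InterruptException
are made up here; in LNST the exception would come from
lnst.Tests.BaseTestModule):

import signal
from select import select

class InterruptException(Exception):
    pass

def _raise_interrupt(signum, frame):
    raise InterruptException()

def drain_until_interrupt(proc):
    # continuously read proc.stdout/proc.stderr until a SIGINT arrives,
    # so the pipe buffers can never fill up and block the child process
    stdout, stderr = bytearray(), bytearray()
    old_handler = signal.signal(signal.SIGINT, _raise_interrupt)
    try:
        while True:
            readable, _, _ = select([proc.stdout, proc.stderr], [], [])
            if proc.stdout in readable:
                stdout += proc.stdout.readline()
            if proc.stderr in readable:
                stderr += proc.stderr.readline()
    except InterruptException:
        pass
    finally:
        signal.signal(signal.SIGINT, old_handler)
    # the child is expected to exit shortly after the interrupt (tcpdump
    # receives the SIGINT as well), otherwise wait() would block here
    proc.wait()
    stdout += proc.stdout.read()
    stderr += proc.stderr.read()
    return bytes(stdout), bytes(stderr)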
2 years, 5 months
[PATCH 1/3] installation.rst: Fix incorrect reference in "HelloWorld" documentation
by Mark Gray
Signed-off-by: Mark D. Gray <mark.d.gray(a)redhat.com>
---
docs/source/installation.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/source/installation.rst b/docs/source/installation.rst
index 38df777598b3..c19a2c964402 100644
--- a/docs/source/installation.rst
+++ b/docs/source/installation.rst
@@ -62,12 +62,12 @@ A minimal "hello world" example of an executable test script looks like this:
machine2.nic1 = DeviceReq(label="net1")
def test(self):
- self.matched.m1.nic1.ip_add("192.168.1.1/24")
- self.matched.m1.nic1.up()
- self.matched.m2.nic1.ip_add("192.168.1.2/24")
- self.matched.m2.nic1.up()
+ self.matched.machine1.nic1.ip_add("192.168.1.1/24")
+ self.matched.machine1.nic1.up()
+ self.matched.machine2.nic1.ip_add("192.168.1.2/24")
+ self.matched.machine2.nic1.up()
- self.matched.m1.run("ping 192.168.1.2")
+ self.matched.machine1.run("ping 192.168.1.2")
ctl = Controller()
recipe_instance = HelloWorldRecipe()
--
2.27.0
2 years, 5 months
[PATCH] NeperFlowMeasurement.py: Bumped up server timeout
by pgagne@redhat.com
From: Perry Gagne <pgagne(a)redhat.com>
I found that sometimes, when running things like tcp_crr, the server
needed a little more time to complete.
Signed-off-by: Perry Gagne <pgagne(a)redhat.com>
---
lnst/RecipeCommon/Perf/Measurements/NeperFlowMeasurement.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lnst/RecipeCommon/Perf/Measurements/NeperFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/NeperFlowMeasurement.py
index 236c2c9b..1f667fb5 100644
--- a/lnst/RecipeCommon/Perf/Measurements/NeperFlowMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/NeperFlowMeasurement.py
@@ -51,7 +51,7 @@ class NeperFlowMeasurement(BaseFlowMeasurement):
for flow in test_flows:
client_neper = flow.client_job.what
flow.client_job.wait(timeout=client_neper.runtime_estimate())
- flow.server_job.wait(timeout=5)
+ flow.server_job.wait(timeout=10)
finally:
for flow in test_flows:
flow.server_job.kill()
--
2.30.2
2 years, 5 months
[PATCH v2 0/3] ShortLivedConnections fixes
by pgagne@redhat.com
From: Perry Gagne <pgagne(a)redhat.com>
Found a few issues when enabling catalog support for Neper-style
results. Also added version information to NeperFlowMeasurementResults.
Perry Gagne (3):
NeperFlowMeasurement.py: Add request/response size to server params
NeperFlowMeasurement.py: Add neper version
Neper.py: Fixed handling of stderr.
.../Perf/Measurements/NeperFlowMeasurement.py | 14 ++++++++++++++
lnst/Tests/Neper.py | 5 +++--
2 files changed, 17 insertions(+), 2 deletions(-)
--
2.30.2
2 years, 6 months
[PATCH 0/2] ShortLivedConnections fixes
by pgagne@redhat.com
From: Perry Gagne <pgagne(a)redhat.com>
Found a few issues when enabling catalog support for Neper-style
results.
Perry Gagne (2):
NeperFlowMeasurement.py: Add request/response size to server params
Neper.py: Fixed handling of stderr.
.../Perf/Measurements/NeperFlowMeasurement.py | 14 ++++++++++++++
lnst/Tests/Neper.py | 5 +++--
2 files changed, 17 insertions(+), 2 deletions(-)
--
2.30.2
2 years, 6 months
[PATCH v2] RecipeCommon.BaseFlowMeasurement: add flow_results into
result data
by Jan Tluka
To be able to get additional information about the nature of the Flow
result, a reference to the FlowMeasurementResults object is added to the
result data.
v2:
- include whole FlowMeasurementResults object instead of just Flow
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
index 9cc91ab5..af51a2ce 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
@@ -302,7 +302,8 @@ class BaseFlowMeasurement(BaseMeasurement):
generator_flow_data=generator,
generator_cpu_data=generator_cpu,
receiver_flow_data=receiver,
- receiver_cpu_data=receiver_cpu))
+ receiver_cpu_data=receiver_cpu,
+ flow_results=flow_results))
def aggregate_results(self, old, new):
aggregated = []
--
2.26.2
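As a hedged illustration of how the new key could be consumed, a
recipe-side helper might look like the sketch below - the result object,
its .data dict and the helper name are assumptions made for the example,
only the "flow_results" key and its .flow attribute come from this
patch:

def describe_flow_result(result):
    # "result" is assumed to expose the data dict built above as result.data
    flow_results = result.data["flow_results"]
    flow = flow_results.flow
    return "{} flow, msg_size={}, duration={}s, parallel_streams={}".format(
        flow.type, flow.msg_size, flow.duration, flow.parallel_streams
    )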
2 years, 6 months
[PATCH] RecipeCommon.BaseFlowMeasurement: add flow into result data
by Jan Tluka
To be able to get additional information about the nature of the Flow
result, a reference to the Flow is added to the result data.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
index 9cc91ab5..c5e65044 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
@@ -302,7 +302,8 @@ class BaseFlowMeasurement(BaseMeasurement):
generator_flow_data=generator,
generator_cpu_data=generator_cpu,
receiver_flow_data=receiver,
- receiver_cpu_data=receiver_cpu))
+ receiver_cpu_data=receiver_cpu,
+ flow=flow_results.flow))
def aggregate_results(self, old, new):
aggregated = []
--
2.26.2
2 years, 6 months
[PATCH] Recipes.ENRT.PerfReversibleFlowMixin: adapt
_create_perf_flow to cpupin changes
by Jan Tluka
The IperfMeasurementGenerator added a cpupin argument to the
_create_perf_flow method, so PerfReversibleFlowMixin needs to add this
argument as well.
Fixes: e8ddb52d9538e70c39a2c0f1b30d7ae470c2b144
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
lnst/Recipes/ENRT/ConfigMixins/PerfReversibleFlowMixin.py | 3 +++
1 file changed, 3 insertions(+)
diff --git a/lnst/Recipes/ENRT/ConfigMixins/PerfReversibleFlowMixin.py b/lnst/Recipes/ENRT/ConfigMixins/PerfReversibleFlowMixin.py
index 6a157811..05302631 100644
--- a/lnst/Recipes/ENRT/ConfigMixins/PerfReversibleFlowMixin.py
+++ b/lnst/Recipes/ENRT/ConfigMixins/PerfReversibleFlowMixin.py
@@ -33,6 +33,7 @@ class PerfReversibleFlowMixin(object):
server_bind,
server_port,
msg_size,
+ cpupin,
) -> PerfFlow:
if self.params.perf_reverse:
return super()._create_perf_flow(
@@ -43,6 +44,7 @@ class PerfReversibleFlowMixin(object):
client_bind,
server_port,
msg_size,
+ cpupin,
)
else:
return super()._create_perf_flow(
@@ -53,4 +55,5 @@ class PerfReversibleFlowMixin(object):
server_bind,
server_port,
msg_size,
+ cpupin,
)
--
2.26.2
2 years, 6 months
[PATCH] RecipeCommon.BaseFlowMeasurement.Flow: add aggregated_flow
property
by Jan Tluka
This property can be used to distinguish the aggregated flow from the
individual flows that were used to create it.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
.../Perf/Measurements/BaseFlowMeasurement.py | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
index 41d5c777..9cc91ab5 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
@@ -19,7 +19,8 @@ class Flow(object):
receiver_nic=None,
receiver_port=None,
msg_size=None,
- cpupin=None):
+ cpupin=None,
+ aggregated_flow=False):
self._type = type
self._generator = generator
@@ -34,6 +35,7 @@ class Flow(object):
self._duration = duration
self._parallel_streams = parallel_streams
self._cpupin = cpupin
+ self._aggregated_flow=aggregated_flow
@property
def type(self):
@@ -83,6 +85,10 @@ class Flow(object):
def cpupin(self):
return self._cpupin
+ @property
+ def aggregated_flow(self):
+ return self._aggregated_flow
+
def __repr__(self):
string = """
Flow(
@@ -98,6 +104,7 @@ class Flow(object):
duration={duration},
parallel_streams={parallel_streams},
cpupin={cpupin},
+ aggregated_flow={aggregated_flow},
)""".format(
type=self.type,
generator=str(self.generator),
@@ -111,6 +118,7 @@ class Flow(object):
duration=self.duration,
parallel_streams=self.parallel_streams,
cpupin=self.cpupin,
+ aggregated_flow=self._aggregated_flow,
)
string = textwrap.dedent(string).strip()
return string
@@ -340,7 +348,8 @@ class BaseFlowMeasurement(BaseMeasurement):
msg_size=sample_flow.msg_size,
duration=sample_flow.duration,
parallel_streams=sample_flow.parallel_streams,
- cpupin=None
+ cpupin=None,
+ aggregated_flow=True,
)
aggregated_result = AggregatedFlowMeasurementResults(
--
2.26.2
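A small illustration of how the new flag could be used when
post-processing measurement results - the split_flows helper and its
input list are hypothetical, only the aggregated_flow property itself
comes from this patch:

def split_flows(flows):
    # separate the aggregated summary Flow from the individual Flows
    # that were combined to create it
    aggregated = [f for f in flows if f.aggregated_flow]
    individual = [f for f in flows if not f.aggregated_flow]
    return aggregated, individual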
2 years, 6 months