From: Ondrej Lichtner olichtne@redhat.com
Hi all,
The core of this patchset is a refactoring of the PerfTestAndEvaluate recipe template into the lnst.RecipeCommon.Perf package, which implements a template for generic performance measurement tests. The current MeasurementTools (Iperf and TRex) are moved to fit this new model, and CPU utilization measurements are added.

All of this is then incorporated into the BaseEnrtRecipe, which is currently the main user of the Perf recipe template.

There are also a couple of minor bug fixes and updates to the generic LNST API, based on the experience of using it while working on the main changes of this patchset.

!!!!!!!! Some of these feel more like proposals at this point, though, and I'm not sure if they're good ideas. This mostly relates to the prepare_job method of the Namespace class. It should definitely be considered and thought through before being fully accepted. !!!!!!!!

Additional note: this patchset breaks ENRT/OvS_DPDK_PvP.py due to the reorganization of the Perf recipe. I wanted to send the patchset ASAP so that it can get some reviews; I'll work on updating the OvS_DPDK_PvP recipe while those reviews are coming in. I won't merge this patchset without the additional fixes for OvS_DPDK_PvP...
Thanks,
-Ondrej
Ondrej Lichtner (16):
  lnst.Common.Utils: change std_deviation calculation
  lnst.Tests.Iperf: set target bitrate to 0
  lnst.Common.Parameters: add ListParam
  lnst.Tests.Iperf: fix parallel parameter
  lnst.Tests.Iperf: add runtime_estimate method
  lnst.Tests.Iperf: cleanup imports
  lnst.Controller.Job: change wait default timeout
  lnst.Controller.RecipeResults: add data_level attribute
  lnst.Controller.RunSummaryFormatter: fix header format
  lnst.Controller.Namespace: add prepare_job method for delayed start
  lnst.Controller.Job: expose the what attribute
  add lnst.Tests.CPUStatMonitor
  lnst.RecipeCommon.{Perf, PerfResult}: refactoring
  add lnst.RecipeCommon.Perf.Measurements package
  lnst.Controller.RecipeResults: rename desc to description
  lnst.Controller.RunSummaryFormatter: improve multiline result descriptions
 lnst/Common/Parameters.py | 18 ++
 lnst/Common/Utils.py | 8 +-
 lnst/Controller/Job.py | 21 +-
 lnst/Controller/Namespace.py | 5 +
 lnst/Controller/Recipe.py | 6 +-
 lnst/Controller/RecipeResults.py | 41 ++--
 lnst/Controller/RunSummaryFormatter.py | 10 +-
 lnst/RecipeCommon/IperfMeasurementTool.py | 83 -------
 lnst/RecipeCommon/Perf.py | 120 -----------
 .../Perf/Measurements/BaseCPUMeasurement.py | 109 ++++++++++
 .../Perf/Measurements/BaseFlowMeasurement.py | 202 ++++++++++++++++++
 .../Perf/Measurements/BaseMeasurement.py | 29 +++
 .../Perf/Measurements/IperfFlowMeasurement.py | 157 ++++++++++++++
 .../Perf/Measurements/MeasurementError.py | 4 +
 .../Perf/Measurements/StatCPUMeasurement.py | 88 ++++++++
 .../Measurements/TRexMeasurement.py} | 0
 .../Perf/Measurements/__init__.py | 3 +
 lnst/RecipeCommon/Perf/Recipe.py | 73 +++++++
 .../{PerfResult.py => Perf/Results.py} | 65 +++---
 lnst/RecipeCommon/Perf/__init__.py | 0
 lnst/Recipes/ENRT/BaseEnrtRecipe.py | 45 ++--
 lnst/Tests/CPUStatMonitor.py | 113 ++++++++++
 lnst/Tests/Iperf.py | 14 +-
 23 files changed, 923 insertions(+), 291 deletions(-)
 delete mode 100644 lnst/RecipeCommon/IperfMeasurementTool.py
 delete mode 100644 lnst/RecipeCommon/Perf.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/MeasurementError.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
 rename lnst/RecipeCommon/{TRexMeasurementTool.py => Perf/Measurements/TRexMeasurement.py} (100%)
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/__init__.py
 create mode 100644 lnst/RecipeCommon/Perf/Recipe.py
 rename lnst/RecipeCommon/{PerfResult.py => Perf/Results.py} (72%)
 create mode 100644 lnst/RecipeCommon/Perf/__init__.py
 create mode 100644 lnst/Tests/CPUStatMonitor.py
From: Ondrej Lichtner olichtne@redhat.com
The old algorithm works and has the advantage of a single pass through the value array. However, in the case of identical small values (less than 1) it can end up calculating the square root of a negative number instead of properly returning 0. Example: [0.031, 0.031, 0.031, 0.031, 0.031]

The new algorithm is less efficient (it uses two passes over the value array) but shouldn't have the same issue.
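To illustrate the failure mode, a minimal standalone sketch comparing the two formulas on the example values (the exact rounding of the single-pass radicand depends on the platform's floating point behaviour):

    import math

    values = [0.031, 0.031, 0.031, 0.031, 0.031]

    # old single-pass formula: the radicand is 0 in exact arithmetic, but
    # floating point rounding can push it slightly below zero, making
    # math.sqrt() raise a ValueError instead of returning 0
    s1 = sum(values)
    s2 = sum(v ** 2 for v in values)
    radicand = len(values) * s2 - s1 ** 2

    # new two-pass formula: the radicand is a sum of squares, never negative
    avg = sum(values) / float(len(values))
    print(math.sqrt(sum((v - avg) ** 2 for v in values) / len(values)))  # 0.0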
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Common/Utils.py | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/lnst/Common/Utils.py b/lnst/Common/Utils.py index 0a903be..f158ff2 100644 --- a/lnst/Common/Utils.py +++ b/lnst/Common/Utils.py @@ -271,12 +271,8 @@ def dict_to_dot(original_dict, prefix=""): def std_deviation(values): if len(values) <= 0: return 0.0 - s1 = 0.0 - s2 = 0.0 - for val in values: - s1 += val - s2 += val**2 - return (math.sqrt(len(values)*s2 - s1**2))/len(values) + avg = sum(values) / float(len(values)) + return math.sqrt(sum([(float(i) - avg)**2 for i in values])/len(values))
def deprecated(func): """
From: Ondrej Lichtner olichtne@redhat.com
This is important for UDP tests, where the default target bitrate is 1 Mbit/s.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Tests/Iperf.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py index 10bf974..3b5777d 100644 --- a/lnst/Tests/Iperf.py +++ b/lnst/Tests/Iperf.py @@ -136,7 +136,7 @@ class IperfClient(IperfBase): else: test = ""
- cmd = ("iperf3 -c {server} -J -t {duration}" + cmd = ("iperf3 -c {server} -b 0 -J -t {duration}" " {cpu} {test} {mss} {blksize} {parallel}" " {opts}".format( server=self.params.server, duration=self.params.duration,
From: Ondrej Lichtner olichtne@redhat.com
ListParam accepts list objects (not other iterables such as tuples or strings) and takes an optional type parameter. If this parameter is provided, it is used to type check each individual item in the list. This is useful when you want a parameter that is a list of integers or strings.
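As a usage sketch (the parameter name and item type are just examples), a parameter declared this way only accepts a list whose items pass the inner type check:

    from lnst.Common.Parameters import IntParam, ListParam

    # hypothetical parameter: a list of CPU ids, each item checked by IntParam
    cpu_ids = ListParam(type=IntParam())

    cpu_ids.type_check([0, 1, 2])    # ok, returns the list
    cpu_ids.type_check((0, 1, 2))    # raises ParamError: value is not a list
    cpu_ids.type_check([0, "x"])     # raises ParamError: item fails the IntParam check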
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Common/Parameters.py | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
diff --git a/lnst/Common/Parameters.py b/lnst/Common/Parameters.py index 00c7832..b139d6c 100644 --- a/lnst/Common/Parameters.py +++ b/lnst/Common/Parameters.py @@ -120,6 +120,24 @@ class DictParam(Param): else: return value
+class ListParam(Param): + def __init__(self, type=None, **kwargs): + self._type = type + super(ListParam, self).__init__(**kwargs) + + def type_check(self, value): + if not isinstance(value, list): + raise ParamError("Value must be a List. Not {}".format(type(value))) + + if self._type is not None: + for item in value: + try: + self._type.type_check(item) + except ParamError as e: + raise ParamError("Value {} failed type check:\n{}" + .format(item, str(e))) + return value + class Parameters(object): def __init__(self): self._attrs = {}
From: Ondrej Lichtner olichtne@redhat.com
The parallel option string should have an empty default value if the parameter wasn't set to anything.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Tests/Iperf.py | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py index 3b5777d..d673164 100644 --- a/lnst/Tests/Iperf.py +++ b/lnst/Tests/Iperf.py @@ -128,6 +128,8 @@ class IperfClient(IperfBase):
if "parallel" in self.params: parallel = "-P {:d}".format(self.params.parallel) + else: + parallel = ""
if self.params.udp: test = "--udp"
From: Ondrej Lichtner olichtne@redhat.com
Add a runtime_estimate method that returns the estimated time required for the run method to complete. Currently the estimate is just the test duration + 5 seconds, which is a "safe" overhead estimate for everything to start correctly.
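A small sketch of the intended use (the server address is a placeholder); the estimate can later be used to size a Job.wait timeout:

    from lnst.Tests.Iperf import IperfClient

    client = IperfClient(server="192.168.101.2", duration=60)
    client.runtime_estimate()    # 65 == duration + 5 seconds of startup overhead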
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Tests/Iperf.py | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py index d673164..89f05a8 100644 --- a/lnst/Tests/Iperf.py +++ b/lnst/Tests/Iperf.py @@ -105,6 +105,10 @@ class IperfClient(IperfBase): if self.params.udp and self.params.sctp: raise TestModuleError("Parameters udp and sctp are mutually exclusive!")
+ def runtime_estimate(self): + _duration_overhead = 5 + return (self.params.duration + _duration_overhead) + def _compose_cmd(self): port = ""
From: Ondrej Lichtner olichtne@redhat.com
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Tests/Iperf.py | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py index 89f05a8..970d994 100644 --- a/lnst/Tests/Iperf.py +++ b/lnst/Tests/Iperf.py @@ -1,11 +1,7 @@ import logging -import errno -import re -import signal -import time import subprocess import json -from lnst.Common.Parameters import IntParam, IpParam, StrParam, Param, BoolParam +from lnst.Common.Parameters import IntParam, IpParam, StrParam, BoolParam from lnst.Common.Parameters import HostnameOrIpParam from lnst.Common.Utils import is_installed from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError
From: Ondrej Lichtner olichtne@redhat.com
Instead of waiting forever by default, we should wait for the DEFAULT_TIMEOUT amount and offer the option to wait forever when explicitly requested. If something is broken, freezing forever due to an unlimited wait is usually not what the test developer intended or expected.
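A short sketch of the changed semantics (the host and module objects are placeholders):

    job = host.run(long_running_module, bg=True)

    job.wait()            # waits at most DEFAULT_TIMEOUT, returns False on timeout
    job.wait(timeout=0)   # explicit opt-in to the old "wait forever" behaviour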
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/Job.py | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py index f1feae6..89b8451 100644 --- a/lnst/Controller/Job.py +++ b/lnst/Controller/Job.py @@ -14,6 +14,7 @@ olichtne@redhat.com (Ondrej Lichtner) import logging import signal from lnst.Common.JobError import JobError +from lnst.Common.NetTestCommand import DEFAULT_TIMEOUT from lnst.Tests.BaseTestModule import BaseTestModule from lnst.Controller.RecipeResults import ResultLevel
@@ -145,13 +146,14 @@ class Job(object): else: return False
- def wait(self, timeout=0): + def wait(self, timeout=DEFAULT_TIMEOUT): """waits for the Job to finish for the specified amount of time
Args: timeout -- integer value indicating how long to wait for. - Default is 0, means wait forever. Don't use for infinitelly - running Jobs. + Default is DEFAULT_TIMEOUT. + Use zero to wait forever. Don't use for infinitelly running + jobs... If non-zero LNST uses a timed SIGALARM signal to return from this method. Returns:
From: Ondrej Lichtner olichtne@redhat.com
The level attribute specifies the importance of the Result object and is used for filtering or formatting purposes when processing the recipe results.

The data_level attribute extends this by specifying the importance of the data provided with the result. It is used by the RunSummaryFormatter to filter out the data attached to a result.

The default for the base class is ResultLevel.DEBUG, the same as the level attribute.

For the JobResult class it is always level+1, for ease of use when formatting results:
* choose filter level -> show results
* choose filter level + 1 -> show results and their data

For the Result class used by the tester, the default is ResultLevel.IMPORTANT+1 (so level+1, the same as JobResult), but the user can change this when calling Recipe.add_result.
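A sketch of how this is meant to be used from a recipe (the description and data argument are placeholders; ResultLevel comes from lnst.Controller.RecipeResults):

    # the result line is visible at the IMPORTANT filter level, the attached
    # data is only printed when the formatter's filter level is one higher
    self.add_result(True, "throughput measured", data=raw_measurement_data,
                    level=ResultLevel.IMPORTANT,
                    data_level=ResultLevel.IMPORTANT + 1)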
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/Recipe.py | 6 ++++--
 lnst/Controller/RecipeResults.py | 21 +++++++++++++++++++--
 lnst/Controller/RunSummaryFormatter.py | 3 ++-
 3 files changed, 25 insertions(+), 5 deletions(-)
diff --git a/lnst/Controller/Recipe.py b/lnst/Controller/Recipe.py index 080dd46..5a0a347 100644 --- a/lnst/Controller/Recipe.py +++ b/lnst/Controller/Recipe.py @@ -139,8 +139,10 @@ class BaseRecipe(object): else: return None
- def add_result(self, success, description="", data=None): - self.current_run.add_result(Result(success, description, data)) + def add_result(self, success, description="", data=None, + level=None, data_level=None): + self.current_run.add_result(Result(success, description, data, + level, data_level))
class RecipeRun(object): def __init__(self, match, desc=None): diff --git a/lnst/Controller/RecipeResults.py b/lnst/Controller/RecipeResults.py index 6b42a83..05ce5fb 100644 --- a/lnst/Controller/RecipeResults.py +++ b/lnst/Controller/RecipeResults.py @@ -49,6 +49,10 @@ class BaseResult(object): def level(self): return ResultLevel.DEBUG
+ @property + def data_level(self): + return ResultLevel.DEBUG + class JobResult(BaseResult): """Base class for storing result data of Jobs
@@ -66,6 +70,10 @@ class JobResult(BaseResult): def level(self): return self.job.level
+ @BaseResult.data_level.getter + def data_level(self): + return self.job.level+1 + class JobStartResult(JobResult): """Generated automatically when a Job is succesfully started on a slave""" @BaseResult.short_desc.getter @@ -98,12 +106,17 @@ class Result(BaseResult): Will be created when the tester calls the Recipe interface for adding results.""" def __init__(self, success, short_desc="", data=None, - level=ResultLevel.IMPORTANT): + level=None, data_level=None): super(Result, self).__init__(success)
self._short_desc = short_desc self._data = data - self._level = level + self._level = (level + if isinstance(level, ResultLevel) + else ResultLevel.IMPORTANT) + self._data_level = (data_level + if isinstance(data_level, ResultLevel) + else ResultLevel.IMPORTANT+1)
@BaseResult.short_desc.getter def short_desc(self): @@ -116,3 +129,7 @@ class Result(BaseResult): @BaseResult.level.getter def level(self): return self._level + + @BaseResult.data_level.getter + def data_level(self): + return self._data_level diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py index 888b897..a5d505e 100644 --- a/lnst/Controller/RunSummaryFormatter.py +++ b/lnst/Controller/RunSummaryFormatter.py @@ -107,7 +107,8 @@ class RunSummaryFormatter(object): src = self._format_source(res), desc = res.short_desc))
- output_lines.extend(self._format_data(res.data)) + if res.data_level <= self._level: + output_lines.extend(self._format_data(res.data))
output_lines.append("Overall result of this Run: {}". format(self._format_success(overall_result)))
From: Ondrej Lichtner olichtne@redhat.com
Remove the tab between the result success and the result source to achieve nicer spacing.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/RunSummaryFormatter.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py index a5d505e..a90efe4 100644 --- a/lnst/Controller/RunSummaryFormatter.py +++ b/lnst/Controller/RunSummaryFormatter.py @@ -102,7 +102,7 @@ class RunSummaryFormatter(object): except IndexError: pass
- output_lines.append("{res}\t{src}\t{desc}".format( + output_lines.append("{res} {src}\t{desc}".format( res = self._format_success(res.success), src = self._format_source(res), desc = res.short_desc))
From: Ondrej Lichtner olichtne@redhat.com
The prepare_job method creates and returns an lnst.Controller.Job object just like the Namespace.run method, but it doesn't send the command to start it to the Slave. Instead, the tester can call the Job.start method to send the start command later.

This could be used to achieve better grouping of time-related job starts. Currently it is only useful if you intend to do resource-intensive work between Namespace.run calls, but I can imagine extending this functionality to provide a more intelligent synchronized start of multiple jobs.

Consider this just an idea; it might be removed later.
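A sketch of the intended usage (the hosts, addresses and test module parameters are placeholders):

    # create the Job objects up front, without starting anything on the slaves
    server_job = host1.prepare_job(IperfServer(bind=server_ip, oneoff=True))
    client_job = host2.prepare_job(IperfClient(server=server_ip, duration=60))

    # ... potentially expensive controller-side work can happen here ...

    # only now are the start commands sent, close together in time
    server_job.start(bg=True)
    client_job.start(bg=True)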
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/Job.py | 9 +++++++++
 lnst/Controller/Namespace.py | 5 +++++
 2 files changed, 14 insertions(+)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py index 89b8451..0e17934 100644 --- a/lnst/Controller/Job.py +++ b/lnst/Controller/Job.py @@ -146,6 +146,15 @@ class Job(object): else: return False
+ def start(self, bg=False, timeout=DEFAULT_TIMEOUT): + self._netns._machine.run_job(self) + + if not bg: + if not self.wait(timeout): + logging.debug("Killing timed-out job") + self.kill() + return self + def wait(self, timeout=DEFAULT_TIMEOUT): """waits for the Job to finish for the specified amount of time
diff --git a/lnst/Controller/Namespace.py b/lnst/Controller/Namespace.py index af5bda1..b8bb2f7 100644 --- a/lnst/Controller/Namespace.py +++ b/lnst/Controller/Namespace.py @@ -80,6 +80,11 @@ class Namespace(object): returns a string name for any other namespace""" return self._name
+ def prepare_job(self, what, fail=False, json=False, desc=None, + job_level=ResultLevel.DEBUG): + return Job(self, what, expect=not fail, json=json, desc=desc, + level=job_level) + def run(self, what, bg=False, fail=False, timeout=DEFAULT_TIMEOUT, json=False, desc=None, job_level=ResultLevel.DEBUG): """
From: Ondrej Lichtner olichtne@redhat.com
This is mostly useful for accessing the test module object instance when the job hasn't been started yet, e.g. for changing parameters before starting a prepared job, or for figuring out the estimated runtime before calling job.wait.
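For example, building on the prepare_job patch (the host and parameters are placeholders):

    client_job = host.prepare_job(IperfClient(server=server_ip, duration=60))
    client_job.start(bg=True)

    # the test module instance stays reachable through job.what
    client_job.wait(timeout=client_job.what.runtime_estimate())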
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/Job.py | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py index 0e17934..d3786be 100644 --- a/lnst/Controller/Job.py +++ b/lnst/Controller/Job.py @@ -65,6 +65,10 @@ class Job(object): raise Exception("Id already set") self._id = val
+ @property + def what(self): + return self._what + @property def host(self): """the initial namespace of the host the job is running on"""
From: Ondrej Lichtner olichtne@redhat.com
This test module can be used to periodically sample the /proc/stat file for statistics and report back a list of differences between the individual samples as well as the raw data.
It can be used to calculate per-CPU and system-wide CPU utilization.
Currently the test module samples until interrupted, so it should be run in the background and stopped with a job.kill(signal.SIGINT) call.
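A usage sketch based on the above (the measured workload in the middle is a placeholder):

    import signal

    monitor_job = host.run(CPUStatMonitor(interval=1000), bg=True)

    # ... run the measured workload here ...

    monitor_job.kill(signal.SIGINT)
    monitor_job.wait()

    samples = monitor_job.result["data"]      # per-interval diffs of /proc/stat counters
    raw = monitor_job.result["raw_data"]      # raw /proc/stat snapshots with timestamps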
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Tests/CPUStatMonitor.py | 113 +++++++++++++++++++++++++++++++++++
 1 file changed, 113 insertions(+)
 create mode 100644 lnst/Tests/CPUStatMonitor.py
diff --git a/lnst/Tests/CPUStatMonitor.py b/lnst/Tests/CPUStatMonitor.py new file mode 100644 index 0000000..9b4a104 --- /dev/null +++ b/lnst/Tests/CPUStatMonitor.py @@ -0,0 +1,113 @@ +import re +import time +import signal +from time import sleep +from lnst.Common.Parameters import IntParam +from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError, InterruptException + +def sigint_handler(signum, frame): + raise InterruptException() + +class CPUStatMonitor(BaseTestModule): + interval = IntParam(default=1000) + + def run(self): + self._res_data = {} + + raw_samples = [] + try: + old_handler = signal.signal(signal.SIGINT, sigint_handler) + with open("/proc/stat") as stat: + while True: + stat.seek(0) + timestamp = time.time() + stat_lines = "".join(stat.readlines()) + raw_samples.append({ + "timestamp": timestamp, + "stat": stat_lines + }) + sleep(self.params.interval / float(1000)) + except InterruptException: + pass + finally: + signal.signal(signal.SIGINT, old_handler) + + self._res_data["raw_data"] = raw_samples + self._res_data["data"] = self._process_samples(raw_samples) + + return True + + def _process_samples(self, samples): + result = [] + prev_sample = None + for sample in samples: + if prev_sample is not None: + parsed_prev = self._parse_stat_lines(prev_sample["stat"]) + parsed_cur = self._parse_stat_lines(sample["stat"]) + + interval = self._subtract_nested_dicts(parsed_cur, parsed_prev) + interval["duration"] = (sample["timestamp"] - + prev_sample["timestamp"]) + + result.append(interval) + + prev_sample = sample + return result + + def _subtract_nested_dicts(self, first, second): + result = {} + for key, val in first.items(): + if isinstance(val, dict): + result[key] = self._subtract_nested_dicts(val, second[key]) + else: + result[key] = val - second[key] + return result + + def _parse_stat_lines(self, stat): + result = {} + for line in stat.split("\n"): + cpu_data = self._parse_cpu_stats(line) + if cpu_data: + result[cpu_data[0]] = cpu_data[1] + continue + + intr_data = self._parse_intr_stats(line) + if intr_data: + result[intr_data[0]] = intr_data[1] + continue + + m = re.match(r"^(.*?) (\d+)$", line) + if m: + result[m.group(1)] = int(m.group(2)) + return result + + def _parse_cpu_stats(self, stat_line): + result = {} + m = re.match(r"^(cpu\d*)\s+(\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+)$", + stat_line) + if m: + cpu = m.group(1) + result["user"] = int(m.group(2)) + result["nice"] = int(m.group(3)) + result["system"] = int(m.group(4)) + result["idle"] = int(m.group(5)) + result["iowait"] = int(m.group(6)) + result["irq"] = int(m.group(7)) + result["softirq"] = int(m.group(8)) + result["steal"] = int(m.group(9)) + result["guest"] = int(m.group(10)) + result["guest_nice"] = int(m.group(11)) + return cpu, result + else: + return None + + def _parse_intr_stats(self, stat_line): + result = {} + m = re.match(r"^(intr|softirq) (\d+) (.*)$", stat_line) + if m: + result["total"] = int(m.group(2)) + for i, irq in enumerate(m.group(3).split(" ")): + result[i] = int(irq) + return m.group(1), result + else: + return None
From: Ondrej Lichtner olichtne@redhat.com
Refactoring the Perf and PerfResult modules into a separate package lnst.RecipeCommon.Perf that will host everything related to the Perf recipe template.
I'm also considering later moving this into the lnst.Recipes package where it might make more sense as an actual recipe, with an example test method that will show off the basic usage of the template.
Changes summary:
* moved lnst/RecipeCommon/Perf.py to lnst/RecipeCommon/Perf/Recipe.py
* renamed the PerfTestAndEvaluate class to just Recipe, since the "Perf" part is obvious from the namespace
* renamed the PerfConf class to RecipeConf
* RecipeConf now only contains configuration for the Recipe - the list of measurements to run and the number of iterations to repeat them for
* removed PerfMeasurementTool; this will be replaced by the Measurements class hierarchy added in the following commit
* added a RecipeResults class to store aggregated measurement results associated with the current Recipe configuration
* moved lnst/RecipeCommon/PerfResult.py to lnst/RecipeCommon/Perf/Results.py
* removed StreamPerf, MultiStreamPerf and MultiRunPerf and replaced them with SequentialPerfResult and ParallelPerfResult to improve code reuse
* added the PerfResult base class
* set the PerfInterval string formatting precision to 2 decimal places
* improved code reuse for item validation in the PerfList class
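The resulting workflow in a recipe then looks roughly like this (the measurement objects are placeholders; concrete Measurement classes are added in the following commit):

    conf = RecipeConf(
        measurements=[cpu_measurement, flow_measurement],
        iterations=5)

    results = self.perf_test(conf)           # start/finish/collect each measurement per iteration
    self.perf_report_and_evaluate(results)   # report first, then evaluate the aggregated results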
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/RecipeCommon/Perf.py | 120 ------------------
 lnst/RecipeCommon/Perf/Recipe.py | 73 +++++++++++
 lnst/RecipeCommon/{PerfResult.py => Perf/Results.py} | 65 ++++------
 lnst/RecipeCommon/Perf/__init__.py | 0
 lnst/Recipes/ENRT/BaseEnrtRecipe.py | 20 +--
 5 files changed, 106 insertions(+), 172 deletions(-)
 delete mode 100644 lnst/RecipeCommon/Perf.py
 create mode 100644 lnst/RecipeCommon/Perf/Recipe.py
 rename lnst/RecipeCommon/{PerfResult.py => Perf/Results.py} (72%)
 create mode 100644 lnst/RecipeCommon/Perf/__init__.py
diff --git a/lnst/RecipeCommon/Perf.py b/lnst/RecipeCommon/Perf.py deleted file mode 100644 index 97aa0f1..0000000 --- a/lnst/RecipeCommon/Perf.py +++ /dev/null @@ -1,120 +0,0 @@ -from lnst.Controller.Recipe import BaseRecipe -from lnst.RecipeCommon.PerfResult import MultiRunPerf - -class PerfConf(object): - def __init__(self, - perf_tool, - test_type, - generator, generator_bind, - receiver, receiver_bind, - msg_size, duration, iterations, streams): - self._perf_tool = perf_tool - self._test_type = test_type - - self._generator = generator - self._generator_bind = generator_bind - self._receiver = receiver - self._receiver_bind = receiver_bind - - self._msg_size = msg_size - self._duration = duration - self._iterations = iterations - self._streams = streams - - @property - def perf_tool(self): - return self._perf_tool - - @property - def generator(self): - return self._generator - - @property - def generator_bind(self): - return self._generator_bind - - @property - def receiver(self): - return self._receiver - - @property - def receiver_bind(self): - return self._receiver_bind - - @property - def test_type(self): - return self._test_type - - @property - def msg_size(self): - return self._msg_size - - @property - def duration(self): - return self._duration - - @property - def iterations(self): - return self._iterations - - @property - def streams(self): - return self._streams - -class PerfMeasurementTool(object): - @staticmethod - def perf_measure(perf_conf): - raise NotImplementedError - -class PerfTestAndEvaluate(BaseRecipe): - def perf_test(self, perf_conf): - generator_measurements = MultiRunPerf() - receiver_measurements = MultiRunPerf() - for i in range(perf_conf.iterations): - tx, rx = perf_conf.perf_tool.perf_measure(perf_conf) - - if tx: - generator_measurements.append(tx) - if rx: - receiver_measurements.append(rx) - - return generator_measurements, receiver_measurements - - def perf_evaluate_and_report(self, perf_conf, results, baseline): - self.perf_evaluate(perf_conf, results, baseline) - - self.perf_report(perf_conf, results, baseline) - - def perf_evaluate(self, perf_conf, results, baseline): - generator, receiver = results - - if generator.average > 0: - self.add_result(True, "Generator reported non-zero throughput") - else: - self.add_result(False, "Generator reported zero throughput") - - if receiver.average > 0: - self.add_result(True, "Receiver reported non-zero throughput") - else: - self.add_result(False, "Receiver reported zero throughput") - - - def perf_report(self, perf_conf, results, baseline): - generator, receiver = results - - self.add_result( - True, - "Generator measured throughput: {tput} +-{deviation}({percentage:.2}%) {unit} per second" - .format(tput=generator.average, - deviation=generator.std_deviation, - percentage=(generator.std_deviation/generator.average) * 100, - unit=generator.unit), - data = generator) - self.add_result( - True, - "Receiver measured throughput: {tput} +-{deviation}({percentage:.2}%) {unit} per second" - .format(tput=receiver.average, - deviation=receiver.std_deviation, - percentage=(receiver.std_deviation/receiver.average) * 100, - unit=receiver.unit), - data = receiver) diff --git a/lnst/RecipeCommon/Perf/Recipe.py b/lnst/RecipeCommon/Perf/Recipe.py new file mode 100644 index 0000000..e305310 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Recipe.py @@ -0,0 +1,73 @@ +from lnst.Controller.Recipe import BaseRecipe +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import 
ParallelPerfResult + +class RecipeConf(object): + def __init__(self, measurements, iterations): + self._measurements = measurements + self._iterations = iterations + + @property + def measurements(self): + return self._measurements + + @property + def iterations(self): + return self._iterations + +class RecipeResults(object): + def __init__(self, perf_conf): + self._perf_conf = perf_conf + self._results = {} + + @property + def perf_conf(self): + return self._perf_conf + + @property + def results(self): + return self._results + + def add_measurement_results(self, measurement, new_results): + aggregated_results = self._results.get(measurement, None) + aggregated_results = measurement.aggregate_results( + aggregated_results, new_results) + self._results[measurement] = aggregated_results + +class Recipe(BaseRecipe): + def perf_test(self, recipe_conf): + results = RecipeResults(recipe_conf) + + for i in range(recipe_conf.iterations): + run_results = [] + for measurement in recipe_conf.measurements: + measurement.start() + for measurement in reversed(recipe_conf.measurements): + measurement.finish() + for measurement in recipe_conf.measurements: + measurement_results = measurement.collect_results() + results.add_measurement_results( + measurement, measurement_results) + + return results + + def perf_report_and_evaluate(self, results): + self.perf_report(results) + + self.perf_evaluate(results) + + def perf_report(self, recipe_results): + if not recipe_results: + self.add_result(False, "No results available to report.") + return + + for measurement, results in recipe_results.results.items(): + measurement.report_results(self, results) + + def perf_evaluate(self, recipe_results): + if not recipe_results: + self.add_result(False, "No results available to evaluate.") + return + + for measurement, results in recipe_results.results.items(): + measurement.evaluate_results(self, results) diff --git a/lnst/RecipeCommon/PerfResult.py b/lnst/RecipeCommon/Perf/Results.py similarity index 72% rename from lnst/RecipeCommon/PerfResult.py rename to lnst/RecipeCommon/Perf/Results.py index f48fd0a..4591447 100644 --- a/lnst/RecipeCommon/PerfResult.py +++ b/lnst/RecipeCommon/Perf/Results.py @@ -10,7 +10,20 @@ class PerfStatMixin(object): def std_deviation(self): return std_deviation([i.average for i in self])
-class PerfInterval(PerfStatMixin): +class PerfResult(PerfStatMixin): + @property + def value(self): + raise NotImplementedError() + + @property + def duration(self): + raise NotImplementedError() + + @property + def unit(self): + raise NotImplementedError() + +class PerfInterval(PerfResult): def __init__(self, value, duration, unit): self._value = value self._duration = duration @@ -33,20 +46,13 @@ class PerfInterval(PerfStatMixin): return 0
def __str__(self): - return "{} {} in {} seconds".format( - self.value, self.unit, self.duration) + return "{:.2f} {} in {:.2f} seconds".format( + float(self.value), self.unit, float(self.duration))
class PerfList(list): - _sub_type = None - def __init__(self, iterable=[]): - unit = None - for i, item in enumerate(iterable): - if not isinstance(item, self._sub_type): - raise LnstError("{} only accepts {} objects." - .format(self.__class__.__name__, - self._sub_type.__name__)) + self._validate_item_type(item)
if i == 0: unit = item.unit @@ -57,14 +63,17 @@ class PerfList(list): super(PerfList, self).__init__(iterable)
def _validate_item(self, item): - if not isinstance(item, self._sub_type): - raise LnstError("{} only accepts {} objects." - .format(self.__class__.__name__, - self._sub_type.__name__)) + self._validate_item_type(item)
if len(self) > 0 and item.unit != self[0].unit: raise LnstError("PerfList items must have the same unit.")
+ def _validate_item_type(self, item): + if (not isinstance(item, PerfInterval) and + not isinstance(item, PerfList)): + raise LnstError("{} only accepts PerfInterval or PerfList objects." + .format(self.__class__.__name__)) + def append(self, item): self._validate_item(item)
@@ -104,9 +113,7 @@ class PerfList(list):
super(PerfList, self).__setslice__(i, j, iterable)
-class StreamPerf(PerfList, PerfStatMixin): - _sub_type = PerfInterval - +class SequentialPerfResult(PerfResult, PerfList): @property def value(self): return sum([i.value for i in self]) @@ -122,9 +129,7 @@ class StreamPerf(PerfList, PerfStatMixin): else: return None
-class MultiStreamPerf(PerfList, PerfStatMixin): - _sub_type = StreamPerf - +class ParallelPerfResult(PerfResult, PerfList): @property def value(self): return sum([i.value for i in self]) @@ -139,21 +144,3 @@ class MultiStreamPerf(PerfList, PerfStatMixin): return self[0].unit else: return None - -class MultiRunPerf(PerfList, PerfStatMixin): - _sub_type = MultiStreamPerf - - @property - def value(self): - return sum([i.value for i in self]) - - @property - def duration(self): - return sum([i.duration for i in self]) - - @property - def unit(self): - if len(self) > 0: - return self[0].unit - else: - return None diff --git a/lnst/RecipeCommon/Perf/__init__.py b/lnst/RecipeCommon/Perf/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py index 9e2b674..a26d999 100644 --- a/lnst/Recipes/ENRT/BaseEnrtRecipe.py +++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py @@ -6,7 +6,8 @@ from lnst.Common.IpAddress import AF_INET, AF_INET6 from lnst.Controller.Recipe import BaseRecipe
from lnst.RecipeCommon.Ping import PingTestAndEvaluate, PingConf -from lnst.RecipeCommon.Perf import PerfTestAndEvaluate, PerfConf +from lnst.RecipeCommon.Perf.Recipe import Recipe as PerfRecipe +from lnst.RecipeCommon.Perf.Recipe import RecipeConf as PerfRecipeConf from lnst.RecipeCommon.IperfMeasurementTool import IperfMeasurementTool
class EnrtConfiguration(object): @@ -61,7 +62,7 @@ class EnrtSubConfiguration(object): def offload_settings(self, value): self._offload_settings = value
-class BaseEnrtRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): +class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe): ip_versions = Param(default=("ipv4", "ipv6")) perf_tests = Param(default=("tcp_stream", "udp_stream", "sctp_stream"))
@@ -101,7 +102,7 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): for perf_config in self.generate_perf_configurations(main_config, sub_config): result = self.perf_test(perf_config) - self.perf_evaluate_and_report(perf_config, result, baseline=None) + self.perf_report_and_evaluate(result)
self.remove_sub_configuration(main_config, sub_config)
@@ -187,16 +188,9 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): server_bind = server_nic.ips_filter(family=family)[0]
for perf_test in self.params.perf_tests: - yield PerfConf(perf_tool = self.params.perf_tool, - test_type = perf_test, - generator = client_netns, - generator_bind = client_bind, - receiver = server_netns, - receiver_bind = server_bind, - msg_size = self.params.perf_msg_size, - duration = self.params.perf_duration, - iterations = self.params.perf_iterations, - streams = self.params.perf_streams) + yield PerfRecipeConf( + measurements=[ ], + iterations=self.params.perf_iterations)
def _pin_dev_interrupts(self, dev, cpu): netns = dev.netns
From: Ondrej Lichtner olichtne@redhat.com
This is the second part of the refactoring of the PerfTestAndEvaluate recipe workflow. In generic terms, it introduces a new package that stores a class hierarchy for various Measurement types and implementations.

At the base level there is the BaseMeasurement class and module, which defines the interface that all the other classes have to implement. This interface is understood and relied upon by the lnst.RecipeCommon.Perf.Recipe class that uses it.
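A rough outline of that interface (method bodies omitted; this mirrors the BaseMeasurement module added below, with aggregation, reporting and evaluation done as classmethods over the aggregated results):

    class BaseMeasurement(object):
        def __init__(self, conf):
            self._conf = conf

        def start(self):                 # start the measurement jobs, non-blocking
            raise NotImplementedError()

        def finish(self):                # wait for / stop the measurement jobs
            raise NotImplementedError()

        def collect_results(self):       # return a list of result objects
            raise NotImplementedError()

        @classmethod
        def aggregate_results(cls, old, new):        # merge results across iterations
            raise NotImplementedError()

        @classmethod
        def report_results(cls, recipe, results):    # add recipe results for reporting
            raise NotImplementedError()

        @classmethod
        def evaluate_results(cls, recipe, results):  # pass/fail evaluation
            raise NotImplementedError()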
The refactoring includes a move and rename of IperfMeasurementTool and TRexMeasurementTool into the new IperfFlowMeasurement and TRexMeasurement classes/modules, and the addition of the new StatCPUMeasurement class that uses the CPUStatMonitor test module to measure CPU utilization.

Finally, these changes are integrated into the BaseEnrtRecipe so that everything keeps working.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/RecipeCommon/IperfMeasurementTool.py | 83 -------
 .../Perf/Measurements/BaseCPUMeasurement.py | 109 ++++++++++
 .../Perf/Measurements/BaseFlowMeasurement.py | 202 ++++++++++++++++++
 .../Perf/Measurements/BaseMeasurement.py | 29 +++
 .../Perf/Measurements/IperfFlowMeasurement.py | 157 ++++++++++++++
 .../Perf/Measurements/MeasurementError.py | 4 +
 .../Perf/Measurements/StatCPUMeasurement.py | 88 ++++++++
 .../Measurements/TRexMeasurement.py} | 0
 .../Perf/Measurements/__init__.py | 3 +
 lnst/Recipes/ENRT/BaseEnrtRecipe.py | 27 ++-
 10 files changed, 614 insertions(+), 88 deletions(-)
 delete mode 100644 lnst/RecipeCommon/IperfMeasurementTool.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/MeasurementError.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
 rename lnst/RecipeCommon/{TRexMeasurementTool.py => Perf/Measurements/TRexMeasurement.py} (100%)
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/__init__.py
diff --git a/lnst/RecipeCommon/IperfMeasurementTool.py b/lnst/RecipeCommon/IperfMeasurementTool.py deleted file mode 100644 index 9f2e49e..0000000 --- a/lnst/RecipeCommon/IperfMeasurementTool.py +++ /dev/null @@ -1,83 +0,0 @@ -import time -import signal -from lnst.Common.IpAddress import ipaddress -from lnst.Controller.Recipe import RecipeError -from lnst.Controller.RecipeResults import ResultLevel -from lnst.RecipeCommon.Perf import PerfConf, PerfMeasurementTool -from lnst.RecipeCommon.PerfResult import PerfInterval, StreamPerf -from lnst.RecipeCommon.PerfResult import MultiStreamPerf -from lnst.Tests.Iperf import IperfClient, IperfServer - -class IperfMeasurementTool(PerfMeasurementTool): - @staticmethod - def perf_measure(perf_conf): - _iperf_duration_overhead = 5 - - server_params = dict(bind = ipaddress(perf_conf.receiver_bind), - oneoff = True) - - client_params = dict(server = server_params["bind"], - duration = perf_conf.duration, - parallel = perf_conf.streams) - - if perf_conf.test_type == "tcp_stream": - #tcp stream is the default for iperf3 - pass - elif perf_conf.test_type == "udp_stream": - client_params["udp"] = True - elif perf_conf.test_type == "sctp_stream": - client_params["sctp"] = True - else: - raise RecipeError("Unsupported test type '{}'" - .format(perf_conf.test_type)) - - server = IperfServer(**server_params) - client = IperfClient(**client_params) - - server_host = perf_conf.receiver - client_host = perf_conf.generator - result = None - try: - server_job = server_host.run(server, bg=True, - job_level=ResultLevel.NORMAL) - - #wait for server to start, TODO can this be improved? - time.sleep(2) - - duration = client.params.duration + _iperf_duration_overhead - client_job = client_host.run(client, timeout=duration, - job_level=ResultLevel.NORMAL) - - server_job.wait(timeout=5) - finally: - if client_job and not client_job.finished: - client_job.kill() - - if server_job and not server_job.finished: - server_job.kill() - - #TODO return something if not passed - if client_job.passed: - client_result = MultiStreamPerf() - for i in client_job.result["data"]["end"]["streams"]: - client_result.append(StreamPerf()) - - for interval in client_job.result["data"]["intervals"]: - for i, stream in enumerate(interval["streams"]): - client_result[i].append(PerfInterval(stream["bytes"] * 8, - stream["seconds"], - "bits")) - - #TODO return something if not passed - if server_job.passed: - server_result = MultiStreamPerf() - for i in server_job.result["data"]["end"]["streams"]: - server_result.append(StreamPerf()) - - for interval in server_job.result["data"]["intervals"]: - for i, stream in enumerate(interval["streams"]): - server_result[i].append(PerfInterval(stream["bytes"] * 8, - stream["seconds"], - "bits")) - - return client_result, server_result diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py new file mode 100644 index 0000000..2507f3c --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py @@ -0,0 +1,109 @@ +import signal +from lnst.RecipeCommon.Perf.Measurements.MeasurementError import MeasurementError +from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult + +class CPUMeasurementResults(object): + def __init__(self, host, cpu): + self._host = host + self._cpu = cpu + + @property + def host(self): + return self._host + + @property + def cpu(self): + return self._cpu + + @property + 
def utilization(self): + raise NotImplementedError() + +class AggregatedCPUMeasurementResults(CPUMeasurementResults): + def __init__(self, host, cpu): + super(AggregatedCPUMeasurementResults, self).__init__(host, cpu) + self._individual_results = [] + + @property + def individual_results(self): + return self._individual_results + + @property + def utilization(self): + return SequentialPerfResult([i.utilization + for i in self.individual_results]) + + def add_results(self, results): + if results is None: + return + elif isinstance(results, AggregatedCPUMeasurementResults): + self.individual_results.extend(results.individual_results) + elif isinstance(results, CPUMeasurementResults): + self.individual_results.append(results) + else: + raise MeasurementError("Adding incorrect results.") + +class BaseCPUMeasurement(BaseMeasurement): + @classmethod + def aggregate_results(cls, old, new): + aggregated = [] + if old is None: + old = [None] * len(new) + for old_measurements, new_measurements in zip(old, new): + aggregated.append(cls._aggregate_hostcpu_results( + old_measurements, new_measurements)) + return aggregated + + @classmethod + def report_results(cls, recipe, results): + results_by_host = cls._divide_results_by_host(results) + for host_results in results_by_host.values(): + cls._report_host_results(recipe, host_results) + + @classmethod + def evaluate_results(cls, recipe, results): + #TODO split off into a separate evaluator class + for result in results: + recipe.add_result(True, + "Base CPU evaluation for host {}, cpu {}".format( + result.host.hostid, result.cpu)) + + @classmethod + def _divide_results_by_host(cls, results): + results_by_host = {} + for result in results: + if result.host not in results_by_host: + results_by_host[result.host] = [] + results_by_host[result.host].append(result) + return results_by_host + + @classmethod + def _report_host_results(cls, recipe, results): + if not len(results): + return + + cpu_data = {} + desc = ["CPU Utilization on host {host}:".format( + host=results[0].host.hostid)] + for result in results: + utilization = result.utilization + cpu_data[result.cpu] = utilization + desc.append("cpu '{cpu}': {average:.2f} +-{deviation:.2f} {unit} per second" + .format(cpu=result.cpu, + average=utilization.average, + deviation=utilization.std_deviation, + unit=utilization.unit)) + + recipe.add_result(True, "\n".join(desc), data=cpu_data) + + @classmethod + def _aggregate_hostcpu_results(cls, old, new): + if (old is not None and + (old.host is not new.host or old.cpu != new.cpu)): + raise MeasurementError("Aggregating incompatible CPU Results") + + new_result = AggregatedCPUMeasurementResults(new.host, new.cpu) + new_result.add_results(old) + new_result.add_results(new) + return new_result diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py new file mode 100644 index 0000000..203e104 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py @@ -0,0 +1,202 @@ +import signal +from lnst.RecipeCommon.Perf.Measurements.MeasurementError import MeasurementError +from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult + +class Flow(object): + def __init__(self, + type, + generator, generator_bind, + receiver, receiver_bind, + msg_size, duration, parallel_streams): + self._type = type + + self._generator = generator + self._generator_bind = generator_bind + self._receiver = 
receiver + self._receiver_bind = receiver_bind + + self._msg_size = msg_size + self._duration = duration + self._parallel_streams = parallel_streams + + @property + def type(self): + return self._type + + @property + def generator(self): + return self._generator + + @property + def generator_bind(self): + return self._generator_bind + + @property + def receiver(self): + return self._receiver + + @property + def receiver_bind(self): + return self._receiver_bind + + @property + def msg_size(self): + return self._msg_size + + @property + def duration(self): + return self._duration + + @property + def parallel_streams(self): + return self._parallel_streams + +class FlowMeasurementResults(object): + def __init__(self, flow): + self._flow = flow + self._generator_results = None + self._generator_cpu_stats = None + self._receiver_results = None + self._receiver_cpu_stats = None + + @property + def flow(self): + return self._flow + + @property + def generator_results(self): + return self._generator_results + + @generator_results.setter + def generator_results(self, value): + self._generator_results = value + + @property + def generator_cpu_stats(self): + return self._generator_cpu_stats + + @generator_cpu_stats.setter + def generator_cpu_stats(self, value): + self._generator_cpu_stats = value + + @property + def receiver_results(self): + return self._receiver_results + + @receiver_results.setter + def receiver_results(self, value): + self._receiver_results = value + + @property + def receiver_cpu_stats(self): + return self._receiver_cpu_stats + + @receiver_cpu_stats.setter + def receiver_cpu_stats(self, value): + self._receiver_cpu_stats = value + +class AggregatedFlowMeasurementResults(FlowMeasurementResults): + def __init__(self, flow): + self._flow = flow + self._generator_results = SequentialPerfResult() + self._generator_cpu_stats = SequentialPerfResult() + self._receiver_results = SequentialPerfResult() + self._receiver_cpu_stats = SequentialPerfResult() + self._individual_results = [] + + @property + def individual_results(self): + return self._individual_results + + def add_results(self, results): + if results is None: + return + elif isinstance(results, AggregatedFlowMeasurementResults): + self.individual_results.extend(results.individual_results) + self.generator_results.extend(results.generator_results) + self.generator_cpu_stats.extend(results.generator_cpu_stats) + self.receiver_results.extend(results.receiver_results) + self.receiver_cpu_stats.extend(results.receiver_cpu_stats) + elif isinstance(results, FlowMeasurementResults): + self.individual_results.append(results) + self.generator_results.append(results.generator_results) + self.generator_cpu_stats.append(results.generator_cpu_stats) + self.receiver_results.append(results.receiver_results) + self.receiver_cpu_stats.append(results.receiver_cpu_stats) + else: + raise MeasurementError("Adding incorrect results.") + +class BaseFlowMeasurement(BaseMeasurement): + @classmethod + def report_results(cls, recipe, results): + for flow_results in results: + cls._report_flow_results(recipe, flow_results) + + @classmethod + def evaluate_results(cls, recipe, results): + #TODO split off into a separate evaluator class + for flow_results in results: + if flow_results.generator_results.average > 0: + recipe.add_result(True, "Generator reported non-zero throughput") + else: + recipe.add_result(False, "Generator reported zero throughput") + + if flow_results.receiver_results.average > 0: + recipe.add_result(True, "Receiver reported non-zero 
throughput") + else: + recipe.add_result(False, "Receiver reported zero throughput") + + @classmethod + def _report_flow_results(cls, recipe, flow_results): + generator = flow_results.generator_results + generator_cpu = flow_results.generator_cpu_stats + receiver = flow_results.receiver_results + receiver_cpu = flow_results.receiver_cpu_stats + + desc = [] + desc.append("Generator measured throughput: {tput:.2f} +-{deviation:.2f}({percentage:.2f}%) {unit} per second." + .format(tput=generator.average, + deviation=generator.std_deviation, + percentage=(generator.std_deviation/generator.average) * 100, + unit=generator.unit)) + desc.append("Generator process CPU data: {cpu:.2f} +-{cpu_deviation:.2f} {cpu_unit} per second." + .format(cpu=generator_cpu.average, + cpu_deviation=generator_cpu.std_deviation, + cpu_unit=generator_cpu.unit)) + desc.append("Receiver measured throughput: {tput:.2f} +-{deviation:.2f}({percentage:.2}%) {unit} per second." + .format(tput=receiver.average, + deviation=receiver.std_deviation, + percentage=(receiver.std_deviation/receiver.average) * 100, + unit=receiver.unit)) + desc.append("Receiver process CPU data: {cpu:.2f} +-{cpu_deviation:.2f} {cpu_unit} per second." + .format(cpu=receiver_cpu.average, + cpu_deviation=receiver_cpu.std_deviation, + cpu_unit=receiver_cpu.unit)) + + #TODO add flow description + recipe.add_result(True, "\n".join(desc), data = dict( + generator_flow_data=generator, + generator_cpu_data=generator_cpu, + receiver_flow_data=receiver, + receiver_cpu_data=receiver_cpu)) + + @classmethod + def aggregate_results(cls, old, new): + aggregated = [] + if old is None: + old = [None] * len(new) + for old_flow, new_flow in zip(old, new): + aggregated.append(cls._aggregate_flows(old_flow, new_flow)) + return aggregated + + @classmethod + def _aggregate_flows(cls, old_flow, new_flow): + if old_flow is not None and old_flow.flow is not new_flow.flow: + raise MeasurementError("Aggregating incompatible Flows") + + new_result = AggregatedFlowMeasurementResults(new_flow.flow) + + new_result.add_results(old_flow) + new_result.add_results(new_flow) + return new_result diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py new file mode 100644 index 0000000..8059308 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py @@ -0,0 +1,29 @@ +class BaseMeasurement(object): + def __init__(self, conf): + self._conf = conf + + @property + def conf(self): + return self._conf + + def start(self): + raise NotImplementedError() + + def finish(self): + raise NotImplementedError() + + def collect_results(self): + raise NotImplementedError() + + @classmethod + def report_results(recipe, results): + raise NotImplementedError() + + @classmethod + def evaluate_results(recipe, results): + #TODO split off into separate evaluator classes + raise NotImplementedError() + + @classmethod + def aggregate_results(first, second): + raise NotImplementedError() diff --git a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py new file mode 100644 index 0000000..69ac1c5 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py @@ -0,0 +1,157 @@ +import time + +from lnst.Common.IpAddress import ipaddress + +from lnst.Controller.Recipe import RecipeError +from lnst.Controller.RecipeResults import ResultLevel + +from lnst.RecipeCommon.Perf.Results import PerfInterval +from lnst.RecipeCommon.Perf.Results import 
SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import BaseFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import FlowMeasurementResults + +from lnst.Tests.Iperf import IperfClient, IperfServer + +class IperfFlowMeasurement(BaseFlowMeasurement): + def __init__(self, *args): + super(IperfFlowMeasurement, self).__init__(*args) + self._running_measurements = [] + self._finished_measurements = [] + + def start(self): + if len(self._running_measurements) > 0: + raise MeasurementError("Measurement already running!") + + test_flows = self._prepare_test_flows(self._conf) + + result = None + for flow in test_flows: + flow.server_job.start(bg=True) + + for flow in test_flows: + flow.client_job.start(bg=True) + + self._running_measurements = test_flows + + def finish(self): + test_flows = self._running_measurements + try: + for flow in test_flows: + client_iperf = flow.client_job.what + flow.client_job.wait(timeout=client_iperf.runtime_estimate()) + flow.server_job.wait(timeout=5) + finally: + for flow in test_flows: + if not flow.server_job.finished: + flow.server_job.kill() + if not flow.client_job.finished: + flow.client_job.kill() + + self._running_measurements = [] + self._finished_measurements = test_flows + + def collect_results(self): + test_flows = self._finished_measurements + + results = [] + for test_flow in test_flows: + flow_results = FlowMeasurementResults(test_flow.flow) + flow_results.generator_results = self._parse_job_streams( + test_flow.client_job) + flow_results.generator_cpu_stats = self._parse_job_cpu( + test_flow.client_job) + + flow_results.receiver_results = self._parse_job_streams( + test_flow.server_job) + flow_results.receiver_cpu_stats = self._parse_job_cpu( + test_flow.server_job) + + results.append(flow_results) + + return results + + def _prepare_test_flows(self, flows): + test_flows = [] + for flow in flows: + server_job = self._prepare_server(flow) + client_job = self._prepare_client(flow) + test_flow = NetworkFlowTest(flow, server_job, client_job) + test_flows.append(test_flow) + return test_flows + + def _prepare_server(self, flow): + host = flow.receiver + server_params = dict(bind = ipaddress(flow.receiver_bind), + oneoff = True) + + return host.prepare_job(IperfServer(**server_params), + job_level=ResultLevel.NORMAL) + + def _prepare_client(self, flow): + host = flow.generator + client_params = dict(server = ipaddress(flow.receiver_bind), + duration = flow.duration) + + if flow.type == "tcp_stream": + #tcp stream is the default for iperf3 + pass + elif flow.type == "udp_stream": + client_params["udp"] = True + elif flow.type == "sctp_stream": + client_params["sctp"] = True + else: + raise RecipeError("Unsupported flow type '{}'".format(flow.type)) + + if flow.parallel_streams > 1: + client_params["parallel"] = flow.parallel_streams + + if flow.msg_size: + client_params["blksize"] = flow.msg_size + + return host.prepare_job(IperfClient(**client_params), + job_level=ResultLevel.NORMAL) + + def _parse_job_streams(self, job): + result = ParallelPerfResult() + if not job.passed: + result.append(PerfInterval(0, 0, "bits")) + else: + for i in job.result["data"]["end"]["streams"]: + result.append(SequentialPerfResult()) + + for interval in job.result["data"]["intervals"]: + for i, stream in enumerate(interval["streams"]): + result[i].append(PerfInterval(stream["bytes"] * 8, + stream["seconds"], + "bits")) + return result + + def 
_parse_job_cpu(self, job): + if not job.passed: + return PerfInterval(0, 0, "cpu_percent") + else: + cpu_percent = job.result["data"]["end"]["cpu_utilization_percent"]["host_total"] + return PerfInterval(cpu_percent, 1, "percent") + +class NetworkFlowTest(object): + def __init__(self, flow, server_job, client_job): + self._flow = flow + self._server_job = server_job + self._client_job = client_job + + @property + def flow(self): + return self._flow + + @property + def server_job(self): + return self._server_job + + @property + def client_job(self): + return self._client_job + + @property + def duration(self): + return self._flow.duration diff --git a/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py b/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py new file mode 100644 index 0000000..66ed168 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py @@ -0,0 +1,4 @@ +from lnst.Common.LnstError import LnstError + +class MeasurementError(LnstError): + pass diff --git a/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py new file mode 100644 index 0000000..14e7f73 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py @@ -0,0 +1,88 @@ +import signal + +from lnst.RecipeCommon.Perf.Results import PerfInterval +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult +from lnst.RecipeCommon.Perf.Measurements.BaseCPUMeasurement import BaseCPUMeasurement +from lnst.RecipeCommon.Perf.Measurements.BaseCPUMeasurement import CPUMeasurementResults + +from lnst.Tests.CPUStatMonitor import CPUStatMonitor + +class StatCPUMeasurementResults(CPUMeasurementResults): + def __init__(self, *args): + super(StatCPUMeasurementResults, self).__init__(*args) + self._data = {} + + def update_intervals(self, intervals): + for key, interval in intervals.items(): + if key not in self._data: + self._data[key] = SequentialPerfResult() + self._data[key].append(interval) + + @property + def utilization(self): + return ParallelPerfResult([self._data["user"], self._data["nice"], + self._data["system"], self._data["irq"], self._data["softirq"], + self._data["steal"]]) + +class StatCPUMeasurement(BaseCPUMeasurement): + def __init__(self, *args): + super(StatCPUMeasurement, self).__init__(*args) + self._running_measurements = [] + self._finished_measurements = [] + + def start(self): + jobs = [] + for host in self._conf: + jobs.append(host.run(CPUStatMonitor(interval=1000),bg=True)) + self._running_measurements = jobs + + def finish(self): + jobs = self._running_measurements + try: + for job in jobs: + job.kill(signal.SIGINT) + job.wait() + finally: + for job in jobs: + if not job.finished: + job.kill() + + self._running_measurements = [] + self._finished_measurements = jobs + + def collect_results(self): + results = [] + for job in self._finished_measurements: + job_results = self._process_job(job) + results.extend(job_results) + + return results + + def _process_job(self, job): + host = job.host + job_results = {} + for sample in job.result["data"]: + parsed_sample = self._parse_sample(sample) + + for cpu, cpu_intervals in parsed_sample.items(): + if cpu not in job_results: + job_results[cpu] = StatCPUMeasurementResults(host, cpu) + cpu_results = job_results[cpu] + cpu_results.update_intervals(cpu_intervals) + + return job_results.values() + + def _parse_sample(self, sample): + result = {} + duration = sample["duration"] + for key, value in 
sample.items(): + if key.startswith("cpu"): + result[key] = self._create_cpu_intervals(duration, value) + return result + + def _create_cpu_intervals(self, duration, cpu_intervals): + result = {} + for key, value in cpu_intervals.items(): + result[key] = PerfInterval(value, duration, "time units") + return result diff --git a/lnst/RecipeCommon/TRexMeasurementTool.py b/lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py similarity index 100% rename from lnst/RecipeCommon/TRexMeasurementTool.py rename to lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py diff --git a/lnst/RecipeCommon/Perf/Measurements/__init__.py b/lnst/RecipeCommon/Perf/Measurements/__init__.py new file mode 100644 index 0000000..781e641 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/__init__.py @@ -0,0 +1,3 @@ +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import Flow +from lnst.RecipeCommon.Perf.Measurements.IperfFlowMeasurement import IperfFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements.StatCPUMeasurement import StatCPUMeasurement diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py index a26d999..d7d1aec 100644 --- a/lnst/Recipes/ENRT/BaseEnrtRecipe.py +++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py @@ -1,4 +1,3 @@ - from lnst.Common.LnstError import LnstError from lnst.Common.Parameters import Param, IntParam, StrParam, BoolParam from lnst.Common.IpAddress import AF_INET, AF_INET6 @@ -8,7 +7,9 @@ from lnst.Controller.Recipe import BaseRecipe from lnst.RecipeCommon.Ping import PingTestAndEvaluate, PingConf from lnst.RecipeCommon.Perf.Recipe import Recipe as PerfRecipe from lnst.RecipeCommon.Perf.Recipe import RecipeConf as PerfRecipeConf -from lnst.RecipeCommon.IperfMeasurementTool import IperfMeasurementTool +from lnst.RecipeCommon.Perf.Measurements import Flow as PerfFlow +from lnst.RecipeCommon.Perf.Measurements import IperfFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements import StatCPUMeasurement
class EnrtConfiguration(object): def __init__(self): @@ -79,14 +80,16 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe):
perf_duration = IntParam(default=60) perf_iterations = IntParam(default=5) - perf_streams = IntParam(default=1) + perf_parallel_streams = IntParam(default=1) perf_msg_size = IntParam(default=123)
perf_usr_comment = StrParam(default="")
perf_max_deviation = IntParam(default=10) #TODO required?
- perf_tool = Param(default=IperfMeasurementTool) + net_perf_tool = Param(default=IperfFlowMeasurement) + + cpu_perf_tool = Param(default=StatCPUMeasurement)
def test(self): main_config = self.test_wide_configuration() @@ -188,8 +191,22 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe): server_bind = server_nic.ips_filter(family=family)[0]
for perf_test in self.params.perf_tests: + flow = PerfFlow( + type = perf_test, + generator = client_netns, + generator_bind = client_bind, + receiver = server_netns, + receiver_bind = server_bind, + msg_size = self.params.perf_msg_size, + duration = self.params.perf_duration, + parallel_streams = self.params.perf_parallel_streams) + + flow_measurement = self.params.net_perf_tool([flow]) yield PerfRecipeConf( - measurements=[ ], + measurements=[ + self.params.cpu_perf_tool([client_netns, server_netns]), + flow_measurement + ], iterations=self.params.perf_iterations)
def _pin_dev_interrupts(self, dev, cpu):
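For illustration, a derived ENRT recipe could plug a different measurement implementation in through the new parameters; a minimal sketch, assuming class-attribute Param overrides work the same way they do in BaseEnrtRecipe itself (the recipe name is made up):

    from lnst.Common.Parameters import Param
    from lnst.RecipeCommon.Perf.Measurements import IperfFlowMeasurement
    from lnst.RecipeCommon.Perf.Measurements import StatCPUMeasurement
    from lnst.Recipes.ENRT.BaseEnrtRecipe import BaseEnrtRecipe

    class MyEnrtRecipe(BaseEnrtRecipe):
        # keep the defaults explicit; any compatible measurement class
        # (e.g. a TRex based flow measurement) could be plugged in instead
        net_perf_tool = Param(default=IperfFlowMeasurement)
        cpu_perf_tool = Param(default=StatCPUMeasurement)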
From: Ondrej Lichtner olichtne@redhat.com
No reason to use a shorthand... the object will accept multiline descriptions anyway and the SummaryFormatter should be able to deal with that.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/RecipeResults.py | 20 ++++++++++---------- lnst/Controller/RunSummaryFormatter.py | 2 +- 2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/lnst/Controller/RecipeResults.py b/lnst/Controller/RecipeResults.py index 05ce5fb..d19d6e8 100644 --- a/lnst/Controller/RecipeResults.py +++ b/lnst/Controller/RecipeResults.py @@ -38,7 +38,7 @@ class BaseResult(object): return self._success
@property - def short_desc(self): + def description(self): return "Short description of result if relevant"
@property @@ -76,8 +76,8 @@ class JobResult(BaseResult):
class JobStartResult(JobResult): """Generated automatically when a Job is succesfully started on a slave""" - @BaseResult.short_desc.getter - def short_desc(self): + @BaseResult.description.getter + def description(self): return "Job started: {}".format(str(self.job))
class JobFinishResult(JobResult): @@ -92,8 +92,8 @@ class JobFinishResult(JobResult): def success(self): return self._job.passed
- @BaseResult.short_desc.getter - def short_desc(self): + @BaseResult.description.getter + def description(self): return "Job finished: {}".format(str(self.job))
@BaseResult.data.getter @@ -105,11 +105,11 @@ class Result(BaseResult):
Will be created when the tester calls the Recipe interface for adding results.""" - def __init__(self, success, short_desc="", data=None, + def __init__(self, success, description="", data=None, level=None, data_level=None): super(Result, self).__init__(success)
- self._short_desc = short_desc + self._description = description self._data = data self._level = (level if isinstance(level, ResultLevel) @@ -118,9 +118,9 @@ class Result(BaseResult): if isinstance(data_level, ResultLevel) else ResultLevel.IMPORTANT+1)
- @BaseResult.short_desc.getter - def short_desc(self): - return self._short_desc + @BaseResult.description.getter + def description(self): + return self._description
@BaseResult.data.getter def data(self): diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py index a90efe4..ea9a6dd 100644 --- a/lnst/Controller/RunSummaryFormatter.py +++ b/lnst/Controller/RunSummaryFormatter.py @@ -105,7 +105,7 @@ class RunSummaryFormatter(object): output_lines.append("{res} {src}\t{desc}".format( res = self._format_success(res.success), src = self._format_source(res), - desc = res.short_desc)) + desc = res.description)
if res.data_level <= self._level: output_lines.extend(self._format_data(res.data))
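With the rename in place, a tester-facing result simply carries a full description string; a minimal illustration constructing the object directly (how a recipe registers results is outside this diff, and the numbers are purely made up):

    from lnst.Controller.RecipeResults import Result

    res = Result(
        success=True,
        description="CPU utilization measurement\n"
                    "host1: 42.3 percent\n"
                    "host2: 17.9 percent",
        data={"host1": 42.3, "host2": 17.9},
    )
    print(res.description)  # full, possibly multiline, text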
From: Ondrej Lichtner olichtne@redhat.com
If a result description is multiline, it should be added below the header line and its lines should be indented.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/RunSummaryFormatter.py | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py index ea9a6dd..670c1f7 100644 --- a/lnst/Controller/RunSummaryFormatter.py +++ b/lnst/Controller/RunSummaryFormatter.py @@ -11,6 +11,7 @@ __author__ = """ olichtne@redhat.com (Ondrej Lichtner) """
+from lnst.Common.Utils import indent from lnst.Common.Colours import decorate_with_preset from lnst.Controller.Common import ControllerError from lnst.Controller.MachineMapper import format_match_description @@ -102,10 +103,12 @@ class RunSummaryFormatter(object): except IndexError: pass
- output_lines.append("{res} {src}\t{desc}".format( + output_lines.append("{res} {src}{desc}".format( res = self._format_success(res.success), src = self._format_source(res), - desc = res.description) + desc = ("\t{}".format(res.description) + if res.description.count('\n') == 0 + else "\n{}".format(indent(res.description, 4)))))
if res.data_level <= self._level: output_lines.extend(self._format_data(res.data))
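So the summary keeps a short description on the header line (after a tab) and moves a multiline one below it, indented by four spaces, roughly like:

    PASS  <source>    single-line description stays on the header line
    PASS  <source>
        first line of a multiline description
        second line of a multiline description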
Tue, Oct 23, 2018 at 01:56:47PM CEST, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
The prepare_job method will create and return an lnst.Controller.Job object, the same as the Namespace.run method, but it won't send the command to start it to the Slave. Instead, the tester can call the Job.start method to send the start command later.
This could be used to achieve better grouping of time-related job starts. Currently it would only be useful if you intend to do resource-intensive work between Namespace.run calls, but I can imagine extending this functionality to provide a more intelligent synchronized start of multiple jobs.
Consider this just an idea, might be removed later.
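A rough usage sketch of the proposed API (host1/host2 stand for any matched machine or namespace objects, and the test module is just an example):

    # prepare the jobs first, nothing is sent to the slaves yet
    job1 = host1.prepare_job(CPUStatMonitor(interval=1000))
    job2 = host2.prepare_job(CPUStatMonitor(interval=1000))

    # do any expensive setup work here

    # the start commands are then sent back to back, so the jobs
    # begin as close together as this API currently allows
    job1.start(bg=True)
    job2.start(bg=True)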
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
lnst/Controller/Job.py | 9 +++++++++ lnst/Controller/Namespace.py | 5 +++++ 2 files changed, 14 insertions(+)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py index 89b8451..0e17934 100644 --- a/lnst/Controller/Job.py +++ b/lnst/Controller/Job.py @@ -146,6 +146,15 @@ class Job(object): else: return False
def start(self, bg=False, timeout=DEFAULT_TIMEOUT):
self._netns._machine.run_job(self)
if not bg:
if not self.wait(timeout):
logging.debug("Killing timed-out job")
self.kill()
return self
def wait(self, timeout=DEFAULT_TIMEOUT): """waits for the Job to finish for the specified amount of time
diff --git a/lnst/Controller/Namespace.py b/lnst/Controller/Namespace.py index af5bda1..b8bb2f7 100644 --- a/lnst/Controller/Namespace.py +++ b/lnst/Controller/Namespace.py @@ -80,6 +80,11 @@ class Namespace(object): returns a string name for any other namespace""" return self._name
def prepare_job(self, what, fail=False, json=False, desc=None,
job_level=ResultLevel.DEBUG):
return Job(self, what, expect=not fail, json=json, desc=desc,
level=job_level)
def run(self, what, bg=False, fail=False, timeout=DEFAULT_TIMEOUT, json=False, desc=None, job_level=ResultLevel.DEBUG): """
I am wondering why you did not reuse these new methods in Namespace.run()?
This could cause duplication of fixes in prepare_job/start() and run() in the future.
Looking at the differences between run() and start(), there's no exception handling in start().
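For illustration, the reuse could look roughly like this (a sketch only, ignoring the exception handling that run() currently has):

    # lnst/Controller/Namespace.py
    def run(self, what, bg=False, fail=False, timeout=DEFAULT_TIMEOUT,
            json=False, desc=None, job_level=ResultLevel.DEBUG):
        job = self.prepare_job(what, fail=fail, json=json, desc=desc,
                               job_level=job_level)
        return job.start(bg=bg, timeout=timeout)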
J.
Tue, Oct 23, 2018 at 01:56:49PM CEST, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
This test module can be used to periodically sample the /proc/stat file for statistics and report back a list of differences between the individual samples as well as the raw data.
It can be used to calculate per-CPU and system-wide CPU utilization.
Currently the test module samples until interrupted, so it should be run in the background and stopped with a job.kill(signal.SIGINT) call.
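A minimal usage sketch, where host stands for any matched machine object and the utilization formula is just one common way to read the counters:

    import signal
    from lnst.Tests.CPUStatMonitor import CPUStatMonitor

    monitor_job = host.run(CPUStatMonitor(interval=1000), bg=True)

    # run the measured workload here

    monitor_job.kill(signal.SIGINT)        # stops the sampling loop
    monitor_job.wait()
    samples = monitor_job.result["data"]   # list of per-interval difference dicts

    # system-wide utilization over the first interval, from the aggregate "cpu" line
    cpu = samples[0]["cpu"]
    busy = sum(cpu[k] for k in ("user", "nice", "system", "irq", "softirq", "steal"))
    total = busy + cpu["idle"] + cpu["iowait"]
    print("utilization: {:.1f}%".format(100.0 * busy / total))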
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
lnst/Tests/CPUStatMonitor.py | 113 +++++++++++++++++++++++++++++++++++ 1 file changed, 113 insertions(+) create mode 100644 lnst/Tests/CPUStatMonitor.py
diff --git a/lnst/Tests/CPUStatMonitor.py b/lnst/Tests/CPUStatMonitor.py new file mode 100644 index 0000000..9b4a104 --- /dev/null +++ b/lnst/Tests/CPUStatMonitor.py @@ -0,0 +1,113 @@ +import re +import time +import signal +from time import sleep +from lnst.Common.Parameters import IntParam +from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError, InterruptException
+def sigint_handler(signum, frame):
raise InterruptException()
+class CPUStatMonitor(BaseTestModule):
interval = IntParam(default=1000)
I guess this value is in milliseconds, correct? A note about the value's meaning would be good.
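i.e. something as small as this would help (an assumption about the intended meaning, based on the sleep() call below):

    interval = IntParam(default=1000)   # sampling period in milliseconds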
def run(self):
self._res_data = {}
raw_samples = []
try:
old_handler = signal.signal(signal.SIGINT, sigint_handler)
If an exception is raised in signal.signal, old_handler will be undefined and the 'finally' block would attempt to use the undefined value to reset the signal handler. I'm not quite sure whether signal() can raise an exception, however (I checked the Python 3 docs and it's only possible if threads are enabled). I would just consider moving this call before the try-except block.
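i.e. roughly:

    # install the handler before the try block so 'finally' always has
    # a defined old_handler to restore
    old_handler = signal.signal(signal.SIGINT, sigint_handler)
    raw_samples = []
    try:
        with open("/proc/stat") as stat:
            while True:
                # sampling loop body unchanged
                ...
    except InterruptException:
        pass
    finally:
        signal.signal(signal.SIGINT, old_handler)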
with open("/proc/stat") as stat:
while True:
stat.seek(0)
timestamp = time.time()
stat_lines = "".join(stat.readlines())
raw_samples.append({
"timestamp": timestamp,
"stat": stat_lines
})
sleep(self.params.interval / float(1000))
except InterruptException:
pass
finally:
signal.signal(signal.SIGINT, old_handler)
self._res_data["raw_data"] = raw_samples
self._res_data["data"] = self._process_samples(raw_samples)
return True
def _process_samples(self, samples):
result = []
prev_sample = None
for sample in samples:
if prev_sample is not None:
parsed_prev = self._parse_stat_lines(prev_sample["stat"])
parsed_cur = self._parse_stat_lines(sample["stat"])
interval = self._subtract_nested_dicts(parsed_cur, parsed_prev)
interval["duration"] = (sample["timestamp"] -
prev_sample["timestamp"])
result.append(interval)
prev_sample = sample
return result
def _subtract_nested_dicts(self, first, second):
result = {}
for key, val in first.items():
if isinstance(val, dict):
result[key] = self._subtract_nested_dicts(val, second[key])
else:
result[key] = val - second[key]
return result
def _parse_stat_lines(self, stat):
result = {}
for line in stat.split("\n"):
cpu_data = self._parse_cpu_stats(line)
if cpu_data:
result[cpu_data[0]] = cpu_data[1]
continue
intr_data = self._parse_intr_stats(line)
if intr_data:
result[intr_data[0]] = intr_data[1]
continue
m = re.match(r"^(.*?) (\d+)$", line)
if m:
result[m.group(1)] = int(m.group(2))
return result
def _parse_cpu_stats(self, stat_line):
result = {}
m = re.match(r"^(cpu\d*)\s+(\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+)$",
stat_line)
if m:
cpu = m.group(1)
result["user"] = int(m.group(2))
result["nice"] = int(m.group(3))
result["system"] = int(m.group(4))
result["idle"] = int(m.group(5))
result["iowait"] = int(m.group(6))
result["irq"] = int(m.group(7))
result["softirq"] = int(m.group(8))
result["steal"] = int(m.group(9))
result["guest"] = int(m.group(10))
result["guest_nice"] = int(m.group(11))
return cpu, result
else:
return None
def _parse_intr_stats(self, stat_line):
result = {}
m = re.match(r"^(intr|softirq) (\d+) (.*)$", stat_line)
if m:
result["total"] = int(m.group(2))
for i, irq in enumerate(m.group(3).split(" ")):
result[i] = int(irq)
return m.group(1), result
else:
return None
-- 2.19.1
On Tue, Oct 23, 2018 at 02:31:37PM +0200, Jan Tluka wrote:
Tue, Oct 23, 2018 at 01:56:47PM CEST, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
The prepare_job method will create and return an lnst.Controller.Job object, the same as the Namespace.run method, but it won't send the command to start it to the Slave. Instead, the tester can call the Job.start method to send the start command later.
This could be used to achieve better grouping of time-related job starts. Currently it would only be useful if you intend to do resource-intensive work between Namespace.run calls, but I can imagine extending this functionality to provide a more intelligent synchronized start of multiple jobs.
Consider this just an idea, might be removed later.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
lnst/Controller/Job.py | 9 +++++++++ lnst/Controller/Namespace.py | 5 +++++ 2 files changed, 14 insertions(+)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py index 89b8451..0e17934 100644 --- a/lnst/Controller/Job.py +++ b/lnst/Controller/Job.py @@ -146,6 +146,15 @@ class Job(object): else: return False
def start(self, bg=False, timeout=DEFAULT_TIMEOUT):
self._netns._machine.run_job(self)
if not bg:
if not self.wait(timeout):
logging.debug("Killing timed-out job")
self.kill()
return self
def wait(self, timeout=DEFAULT_TIMEOUT): """waits for the Job to finish for the specified amount of time
diff --git a/lnst/Controller/Namespace.py b/lnst/Controller/Namespace.py index af5bda1..b8bb2f7 100644 --- a/lnst/Controller/Namespace.py +++ b/lnst/Controller/Namespace.py @@ -80,6 +80,11 @@ class Namespace(object): returns a string name for any other namespace""" return self._name
def prepare_job(self, what, fail=False, json=False, desc=None,
job_level=ResultLevel.DEBUG):
return Job(self, what, expect=not fail, json=json, desc=desc,
level=job_level)
def run(self, what, bg=False, fail=False, timeout=DEFAULT_TIMEOUT, json=False, desc=None, job_level=ResultLevel.DEBUG): """
I am wondering why you did not reuse these new methods in Namespace.run()?
This could cause duplication of fixes in prepare_job/start() and run() in the future.
Looking at the differences between run() and start(), there's no exception handling in start().
J.
Good catch on this one, I'll reuse the shorter methods.
The exception handling in the run method isn't really doing anything, so I'll remove it. It was originally there because I was basing the code on old lnst code, but it doesn't look like it makes sense in the "next" version anymore. The logic of comparing to the expected result moved into the Job class itself, where the "passed" attribute does the check.
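Roughly speaking, the check lives in something like this (the attribute names are my paraphrase, not the exact code from this series):

    # lnst/Controller/Job.py, sketch of the idea only
    @property
    def passed(self):
        """True when the job finished and its result matches the tester's expectation."""
        if self._res is None:
            return False
        return self._res["passed"] == self._expect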
-Ondrej