From: Ondrej Lichtner olichtne@redhat.com
Hi all,
this is a second version of this patchset, v2 changes include:
* the Namespace.run method reuses other methods instead of duplicating code
* CPUStatMonitor comment explains the interval parameter
* CPUStatMonitor bugfix for signal handling
* fixed typo in IperfFlowMeasurement unit for cpu utilization
* move and reimplementation of the TRex measurement class to fit into the whole redesign of the lnst.RecipeCommon.Perf package
* updates to the OvSDPDKPVPRecipe:
  * stability improvements
  * refactoring to use the redesigned lnst.RecipeCommon.Perf package
Thanks,
-Ondrej
Previous message:
Hi all,
The core of this patchset is the refactoring of the PerfTestAndEvaluate recipe template into the lnst.RecipeCommon.Perf package, which implements a template for generic performance measurement tests. It also moves the current MeasurementTools (Iperf and TRex) to fit this new model and adds CPU utilization measurements.
All of this is then incorporated into the BaseEnrtRecipe which is currently the main user of the Perf recipe template.
There are also a couple of minor bug fixes and updates to the generic LNST API, based on the experience of using it alongside the main changes of this patchset.
!!!!!!!! Some of these feel more like proposals at this point and I'm not sure if they're good ideas. This mostly concerns the prepare_job method of the Namespace class; it should definitely be thought through before being fully accepted. !!!!!!!!
Additional note: this patchset breaks ENRT/OvS_DPDK_PvP.py due to the reorganization of the Perf recipe. I wanted to send the patchset ASAP so that it can get some reviews; I'll work on updating the OvS_DPDK_PvP recipe while those reviews are coming in, and I won't merge this patchset without the additional fixes for OvS_DPDK_PvP.
Thanks,
-Ondrej
Ondrej Lichtner (20):
  lnst.Common.Utils: change std_deviation calculation
  lnst.Tests.Iperf: set target bitrate to 0
  lnst.Common.Parameters: add ListParam
  lnst.Tests.Iperf: fix parallel parameter
  lnst.Tests.Iperf: add runtime_estimate method
  lnst.Tests.Iperf: cleanup imports
  lnst.Controller.Job: change wait default timeout
  lnst.Controller.RecipeResults: add data_level attribute
  lnst.Controller.RunSummaryFormatter: fix header format
  lnst.Controller.Namespace: add prepare_job method for delayed start
  lnst.Controller.Job: expose the what attribute
  add lnst.Tests.CPUStatMonitor
  lnst.RecipeCommon.{Perf, PerfResult}: refactoring
  add lnst.RecipeCommon.Perf.Measurements package
  lnst.Controller.RecipeResults: rename desc to description
  lnst.Controller.RunSummaryFormatter: improve multiline result descriptions
  RecipeCommon.Perf.Measurements: move NetworkFlowTest
  lnst.Tests.TestPMD: add pmd_coremask parameter
  Recipes.ENRT.OvS_DPDK_PvP: stability improvements
  refactoring of OvS_DPDK_PvP recipe and related classes
 lnst/Common/Parameters.py                     |  18 ++
 lnst/Common/Utils.py                          |   8 +-
 lnst/Controller/Job.py                        |  21 +-
 lnst/Controller/Namespace.py                  |  32 +--
 lnst/Controller/Recipe.py                     |   6 +-
 lnst/Controller/RecipeResults.py              |  41 +++-
 lnst/Controller/RunSummaryFormatter.py        |  10 +-
 lnst/RecipeCommon/IperfMeasurementTool.py     |  83 -------
 lnst/RecipeCommon/Perf.py                     | 120 ----------
 .../Perf/Measurements/BaseCPUMeasurement.py   | 109 +++++++++
 .../Perf/Measurements/BaseFlowMeasurement.py  | 220 ++++++++++++++++++
 .../Perf/Measurements/BaseMeasurement.py      |  29 +++
 .../Perf/Measurements/IperfFlowMeasurement.py | 136 +++++++++++
 .../Perf/Measurements/MeasurementError.py     |   4 +
 .../Perf/Measurements/StatCPUMeasurement.py   |  88 +++++++
 .../Perf/Measurements/TRexFlowMeasurement.py  | 151 ++++++++++++
 .../Perf/Measurements/__init__.py             |   4 +
 lnst/RecipeCommon/Perf/Recipe.py              |  73 ++++++
 .../{PerfResult.py => Perf/Results.py}        |  65 +++---
 lnst/RecipeCommon/Perf/__init__.py            |   0
 lnst/RecipeCommon/TRexMeasurementTool.py      |  87 -------
 lnst/Recipes/ENRT/BaseEnrtRecipe.py           |  45 ++--
 lnst/Recipes/ENRT/OvS_DPDK_PvP.py             |  56 +++--
 lnst/Tests/CPUStatMonitor.py                  | 116 +++++++++
 lnst/Tests/Iperf.py                           |  14 +-
 lnst/Tests/TestPMD.py                         |   5 +-
 26 files changed, 1124 insertions(+), 417 deletions(-)
 delete mode 100644 lnst/RecipeCommon/IperfMeasurementTool.py
 delete mode 100644 lnst/RecipeCommon/Perf.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/MeasurementError.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/TRexFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/__init__.py
 create mode 100644 lnst/RecipeCommon/Perf/Recipe.py
 rename lnst/RecipeCommon/{PerfResult.py => Perf/Results.py} (72%)
 create mode 100644 lnst/RecipeCommon/Perf/__init__.py
 delete mode 100644 lnst/RecipeCommon/TRexMeasurementTool.py
 create mode 100644 lnst/Tests/CPUStatMonitor.py
From: Ondrej Lichtner olichtne@redhat.com
The old algorithm works and has the advantage of a single pass through the value array. However, with identical small values (less than 1), floating point rounding can make len(values)*s2 - s1**2 come out slightly negative, so it attempts to calculate the square root of a negative number instead of properly returning 0. Example: [0.031, 0.031, 0.031, 0.031, 0.031]
The new algorithm is less efficient (it makes two passes over the value array) but shouldn't have the same issue.
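For illustration, a minimal standalone sketch (not part of the patch) comparing the two formulas on the example above; the exact rounding behaviour depends on the platform:

    import math

    values = [0.031, 0.031, 0.031, 0.031, 0.031]

    # old single-pass formula: n*s2 - s1**2 is mathematically 0 here,
    # but rounding may leave a tiny negative radicand
    s1 = sum(values)
    s2 = sum(v**2 for v in values)
    radicand = len(values) * s2 - s1**2
    print(radicand)
    # math.sqrt(radicand) raises "ValueError: math domain error"
    # whenever the radicand ends up slightly negative

    # new two-pass formula: the squared deviations are never negative
    avg = sum(values) / float(len(values))
    print(math.sqrt(sum((v - avg)**2 for v in values) / len(values)))
    # prints a non-negative deviation, typically 0.0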
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/Utils.py | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/lnst/Common/Utils.py b/lnst/Common/Utils.py
index 0a903be..f158ff2 100644
--- a/lnst/Common/Utils.py
+++ b/lnst/Common/Utils.py
@@ -271,12 +271,8 @@ def dict_to_dot(original_dict, prefix=""):
 def std_deviation(values):
     if len(values) <= 0:
         return 0.0
-    s1 = 0.0
-    s2 = 0.0
-    for val in values:
-        s1 += val
-        s2 += val**2
-    return (math.sqrt(len(values)*s2 - s1**2))/len(values)
+    avg = sum(values) / float(len(values))
+    return math.sqrt(sum([(float(i) - avg)**2 for i in values])/len(values))
 
 def deprecated(func):
     """
From: Ondrej Lichtner olichtne@redhat.com
This is important for UDP tests, where iperf3's default target bitrate is 1 Mbit/s.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/Iperf.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py
index 10bf974..3b5777d 100644
--- a/lnst/Tests/Iperf.py
+++ b/lnst/Tests/Iperf.py
@@ -136,7 +136,7 @@ class IperfClient(IperfBase):
         else:
             test = ""
 
-        cmd = ("iperf3 -c {server} -J -t {duration}"
+        cmd = ("iperf3 -c {server} -b 0 -J -t {duration}"
                " {cpu} {test} {mss} {blksize} {parallel}"
                " {opts}".format(
                    server=self.params.server, duration=self.params.duration,
From: Ondrej Lichtner olichtne@redhat.com
ListParam accepts list objects (not other iterables such as tuples or strings) and takes an optional type parameter. If provided, the type is used to type check each individual item in the list. This is useful when you want a parameter that is a list of integers or strings.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/Parameters.py | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+)
diff --git a/lnst/Common/Parameters.py b/lnst/Common/Parameters.py
index 00c7832..b139d6c 100644
--- a/lnst/Common/Parameters.py
+++ b/lnst/Common/Parameters.py
@@ -120,6 +120,24 @@ class DictParam(Param):
         else:
             return value
 
+class ListParam(Param):
+    def __init__(self, type=None, **kwargs):
+        self._type = type
+        super(ListParam, self).__init__(**kwargs)
+
+    def type_check(self, value):
+        if not isinstance(value, list):
+            raise ParamError("Value must be a List. Not {}".format(type(value)))
+
+        if self._type is not None:
+            for item in value:
+                try:
+                    self._type.type_check(item)
+                except ParamError as e:
+                    raise ParamError("Value {} failed type check:\n{}"
+                                     .format(item, str(e)))
+        return value
+
 class Parameters(object):
     def __init__(self):
         self._attrs = {}
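As a usage sketch (hypothetical module, not part of the patch; it assumes IntParam implements the same type_check interface as the other Param classes):

    from lnst.Common.Parameters import IntParam, ListParam
    from lnst.Tests.BaseTestModule import BaseTestModule

    class CoreListModule(BaseTestModule):
        # hypothetical parameter: a list whose items must pass
        # IntParam's type check
        cores = ListParam(type=IntParam())

    # cores=[0, 1, 2]   -> passes the type check
    # cores=(0, 1, 2)   -> ParamError: Value must be a List.
    # cores=[0, "x", 2] -> ParamError raised by the per-item check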
From: Ondrej Lichtner olichtne@redhat.com
The parallel option should default to an empty string if the parameter wasn't set to anything.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/Iperf.py | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py
index 3b5777d..d673164 100644
--- a/lnst/Tests/Iperf.py
+++ b/lnst/Tests/Iperf.py
@@ -128,6 +128,8 @@ class IperfClient(IperfBase):
 
         if "parallel" in self.params:
             parallel = "-P {:d}".format(self.params.parallel)
+        else:
+            parallel = ""
 
         if self.params.udp:
             test = "--udp"
From: Ondrej Lichtner olichtne@redhat.com
Returns the estimated time required to complete the run method. Currently the estimate is just the test duration plus 5 seconds, a "safe" estimate of the overhead required for everything to start correctly.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/Iperf.py | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py
index d673164..89f05a8 100644
--- a/lnst/Tests/Iperf.py
+++ b/lnst/Tests/Iperf.py
@@ -105,6 +105,10 @@ class IperfClient(IperfBase):
         if self.params.udp and self.params.sctp:
             raise TestModuleError("Parameters udp and sctp are mutually exclusive!")
 
+    def runtime_estimate(self):
+        _duration_overhead = 5
+        return (self.params.duration + _duration_overhead)
+
     def _compose_cmd(self):
         port = ""
From: Ondrej Lichtner olichtne@redhat.com
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/Iperf.py | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py
index 89f05a8..970d994 100644
--- a/lnst/Tests/Iperf.py
+++ b/lnst/Tests/Iperf.py
@@ -1,11 +1,7 @@
 import logging
-import errno
-import re
-import signal
-import time
 import subprocess
 import json
-from lnst.Common.Parameters import IntParam, IpParam, StrParam, Param, BoolParam
+from lnst.Common.Parameters import IntParam, IpParam, StrParam, BoolParam
 from lnst.Common.Parameters import HostnameOrIpParam
 from lnst.Common.Utils import is_installed
 from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError
From: Ondrej Lichtner olichtne@redhat.com
Instead of waiting forever by default, we should wait for the DEFAULT_TIMEOUT amount and only wait forever when that is explicitly requested. If something is broken, freezing forever due to an unlimited wait is usually not what the test developer intended or expected.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Job.py | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py
index f1feae6..89b8451 100644
--- a/lnst/Controller/Job.py
+++ b/lnst/Controller/Job.py
@@ -14,6 +14,7 @@ olichtne@redhat.com (Ondrej Lichtner)
 import logging
 import signal
 from lnst.Common.JobError import JobError
+from lnst.Common.NetTestCommand import DEFAULT_TIMEOUT
 from lnst.Tests.BaseTestModule import BaseTestModule
 from lnst.Controller.RecipeResults import ResultLevel
 
@@ -145,13 +146,14 @@ class Job(object):
         else:
             return False
 
-    def wait(self, timeout=0):
+    def wait(self, timeout=DEFAULT_TIMEOUT):
         """waits for the Job to finish for the specified amount of time
 
         Args:
             timeout -- integer value indicating how long to wait for.
-                Default is 0, means wait forever. Don't use for infinitelly
-                running Jobs.
+                Default is DEFAULT_TIMEOUT.
+                Use zero to wait forever. Don't use for infinitely running
+                jobs...
                 If non-zero LNST uses a timed SIGALARM signal to return
                 from this method.
         Returns:
From: Ondrej Lichtner olichtne@redhat.com
The level attribute specifies the importance of the Result object; it is used for filtering or formatting purposes when processing the recipe results.
The data_level attribute is an extension of that by specifying the importance of the data provided with the result. It is used by the RunSummaryFormatter to filter out the data provided with the result.
The default for the Base class is ResultLevel.DEBUG, the same as for the level attribute.
For the JobResult class it's always level+1, for ease of use when formatting results:
* choose filter level -> show results
* choose filter level + 1 -> show results and their data
For the Result class used by the tester, the data_level default is ResultLevel.IMPORTANT+1 (so level+1, the same as for JobResult), but the user has the ability to change this when calling Recipe.add_result.
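As a usage sketch (hypothetical recipe code and data, not part of the patch), a recipe could attach data that only shows up at a higher verbosity level:

    from lnst.Controller.RecipeResults import ResultLevel

    # inside a recipe method; cpu_data is a hypothetical data dictionary
    self.add_result(
        True,
        "CPU utilization measured",
        data=cpu_data,
        level=ResultLevel.IMPORTANT,
        data_level=ResultLevel.IMPORTANT + 1,
    )
    # with the summary filter at IMPORTANT the description is printed,
    # while the attached data only shows up at IMPORTANT+1 and above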
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/Recipe.py              |  6 ++++--
 lnst/Controller/RecipeResults.py       | 21 +++++++++++++++++++--
 lnst/Controller/RunSummaryFormatter.py |  3 ++-
 3 files changed, 25 insertions(+), 5 deletions(-)
diff --git a/lnst/Controller/Recipe.py b/lnst/Controller/Recipe.py
index 080dd46..5a0a347 100644
--- a/lnst/Controller/Recipe.py
+++ b/lnst/Controller/Recipe.py
@@ -139,8 +139,10 @@ class BaseRecipe(object):
         else:
             return None
 
-    def add_result(self, success, description="", data=None):
-        self.current_run.add_result(Result(success, description, data))
+    def add_result(self, success, description="", data=None,
+                   level=None, data_level=None):
+        self.current_run.add_result(Result(success, description, data,
+                                           level, data_level))
 
 class RecipeRun(object):
     def __init__(self, match, desc=None):
diff --git a/lnst/Controller/RecipeResults.py b/lnst/Controller/RecipeResults.py
index 6b42a83..05ce5fb 100644
--- a/lnst/Controller/RecipeResults.py
+++ b/lnst/Controller/RecipeResults.py
@@ -49,6 +49,10 @@ class BaseResult(object):
     def level(self):
         return ResultLevel.DEBUG
 
+    @property
+    def data_level(self):
+        return ResultLevel.DEBUG
+
 class JobResult(BaseResult):
     """Base class for storing result data of Jobs
 
@@ -66,6 +70,10 @@ class JobResult(BaseResult):
     def level(self):
         return self.job.level
 
+    @BaseResult.data_level.getter
+    def data_level(self):
+        return self.job.level+1
+
 class JobStartResult(JobResult):
     """Generated automatically when a Job is succesfully started on a slave"""
     @BaseResult.short_desc.getter
@@ -98,12 +106,17 @@ class Result(BaseResult):
     Will be created when the tester calls the Recipe interface for adding
     results."""
     def __init__(self, success, short_desc="", data=None,
-            level=ResultLevel.IMPORTANT):
+            level=None, data_level=None):
         super(Result, self).__init__(success)
 
         self._short_desc = short_desc
         self._data = data
-        self._level = level
+        self._level = (level
+                       if isinstance(level, ResultLevel)
+                       else ResultLevel.IMPORTANT)
+        self._data_level = (data_level
+                            if isinstance(data_level, ResultLevel)
+                            else ResultLevel.IMPORTANT+1)
 
     @BaseResult.short_desc.getter
     def short_desc(self):
@@ -116,3 +129,7 @@ class Result(BaseResult):
     @BaseResult.level.getter
     def level(self):
         return self._level
+
+    @BaseResult.data_level.getter
+    def data_level(self):
+        return self._data_level
diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py
index 888b897..a5d505e 100644
--- a/lnst/Controller/RunSummaryFormatter.py
+++ b/lnst/Controller/RunSummaryFormatter.py
@@ -107,7 +107,8 @@ class RunSummaryFormatter(object):
                 src = self._format_source(res),
                 desc = res.short_desc))
 
-            output_lines.extend(self._format_data(res.data))
+            if res.data_level <= self._level:
+                output_lines.extend(self._format_data(res.data))
 
         output_lines.append("Overall result of this Run: {}".
                             format(self._format_success(overall_result)))
From: Ondrej Lichtner olichtne@redhat.com
Removing the tab between the result success and the result source to achieve a nicer spacing.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/RunSummaryFormatter.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py
index a5d505e..a90efe4 100644
--- a/lnst/Controller/RunSummaryFormatter.py
+++ b/lnst/Controller/RunSummaryFormatter.py
@@ -102,7 +102,7 @@ class RunSummaryFormatter(object):
             except IndexError:
                 pass
 
-            output_lines.append("{res}\t{src}\t{desc}".format(
+            output_lines.append("{res} {src}\t{desc}".format(
                 res = self._format_success(res.success),
                 src = self._format_source(res),
                 desc = res.short_desc))
From: Ondrej Lichtner olichtne@redhat.com
The prepare_job method creates and returns an lnst.Controller.Job object the same way the Namespace.run method does, but doesn't send the command to start it to the Slave. Instead, the tester can call the Job.start method themselves to send the start command later.
This could be used to achieve better grouping of time-related job starts. Currently it would only be useful if you intend to do resource-intensive work between Namespace.run calls, but I can imagine extending this functionality to provide a more intelligent synchronized start of multiple jobs.
Consider this just an idea, might be removed later.
v2:
* the Namespace.run method is now just a shortcut for prepare_job + job.start; the exception handling isn't necessary and is removed in this version
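A short usage sketch of the intended API (hypothetical host object, addresses and parameter values; only illustrative):

    import signal
    from lnst.Tests.Iperf import IperfClient
    from lnst.Tests.CPUStatMonitor import CPUStatMonitor

    # prepare both jobs up front so the setup work happens before
    # either one is started
    monitor_job = host.prepare_job(CPUStatMonitor(interval=1000))
    iperf_job = host.prepare_job(IperfClient(server=server_ip, duration=60))

    # the start commands are now sent close together
    monitor_job.start(bg=True)
    iperf_job.start(bg=True)

    iperf_job.wait()
    monitor_job.kill(signal.SIGINT)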
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/Job.py       |  9 +++++++++
 lnst/Controller/Namespace.py | 32 +++++++++-----------------------
 2 files changed, 18 insertions(+), 23 deletions(-)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py
index 89b8451..0e17934 100644
--- a/lnst/Controller/Job.py
+++ b/lnst/Controller/Job.py
@@ -146,6 +146,15 @@ class Job(object):
         else:
             return False
 
+    def start(self, bg=False, timeout=DEFAULT_TIMEOUT):
+        self._netns._machine.run_job(self)
+
+        if not bg:
+            if not self.wait(timeout):
+                logging.debug("Killing timed-out job")
+                self.kill()
+        return self
+
     def wait(self, timeout=DEFAULT_TIMEOUT):
         """waits for the Job to finish for the specified amount of time
 
diff --git a/lnst/Controller/Namespace.py b/lnst/Controller/Namespace.py
index af5bda1..c50152d 100644
--- a/lnst/Controller/Namespace.py
+++ b/lnst/Controller/Namespace.py
@@ -80,8 +80,13 @@ class Namespace(object):
         returns a string name for any other namespace"""
         return self._name
 
-    def run(self, what, bg=False, fail=False, timeout=DEFAULT_TIMEOUT,
-            json=False, desc=None, job_level=ResultLevel.DEBUG):
+    def prepare_job(self, what, fail=False, json=False, desc=None,
+                    job_level=ResultLevel.DEBUG):
+        return Job(self, what, expect=not fail, json=json, desc=desc,
+                   level=job_level)
+
+    def run(self, what, fail=False, json=False, desc=None,
+            job_level=ResultLevel.DEBUG, bg=False, timeout=DEFAULT_TIMEOUT):
         """
         Args:
             what (mandatory) -- what should be run on the host. Can be either a
@@ -105,27 +110,8 @@ class Namespace(object):
             running Job remotely and when the result data arrives from the
             Slave the Job object will be automatically updated.
         """
-
-        job = Job(self, what, expect=not fail, json=json, desc=desc,
-                  level=job_level)
-
-        try:
-            self._machine.run_job(job)
-
-            if not bg:
-                if not job.wait(timeout):
-                    logging.debug("Killing timed-out job")
-                    job.kill()
-        except:
-            raise
-        finally:
-            pass
-            #TODO check expect result here
-            # if bg=True:
-            #     add "job started" result
-            # else:
-            #     add job result
-
+        job = self.prepare_job(what, fail, json, desc, job_level)
+        job.start(bg, timeout)
         return job
 
     def __getattr__(self, name):
From: Ondrej Lichtner olichtne@redhat.com
This should be mostly useful for accessing test module object instances when the job hasn't started yet, e.g. changing the parameters before starting a prepared job, or figuring out the estimated runtime before calling job.wait.
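A small sketch of how this could be combined with prepare_job (hypothetical host object and parameter values):

    from lnst.Tests.Iperf import IperfClient

    # inspect the test module instance of a prepared, not yet started job
    job = host.prepare_job(IperfClient(server=server_ip, duration=60))
    estimate = job.what.runtime_estimate()

    job.start(bg=True)
    job.wait(timeout=estimate)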
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Job.py | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py
index 0e17934..d3786be 100644
--- a/lnst/Controller/Job.py
+++ b/lnst/Controller/Job.py
@@ -65,6 +65,10 @@ class Job(object):
             raise Exception("Id already set")
         self._id = val
 
+    @property
+    def what(self):
+        return self._what
+
     @property
     def host(self):
         """the initial namespace of the host the job is running on"""
From: Ondrej Lichtner olichtne@redhat.com
This test module can be used to periodically sample the /proc/stat file for statistics and report back a list of differences between the individual samples as well as the raw data.
Can be used to calculate per-cpu and system wide cpu utilization.
Currently the test module samples until interrupted, so it should be run in the background and stopped with a job.kill(signal.SIGINT) call.
v2:
* added a comment explaining the interval parameter
* added a default value for the old_handler variable; in case the signal.signal call fails, this is used to avoid a NameError exception in the finally block
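A usage sketch in a recipe (hypothetical host object; it assumes the module's _res_data is exposed through the job's result attribute, as with other test modules):

    import signal
    from lnst.Tests.CPUStatMonitor import CPUStatMonitor

    # start the monitor in the background, sampling every second
    monitor_job = host.run(CPUStatMonitor(interval=1000), bg=True)

    # ... run the measured workload here ...

    monitor_job.kill(signal.SIGINT)

    # "cpu" holds the deltas of the system-wide cpu line of /proc/stat
    for interval in monitor_job.result["data"]:
        cpu = interval["cpu"]
        total = float(sum(cpu.values()))
        busy = total - cpu["idle"] - cpu["iowait"]
        print("utilization over {:.2f}s: {:.2f}%".format(
            interval["duration"], 100.0 * busy / total))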
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/CPUStatMonitor.py | 116 +++++++++++++++++++++++++++++++++++ 1 file changed, 116 insertions(+) create mode 100644 lnst/Tests/CPUStatMonitor.py
diff --git a/lnst/Tests/CPUStatMonitor.py b/lnst/Tests/CPUStatMonitor.py new file mode 100644 index 0000000..aabec06 --- /dev/null +++ b/lnst/Tests/CPUStatMonitor.py @@ -0,0 +1,116 @@ +import re +import time +import signal +from time import sleep +from lnst.Common.Parameters import IntParam +from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError, InterruptException + +def sigint_handler(signum, frame): + raise InterruptException() + +class CPUStatMonitor(BaseTestModule): + #number of miliseconds to sleep between each sample + interval = IntParam(default=1000) + + def run(self): + self._res_data = {} + + raw_samples = [] + old_handler = None + try: + old_handler = signal.signal(signal.SIGINT, sigint_handler) + with open("/proc/stat") as stat: + while True: + stat.seek(0) + timestamp = time.time() + stat_lines = "".join(stat.readlines()) + raw_samples.append({ + "timestamp": timestamp, + "stat": stat_lines + }) + sleep(self.params.interval / float(1000)) + except InterruptException: + pass + finally: + if old_handler is not None: + signal.signal(signal.SIGINT, old_handler) + + self._res_data["raw_data"] = raw_samples + self._res_data["data"] = self._process_samples(raw_samples) + + return True + + def _process_samples(self, samples): + result = [] + prev_sample = None + for sample in samples: + if prev_sample is not None: + parsed_prev = self._parse_stat_lines(prev_sample["stat"]) + parsed_cur = self._parse_stat_lines(sample["stat"]) + + interval = self._subtract_nested_dicts(parsed_cur, parsed_prev) + interval["duration"] = (sample["timestamp"] - + prev_sample["timestamp"]) + + result.append(interval) + + prev_sample = sample + return result + + def _subtract_nested_dicts(self, first, second): + result = {} + for key, val in first.items(): + if isinstance(val, dict): + result[key] = self._subtract_nested_dicts(val, second[key]) + else: + result[key] = val - second[key] + return result + + def _parse_stat_lines(self, stat): + result = {} + for line in stat.split("\n"): + cpu_data = self._parse_cpu_stats(line) + if cpu_data: + result[cpu_data[0]] = cpu_data[1] + continue + + intr_data = self._parse_intr_stats(line) + if intr_data: + result[intr_data[0]] = intr_data[1] + continue + + m = re.match(r"^(.*?) (\d+)$", line) + if m: + result[m.group(1)] = int(m.group(2)) + return result + + def _parse_cpu_stats(self, stat_line): + result = {} + m = re.match(r"^(cpu\d*)\s+(\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+)$", + stat_line) + if m: + cpu = m.group(1) + result["user"] = int(m.group(2)) + result["nice"] = int(m.group(3)) + result["system"] = int(m.group(4)) + result["idle"] = int(m.group(5)) + result["iowait"] = int(m.group(6)) + result["irq"] = int(m.group(7)) + result["softirq"] = int(m.group(8)) + result["steal"] = int(m.group(9)) + result["guest"] = int(m.group(10)) + result["guest_nice"] = int(m.group(11)) + return cpu, result + else: + return None + + def _parse_intr_stats(self, stat_line): + result = {} + m = re.match(r"^(intr|softirq) (\d+) (.*)$", stat_line) + if m: + result["total"] = int(m.group(2)) + for i, irq in enumerate(m.group(3).split(" ")): + result[i] = int(irq) + return m.group(1), result + else: + return None
Wed, Nov 14, 2018 at 04:04:45PM CET, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
This test module can be used to periodically sample the /proc/stat file for statistics and report back a list of differences between the individual samples as well as the raw data.
Can be used to calculate per-cpu and system wide cpu utilization.
Currently the test module samples until interrupted, so it should be run in the background and stopped with a job.kill(signal.SIGINT) call.
v2:
- added a comment explaining the interval parameter
- added a default value for the old_handler variable. In case the signal.signal call fails this will be used to avoid a NameError exception in the finally block
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
lnst/Tests/CPUStatMonitor.py | 116 +++++++++++++++++++++++++++++++++++ 1 file changed, 116 insertions(+) create mode 100644 lnst/Tests/CPUStatMonitor.py
diff --git a/lnst/Tests/CPUStatMonitor.py b/lnst/Tests/CPUStatMonitor.py new file mode 100644 index 0000000..aabec06 --- /dev/null +++ b/lnst/Tests/CPUStatMonitor.py @@ -0,0 +1,116 @@ +import re +import time +import signal +from time import sleep +from lnst.Common.Parameters import IntParam +from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError, InterruptException
+def sigint_handler(signum, frame):
- raise InterruptException()
+class CPUStatMonitor(BaseTestModule):
- #number of miliseconds to sleep between each sample
- interval = IntParam(default=1000)
- def run(self):
self._res_data = {}
raw_samples = []
old_handler = None
try:
old_handler = signal.signal(signal.SIGINT, sigint_handler)
with open("/proc/stat") as stat:
while True:
stat.seek(0)
timestamp = time.time()
stat_lines = "".join(stat.readlines())
raw_samples.append({
"timestamp": timestamp,
"stat": stat_lines
})
sleep(self.params.interval / float(1000))
except InterruptException:
pass
finally:
if old_handler is not None:
signal.signal(signal.SIGINT, old_handler)
self._res_data["raw_data"] = raw_samples
self._res_data["data"] = self._process_samples(raw_samples)
return True
- def _process_samples(self, samples):
result = []
prev_sample = None
How about this?
prev_sample = samples[0]
for sample in samples[1:]:
    ...
With this you can remove the 'if prev_sample is not None:' condition.
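A minimal sketch of the suggested change applied to the method below (assuming samples is never empty; pairwise iteration with zip is an equivalent alternative that also handles an empty list):

    def _process_samples(self, samples):
        result = []
        # pair each sample with its predecessor; no None check needed
        for prev_sample, sample in zip(samples, samples[1:]):
            parsed_prev = self._parse_stat_lines(prev_sample["stat"])
            parsed_cur = self._parse_stat_lines(sample["stat"])

            interval = self._subtract_nested_dicts(parsed_cur, parsed_prev)
            interval["duration"] = (sample["timestamp"] -
                                    prev_sample["timestamp"])
            result.append(interval)
        return result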
for sample in samples:
if prev_sample is not None:
parsed_prev = self._parse_stat_lines(prev_sample["stat"])
parsed_cur = self._parse_stat_lines(sample["stat"])
interval = self._subtract_nested_dicts(parsed_cur, parsed_prev)
interval["duration"] = (sample["timestamp"] -
prev_sample["timestamp"])
result.append(interval)
prev_sample = sample
return result
- def _subtract_nested_dicts(self, first, second):
result = {}
for key, val in first.items():
if isinstance(val, dict):
result[key] = self._subtract_nested_dicts(val, second[key])
else:
result[key] = val - second[key]
return result
- def _parse_stat_lines(self, stat):
result = {}
for line in stat.split("\n"):
cpu_data = self._parse_cpu_stats(line)
if cpu_data:
result[cpu_data[0]] = cpu_data[1]
continue
intr_data = self._parse_intr_stats(line)
if intr_data:
result[intr_data[0]] = intr_data[1]
continue
m = re.match(r"^(.*?) (\d+)$", line)
if m:
result[m.group(1)] = int(m.group(2))
return result
- def _parse_cpu_stats(self, stat_line):
result = {}
m = re.match(r"^(cpu\d*)\s+(\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+)$",
stat_line)
if m:
cpu = m.group(1)
result["user"] = int(m.group(2))
result["nice"] = int(m.group(3))
result["system"] = int(m.group(4))
result["idle"] = int(m.group(5))
result["iowait"] = int(m.group(6))
result["irq"] = int(m.group(7))
result["softirq"] = int(m.group(8))
result["steal"] = int(m.group(9))
result["guest"] = int(m.group(10))
result["guest_nice"] = int(m.group(11))
return cpu, result
else:
return None
- def _parse_intr_stats(self, stat_line):
result = {}
m = re.match(r"^(intr|softirq) (\d+) (.*)$", stat_line)
if m:
result["total"] = int(m.group(2))
for i, irq in enumerate(m.group(3).split(" ")):
result[i] = int(irq)
return m.group(1), result
else:
return None
From: Ondrej Lichtner olichtne@redhat.com
Refactoring the Perf and PerfResult modules into a separate package lnst.RecipeCommon.Perf that will host everything related to the Perf recipe template.
I'm also considering later moving this into the lnst.Recipes package where it might make more sense as an actual recipe, with an example test method that will show off the basic usage of the template.
Changes summary:
* moved lnst/RecipeCommon/Perf.py to lnst/RecipeCommon/Perf/Recipe.py
* renamed the PerfTestAndEvaluate class to just Recipe since the "Perf" part is obvious from the namespace
* PerfConf class renamed to RecipeConf
* RecipeConf only contains configuration for the Recipe - the list of measurements to do and the number of repeats for these
* PerfMeasurementTool removed, this will be replaced by the Measurements class hierarchy added in the following commit
* added RecipeResults class to store aggregated measurement results associated with the current Recipe configuration
* moved lnst/RecipeCommon/PerfResult.py to lnst/RecipeCommon/Perf/Results.py
* removed StreamPerf, MultiStreamPerf, MultiRunPerf and replaced them with SequentialPerfResult and ParallelPerfResult to improve code reuse
* added the PerfResult base class
* set PerfInterval string formatting precision to 2 decimals
* improved code reuse for item validation in the PerfList class
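As a usage sketch of the reorganized API (recipe code deriving from the new Perf Recipe class; the measurement objects are hypothetical here, the concrete classes come in the following commit):

    from lnst.RecipeCommon.Perf.Recipe import Recipe as PerfRecipe
    from lnst.RecipeCommon.Perf.Recipe import RecipeConf as PerfRecipeConf

    # inside a recipe deriving from PerfRecipe; each measurement object
    # implements start/finish/collect_results/aggregate_results/
    # report_results/evaluate_results
    perf_conf = PerfRecipeConf(
        measurements=[cpu_measurement, flow_measurement],
        iterations=5,
    )

    results = self.perf_test(perf_conf)
    self.perf_report_and_evaluate(results)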
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/RecipeCommon/Perf.py               | 120 ------------------
 lnst/RecipeCommon/Perf/Recipe.py        |  73 +++++++++++
 .../{PerfResult.py => Perf/Results.py}  |  65 ++++------
 lnst/RecipeCommon/Perf/__init__.py      |   0
 lnst/Recipes/ENRT/BaseEnrtRecipe.py     |  20 +--
 5 files changed, 106 insertions(+), 172 deletions(-)
 delete mode 100644 lnst/RecipeCommon/Perf.py
 create mode 100644 lnst/RecipeCommon/Perf/Recipe.py
 rename lnst/RecipeCommon/{PerfResult.py => Perf/Results.py} (72%)
 create mode 100644 lnst/RecipeCommon/Perf/__init__.py
diff --git a/lnst/RecipeCommon/Perf.py b/lnst/RecipeCommon/Perf.py deleted file mode 100644 index 97aa0f1..0000000 --- a/lnst/RecipeCommon/Perf.py +++ /dev/null @@ -1,120 +0,0 @@ -from lnst.Controller.Recipe import BaseRecipe -from lnst.RecipeCommon.PerfResult import MultiRunPerf - -class PerfConf(object): - def __init__(self, - perf_tool, - test_type, - generator, generator_bind, - receiver, receiver_bind, - msg_size, duration, iterations, streams): - self._perf_tool = perf_tool - self._test_type = test_type - - self._generator = generator - self._generator_bind = generator_bind - self._receiver = receiver - self._receiver_bind = receiver_bind - - self._msg_size = msg_size - self._duration = duration - self._iterations = iterations - self._streams = streams - - @property - def perf_tool(self): - return self._perf_tool - - @property - def generator(self): - return self._generator - - @property - def generator_bind(self): - return self._generator_bind - - @property - def receiver(self): - return self._receiver - - @property - def receiver_bind(self): - return self._receiver_bind - - @property - def test_type(self): - return self._test_type - - @property - def msg_size(self): - return self._msg_size - - @property - def duration(self): - return self._duration - - @property - def iterations(self): - return self._iterations - - @property - def streams(self): - return self._streams - -class PerfMeasurementTool(object): - @staticmethod - def perf_measure(perf_conf): - raise NotImplementedError - -class PerfTestAndEvaluate(BaseRecipe): - def perf_test(self, perf_conf): - generator_measurements = MultiRunPerf() - receiver_measurements = MultiRunPerf() - for i in range(perf_conf.iterations): - tx, rx = perf_conf.perf_tool.perf_measure(perf_conf) - - if tx: - generator_measurements.append(tx) - if rx: - receiver_measurements.append(rx) - - return generator_measurements, receiver_measurements - - def perf_evaluate_and_report(self, perf_conf, results, baseline): - self.perf_evaluate(perf_conf, results, baseline) - - self.perf_report(perf_conf, results, baseline) - - def perf_evaluate(self, perf_conf, results, baseline): - generator, receiver = results - - if generator.average > 0: - self.add_result(True, "Generator reported non-zero throughput") - else: - self.add_result(False, "Generator reported zero throughput") - - if receiver.average > 0: - self.add_result(True, "Receiver reported non-zero throughput") - else: - self.add_result(False, "Receiver reported zero throughput") - - - def perf_report(self, perf_conf, results, baseline): - generator, receiver = results - - self.add_result( - True, - "Generator measured throughput: {tput} +-{deviation}({percentage:.2}%) {unit} per second" - .format(tput=generator.average, - deviation=generator.std_deviation, - percentage=(generator.std_deviation/generator.average) * 100, - unit=generator.unit), - data = generator) - self.add_result( - True, - "Receiver measured throughput: {tput} +-{deviation}({percentage:.2}%) {unit} per second" - .format(tput=receiver.average, - deviation=receiver.std_deviation, - percentage=(receiver.std_deviation/receiver.average) * 100, - unit=receiver.unit), - data = receiver) diff --git a/lnst/RecipeCommon/Perf/Recipe.py b/lnst/RecipeCommon/Perf/Recipe.py new file mode 100644 index 0000000..e305310 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Recipe.py @@ -0,0 +1,73 @@ +from lnst.Controller.Recipe import BaseRecipe +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import 
ParallelPerfResult + +class RecipeConf(object): + def __init__(self, measurements, iterations): + self._measurements = measurements + self._iterations = iterations + + @property + def measurements(self): + return self._measurements + + @property + def iterations(self): + return self._iterations + +class RecipeResults(object): + def __init__(self, perf_conf): + self._perf_conf = perf_conf + self._results = {} + + @property + def perf_conf(self): + return self._perf_conf + + @property + def results(self): + return self._results + + def add_measurement_results(self, measurement, new_results): + aggregated_results = self._results.get(measurement, None) + aggregated_results = measurement.aggregate_results( + aggregated_results, new_results) + self._results[measurement] = aggregated_results + +class Recipe(BaseRecipe): + def perf_test(self, recipe_conf): + results = RecipeResults(recipe_conf) + + for i in range(recipe_conf.iterations): + run_results = [] + for measurement in recipe_conf.measurements: + measurement.start() + for measurement in reversed(recipe_conf.measurements): + measurement.finish() + for measurement in recipe_conf.measurements: + measurement_results = measurement.collect_results() + results.add_measurement_results( + measurement, measurement_results) + + return results + + def perf_report_and_evaluate(self, results): + self.perf_report(results) + + self.perf_evaluate(results) + + def perf_report(self, recipe_results): + if not recipe_results: + self.add_result(False, "No results available to report.") + return + + for measurement, results in recipe_results.results.items(): + measurement.report_results(self, results) + + def perf_evaluate(self, recipe_results): + if not recipe_results: + self.add_result(False, "No results available to evaluate.") + return + + for measurement, results in recipe_results.results.items(): + measurement.evaluate_results(self, results) diff --git a/lnst/RecipeCommon/PerfResult.py b/lnst/RecipeCommon/Perf/Results.py similarity index 72% rename from lnst/RecipeCommon/PerfResult.py rename to lnst/RecipeCommon/Perf/Results.py index f48fd0a..4591447 100644 --- a/lnst/RecipeCommon/PerfResult.py +++ b/lnst/RecipeCommon/Perf/Results.py @@ -10,7 +10,20 @@ class PerfStatMixin(object): def std_deviation(self): return std_deviation([i.average for i in self])
-class PerfInterval(PerfStatMixin): +class PerfResult(PerfStatMixin): + @property + def value(self): + raise NotImplementedError() + + @property + def duration(self): + raise NotImplementedError() + + @property + def unit(self): + raise NotImplementedError() + +class PerfInterval(PerfResult): def __init__(self, value, duration, unit): self._value = value self._duration = duration @@ -33,20 +46,13 @@ class PerfInterval(PerfStatMixin): return 0
def __str__(self): - return "{} {} in {} seconds".format( - self.value, self.unit, self.duration) + return "{:.2f} {} in {:.2f} seconds".format( + float(self.value), self.unit, float(self.duration))
class PerfList(list): - _sub_type = None - def __init__(self, iterable=[]): - unit = None - for i, item in enumerate(iterable): - if not isinstance(item, self._sub_type): - raise LnstError("{} only accepts {} objects." - .format(self.__class__.__name__, - self._sub_type.__name__)) + self._validate_item_type(item)
if i == 0: unit = item.unit @@ -57,14 +63,17 @@ class PerfList(list): super(PerfList, self).__init__(iterable)
def _validate_item(self, item): - if not isinstance(item, self._sub_type): - raise LnstError("{} only accepts {} objects." - .format(self.__class__.__name__, - self._sub_type.__name__)) + self._validate_item_type(item)
if len(self) > 0 and item.unit != self[0].unit: raise LnstError("PerfList items must have the same unit.")
+ def _validate_item_type(self, item): + if (not isinstance(item, PerfInterval) and + not isinstance(item, PerfList)): + raise LnstError("{} only accepts PerfInterval or PerfList objects." + .format(self.__class__.__name__)) + def append(self, item): self._validate_item(item)
@@ -104,9 +113,7 @@ class PerfList(list):
super(PerfList, self).__setslice__(i, j, iterable)
-class StreamPerf(PerfList, PerfStatMixin): - _sub_type = PerfInterval - +class SequentialPerfResult(PerfResult, PerfList): @property def value(self): return sum([i.value for i in self]) @@ -122,9 +129,7 @@ class StreamPerf(PerfList, PerfStatMixin): else: return None
-class MultiStreamPerf(PerfList, PerfStatMixin): - _sub_type = StreamPerf - +class ParallelPerfResult(PerfResult, PerfList): @property def value(self): return sum([i.value for i in self]) @@ -139,21 +144,3 @@ class MultiStreamPerf(PerfList, PerfStatMixin): return self[0].unit else: return None - -class MultiRunPerf(PerfList, PerfStatMixin): - _sub_type = MultiStreamPerf - - @property - def value(self): - return sum([i.value for i in self]) - - @property - def duration(self): - return sum([i.duration for i in self]) - - @property - def unit(self): - if len(self) > 0: - return self[0].unit - else: - return None diff --git a/lnst/RecipeCommon/Perf/__init__.py b/lnst/RecipeCommon/Perf/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py index 9e2b674..a26d999 100644 --- a/lnst/Recipes/ENRT/BaseEnrtRecipe.py +++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py @@ -6,7 +6,8 @@ from lnst.Common.IpAddress import AF_INET, AF_INET6 from lnst.Controller.Recipe import BaseRecipe
from lnst.RecipeCommon.Ping import PingTestAndEvaluate, PingConf -from lnst.RecipeCommon.Perf import PerfTestAndEvaluate, PerfConf +from lnst.RecipeCommon.Perf.Recipe import Recipe as PerfRecipe +from lnst.RecipeCommon.Perf.Recipe import RecipeConf as PerfRecipeConf from lnst.RecipeCommon.IperfMeasurementTool import IperfMeasurementTool
class EnrtConfiguration(object): @@ -61,7 +62,7 @@ class EnrtSubConfiguration(object): def offload_settings(self, value): self._offload_settings = value
-class BaseEnrtRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): +class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe): ip_versions = Param(default=("ipv4", "ipv6")) perf_tests = Param(default=("tcp_stream", "udp_stream", "sctp_stream"))
@@ -101,7 +102,7 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): for perf_config in self.generate_perf_configurations(main_config, sub_config): result = self.perf_test(perf_config) - self.perf_evaluate_and_report(perf_config, result, baseline=None) + self.perf_report_and_evaluate(result)
self.remove_sub_configuration(main_config, sub_config)
@@ -187,16 +188,9 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): server_bind = server_nic.ips_filter(family=family)[0]
for perf_test in self.params.perf_tests: - yield PerfConf(perf_tool = self.params.perf_tool, - test_type = perf_test, - generator = client_netns, - generator_bind = client_bind, - receiver = server_netns, - receiver_bind = server_bind, - msg_size = self.params.perf_msg_size, - duration = self.params.perf_duration, - iterations = self.params.perf_iterations, - streams = self.params.perf_streams) + yield PerfRecipeConf( + measurements=[ ], + iterations=self.params.perf_iterations)
def _pin_dev_interrupts(self, dev, cpu): netns = dev.netns
Wed, Nov 14, 2018 at 04:04:46PM CET, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
Refactoring the Perf and PerfResult modules into a separate package lnst.RecipeCommon.Perf that will host everything related to the Perf recipe template.
I'm also considering later moving this into the lnst.Recipes package where it might make more sense as an actual recipe, with an example test method that will show off the basic usage of the template.
Changes summary:
- moved lnst/RecipeCommon/Perf.py to lnst/RecipeCommon/Perf/Recipe.py
- renamed PerfTestAndEvaluate class to just Recipe since the "Perf" part
is obvious from the namespace
- PerfConf class renamed to RecipeConf
- RecipeConf only contains configuration for the Recipe - the
list of measurements to do and the number of repeats for these
- PerfMeasurementTool removed, this will be replaced by the Measurements
class hierarchy added in the following commit
- added RecipeResults class to store aggregated measurement results
associated with the current Recipe configuration
- moved lnst/RecipeCommon/PerfResults.py to lnst.RecipeCommon/Perf/Results.py
- removed StreamPerf, MultiStreamPerf, MultiRunPerf and replaced them
with SequentialPerfResult and ParallelPerfResult to improve code reuse
- added the PerfResult base class
- set PerfInterval string formatting precision to 2 decimals
- improved code reuse for item validation in PerfList class
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
lnst/RecipeCommon/Perf.py | 120 ------------------ lnst/RecipeCommon/Perf/Recipe.py | 73 +++++++++++ .../{PerfResult.py => Perf/Results.py} | 65 ++++------ lnst/RecipeCommon/Perf/__init__.py | 0 lnst/Recipes/ENRT/BaseEnrtRecipe.py | 20 +-- 5 files changed, 106 insertions(+), 172 deletions(-) delete mode 100644 lnst/RecipeCommon/Perf.py create mode 100644 lnst/RecipeCommon/Perf/Recipe.py rename lnst/RecipeCommon/{PerfResult.py => Perf/Results.py} (72%) create mode 100644 lnst/RecipeCommon/Perf/__init__.py
diff --git a/lnst/RecipeCommon/Perf.py b/lnst/RecipeCommon/Perf.py deleted file mode 100644 index 97aa0f1..0000000 --- a/lnst/RecipeCommon/Perf.py +++ /dev/null @@ -1,120 +0,0 @@ -from lnst.Controller.Recipe import BaseRecipe -from lnst.RecipeCommon.PerfResult import MultiRunPerf
-class PerfConf(object):
- def __init__(self,
perf_tool,
test_type,
generator, generator_bind,
receiver, receiver_bind,
msg_size, duration, iterations, streams):
self._perf_tool = perf_tool
self._test_type = test_type
self._generator = generator
self._generator_bind = generator_bind
self._receiver = receiver
self._receiver_bind = receiver_bind
self._msg_size = msg_size
self._duration = duration
self._iterations = iterations
self._streams = streams
- @property
- def perf_tool(self):
return self._perf_tool
- @property
- def generator(self):
return self._generator
- @property
- def generator_bind(self):
return self._generator_bind
- @property
- def receiver(self):
return self._receiver
- @property
- def receiver_bind(self):
return self._receiver_bind
- @property
- def test_type(self):
return self._test_type
- @property
- def msg_size(self):
return self._msg_size
- @property
- def duration(self):
return self._duration
- @property
- def iterations(self):
return self._iterations
- @property
- def streams(self):
return self._streams
-class PerfMeasurementTool(object):
- @staticmethod
- def perf_measure(perf_conf):
raise NotImplementedError
-class PerfTestAndEvaluate(BaseRecipe):
- def perf_test(self, perf_conf):
generator_measurements = MultiRunPerf()
receiver_measurements = MultiRunPerf()
for i in range(perf_conf.iterations):
tx, rx = perf_conf.perf_tool.perf_measure(perf_conf)
if tx:
generator_measurements.append(tx)
if rx:
receiver_measurements.append(rx)
return generator_measurements, receiver_measurements
- def perf_evaluate_and_report(self, perf_conf, results, baseline):
self.perf_evaluate(perf_conf, results, baseline)
self.perf_report(perf_conf, results, baseline)
- def perf_evaluate(self, perf_conf, results, baseline):
generator, receiver = results
if generator.average > 0:
self.add_result(True, "Generator reported non-zero throughput")
else:
self.add_result(False, "Generator reported zero throughput")
if receiver.average > 0:
self.add_result(True, "Receiver reported non-zero throughput")
else:
self.add_result(False, "Receiver reported zero throughput")
- def perf_report(self, perf_conf, results, baseline):
generator, receiver = results
self.add_result(
True,
"Generator measured throughput: {tput} +-{deviation}({percentage:.2}%) {unit} per second"
.format(tput=generator.average,
deviation=generator.std_deviation,
percentage=(generator.std_deviation/generator.average) * 100,
unit=generator.unit),
data = generator)
self.add_result(
True,
"Receiver measured throughput: {tput} +-{deviation}({percentage:.2}%) {unit} per second"
.format(tput=receiver.average,
deviation=receiver.std_deviation,
percentage=(receiver.std_deviation/receiver.average) * 100,
unit=receiver.unit),
data = receiver)
diff --git a/lnst/RecipeCommon/Perf/Recipe.py b/lnst/RecipeCommon/Perf/Recipe.py new file mode 100644 index 0000000..e305310 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Recipe.py @@ -0,0 +1,73 @@ +from lnst.Controller.Recipe import BaseRecipe +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult
+class RecipeConf(object):
- def __init__(self, measurements, iterations):
self._measurements = measurements
self._iterations = iterations
- @property
- def measurements(self):
return self._measurements
- @property
- def iterations(self):
return self._iterations
+class RecipeResults(object):
- def __init__(self, perf_conf):
self._perf_conf = perf_conf
self._results = {}
- @property
- def perf_conf(self):
return self._perf_conf
- @property
- def results(self):
return self._results
- def add_measurement_results(self, measurement, new_results):
aggregated_results = self._results.get(measurement, None)
aggregated_results = measurement.aggregate_results(
aggregated_results, new_results)
self._results[measurement] = aggregated_results
+class Recipe(BaseRecipe):
- def perf_test(self, recipe_conf):
results = RecipeResults(recipe_conf)
for i in range(recipe_conf.iterations):
run_results = []
for measurement in recipe_conf.measurements:
measurement.start()
for measurement in reversed(recipe_conf.measurements):
I don't understand why it needs to be reversed here.
If I start the measurements as: m1 m2 m3 m4 I'd expect to finish them in the same order.
Please explain.
measurement.finish()
for measurement in recipe_conf.measurements:
measurement_results = measurement.collect_results()
results.add_measurement_results(
measurement, measurement_results)
return results
On Thu, Nov 15, 2018 at 10:23:42AM +0100, Jan Tluka wrote:
Wed, Nov 14, 2018 at 04:04:46PM CET, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
Refactoring the Perf and PerfResult modules into a separate package lnst.RecipeCommon.Perf that will host everything related to the Perf recipe template.
I'm also considering later moving this into the lnst.Recipes package where it might make more sense as an actual recipe, with an example test method that will show off the basic usage of the template.
Changes summary:
- moved lnst/RecipeCommon/Perf.py to lnst/RecipeCommon/Perf/Recipe.py
- renamed PerfTestAndEvaluate class to just Recipe since the "Perf" part
is obvious from the namespace
- PerfConf class renamed to RecipeConf
- RecipeConf only contains configuration for the Recipe - the
list of measurements to do and the number of repeats for these
- PerfMeasurementTool removed, this will be replaced by the Measurements
class hierarchy added in the following commit
- added RecipeResults class to store aggregated measurement results
associated with the current Recipe configuration
- moved lnst/RecipeCommon/PerfResults.py to lnst.RecipeCommon/Perf/Results.py
- removed StreamPerf, MultiStreamPerf, MultiRunPerf and replaced them
with SequentialPerfResult and ParallelPerfResult to improve code reuse
- added the PerfResult base class
- set PerfInterval string formatting precision to 2 decimals
- improved code reuse for item validation in PerfList class
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
lnst/RecipeCommon/Perf.py | 120 ------------------ lnst/RecipeCommon/Perf/Recipe.py | 73 +++++++++++ .../{PerfResult.py => Perf/Results.py} | 65 ++++------ lnst/RecipeCommon/Perf/__init__.py | 0 lnst/Recipes/ENRT/BaseEnrtRecipe.py | 20 +-- 5 files changed, 106 insertions(+), 172 deletions(-) delete mode 100644 lnst/RecipeCommon/Perf.py create mode 100644 lnst/RecipeCommon/Perf/Recipe.py rename lnst/RecipeCommon/{PerfResult.py => Perf/Results.py} (72%) create mode 100644 lnst/RecipeCommon/Perf/__init__.py
diff --git a/lnst/RecipeCommon/Perf.py b/lnst/RecipeCommon/Perf.py deleted file mode 100644 index 97aa0f1..0000000 --- a/lnst/RecipeCommon/Perf.py +++ /dev/null @@ -1,120 +0,0 @@ -from lnst.Controller.Recipe import BaseRecipe -from lnst.RecipeCommon.PerfResult import MultiRunPerf
-class PerfConf(object):
- def __init__(self,
perf_tool,
test_type,
generator, generator_bind,
receiver, receiver_bind,
msg_size, duration, iterations, streams):
self._perf_tool = perf_tool
self._test_type = test_type
self._generator = generator
self._generator_bind = generator_bind
self._receiver = receiver
self._receiver_bind = receiver_bind
self._msg_size = msg_size
self._duration = duration
self._iterations = iterations
self._streams = streams
- @property
- def perf_tool(self):
return self._perf_tool
- @property
- def generator(self):
return self._generator
- @property
- def generator_bind(self):
return self._generator_bind
- @property
- def receiver(self):
return self._receiver
- @property
- def receiver_bind(self):
return self._receiver_bind
- @property
- def test_type(self):
return self._test_type
- @property
- def msg_size(self):
return self._msg_size
- @property
- def duration(self):
return self._duration
- @property
- def iterations(self):
return self._iterations
- @property
- def streams(self):
return self._streams
-class PerfMeasurementTool(object):
- @staticmethod
- def perf_measure(perf_conf):
raise NotImplementedError
-class PerfTestAndEvaluate(BaseRecipe):
- def perf_test(self, perf_conf):
generator_measurements = MultiRunPerf()
receiver_measurements = MultiRunPerf()
for i in range(perf_conf.iterations):
tx, rx = perf_conf.perf_tool.perf_measure(perf_conf)
if tx:
generator_measurements.append(tx)
if rx:
receiver_measurements.append(rx)
return generator_measurements, receiver_measurements
- def perf_evaluate_and_report(self, perf_conf, results, baseline):
self.perf_evaluate(perf_conf, results, baseline)
self.perf_report(perf_conf, results, baseline)
- def perf_evaluate(self, perf_conf, results, baseline):
generator, receiver = results
if generator.average > 0:
self.add_result(True, "Generator reported non-zero throughput")
else:
self.add_result(False, "Generator reported zero throughput")
if receiver.average > 0:
self.add_result(True, "Receiver reported non-zero throughput")
else:
self.add_result(False, "Receiver reported zero throughput")
- def perf_report(self, perf_conf, results, baseline):
generator, receiver = results
self.add_result(
True,
"Generator measured throughput: {tput} +-{deviation}({percentage:.2}%) {unit} per second"
.format(tput=generator.average,
deviation=generator.std_deviation,
percentage=(generator.std_deviation/generator.average) * 100,
unit=generator.unit),
data = generator)
self.add_result(
True,
"Receiver measured throughput: {tput} +-{deviation}({percentage:.2}%) {unit} per second"
.format(tput=receiver.average,
deviation=receiver.std_deviation,
percentage=(receiver.std_deviation/receiver.average) * 100,
unit=receiver.unit),
data = receiver)
diff --git a/lnst/RecipeCommon/Perf/Recipe.py b/lnst/RecipeCommon/Perf/Recipe.py new file mode 100644 index 0000000..e305310 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Recipe.py @@ -0,0 +1,73 @@ +from lnst.Controller.Recipe import BaseRecipe +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult
+class RecipeConf(object):
- def __init__(self, measurements, iterations):
self._measurements = measurements
self._iterations = iterations
- @property
- def measurements(self):
return self._measurements
- @property
- def iterations(self):
return self._iterations
+class RecipeResults(object):
- def __init__(self, perf_conf):
self._perf_conf = perf_conf
self._results = {}
- @property
- def perf_conf(self):
return self._perf_conf
- @property
- def results(self):
return self._results
- def add_measurement_results(self, measurement, new_results):
aggregated_results = self._results.get(measurement, None)
aggregated_results = measurement.aggregate_results(
aggregated_results, new_results)
self._results[measurement] = aggregated_results
+class Recipe(BaseRecipe):
- def perf_test(self, recipe_conf):
results = RecipeResults(recipe_conf)
for i in range(recipe_conf.iterations):
run_results = []
for measurement in recipe_conf.measurements:
measurement.start()
for measurement in reversed(recipe_conf.measurements):
I don't understand why it needs to be reversed here.
If I start the measurements as: m1 m2 m3 m4 I'd expect to finish them in the same order.
Please explain.
Start and finish aren't perfectly synchronized. Specifying an order (m1 m2 m3) makes the tester (at least in my case) feel that they'll start at the same time, or at worst in the specified order. In that case I'd expect that m1 starts first and includes the measurement for the full duration of m2, that m2 includes the measurement for the full duration of m3, and so on. To do that you need to start them in the specified order and then finish them in the reverse order.
Does that answer the question?
-Ondrej
measurement.finish()
for measurement in recipe_conf.measurements:
measurement_results = measurement.collect_results()
results.add_measurement_results(
measurement, measurement_results)
return results
Thu, Nov 15, 2018 at 10:30:50AM CET, olichtne@redhat.com wrote:
On Thu, Nov 15, 2018 at 10:23:42AM +0100, Jan Tluka wrote:
Wed, Nov 14, 2018 at 04:04:46PM CET, olichtne@redhat.com wrote:
+class Recipe(BaseRecipe):
- def perf_test(self, recipe_conf):
results = RecipeResults(recipe_conf)
for i in range(recipe_conf.iterations):
run_results = []
for measurement in recipe_conf.measurements:
measurement.start()
for measurement in reversed(recipe_conf.measurements):
I don't understand why it needs to be reversed here.
If I start the measurements as: m1 m2 m3 m4 I'd expect to finish them in the same order.
Please explain.
Start and finish aren't perfectly synchronized. Specifying an order (m1 m2 m3) makes the tester (at least in my case) feel that they'll start at the same time, or at worst in the specified order. In that case I'd expect that m1 starts first and includes the measurement for the full duration of m2, that m2 includes the measurement for the full duration of m3, and so on. To do that you need to start them in the specified order and then finish them in the reverse order.
Does that answer the question?
-Ondrej
Not really :-)
So if we consider the "worst case":
                 reversed finish call
                          |
m1.start() ---------------|-----   m1.finish()
   m2.start() ------------|--      m2.finish()
      m3.start() ---------|        m3.finish()
So m1 lasts much longer than m3. Ideally we want them to last the same time, no?
measurement.finish()
for measurement in recipe_conf.measurements:
measurement_results = measurement.collect_results()
results.add_measurement_results(
measurement, measurement_results)
return results
On Thu, Nov 15, 2018 at 11:38:46AM +0100, Jan Tluka wrote:
Thu, Nov 15, 2018 at 10:30:50AM CET, olichtne@redhat.com wrote:
On Thu, Nov 15, 2018 at 10:23:42AM +0100, Jan Tluka wrote:
Wed, Nov 14, 2018 at 04:04:46PM CET, olichtne@redhat.com wrote:
+class Recipe(BaseRecipe):
+    def perf_test(self, recipe_conf):
+        results = RecipeResults(recipe_conf)
+
+        for i in range(recipe_conf.iterations):
+            run_results = []
+            for measurement in recipe_conf.measurements:
+                measurement.start()
+            for measurement in reversed(recipe_conf.measurements):
I don't understand why it needs to be reversed here.
If I start the measurements as: m1 m2 m3 m4 I'd expect to finish them in the same order.
Please explain.
Start and finish aren't perfectly synchronized. Specifying an order (m1 m2 m3) makes the tester (at least in my case) expect that they'll start at the same time, or at worst in the specified order. In that case I'd expect m1 to start first and to include the measurement for the full duration of m2, and m2 to include the measurement for the full duration of m3, and so on. To do that you need to start them in the specified order and then finish them in the reverse order.
Does that answer the question?
-Ondrej
Not really :-)
So if we consider the "worst case":
           reversed finish call
                    |
m1.start() ----------|     ----- m1.finish()
m2.start() -------|        -- m2.finish()
m3.start() ----|           m3.finish()
So m1 lasts much longer than m3. Ideally we want them to last the same time, no?
Well, it actually depends on how you look at it, you can have multiple types of measurements:
* measure from now, until I say so
* measure from now, for this long (e.g. for 60 seconds)
* probably others I haven't thought of, but these are the common ones that we use
the second one still works the same as before:
m1.start(10seconds) -----> this will finish 10 seconds from its start
m2.start(60seconds) -----> this will finish 60 seconds from its start
so if m1 started at 00:00 and m2 started at 00:05 (m1 start takes a long time for example)
then m1 finishes at 00:10-00:15 (depending on when the 10s actually started counting) and m2 finishes at 01:05
then we call:
m2.finish()
m1.finish()
and this just waits for each measurement to report that it's finished. Yes, the waiting happens in a specific order, but the order doesn't influence when the measurements finish - they finish based on the configuration of how long to measure for. Finish just makes sure they don't take too long and time out (because of an implementation bug, for example).
In the first case, the measurement measures "forever", and is stopped by an action of the controller - when the tester says to stop measuring. This means there's a different event that we wait for. And in that case it makes sense that when you specify:
m1.start()
m2.start()
that the time period of the m1 measurement includes the time period m2 is running in. YES, you ideally want to run everything for the same period of time, but since you really CAN'T synchronize that precisely, in my head it makes sense to have the intervals nicely organized:
m1.start() m2.start() m2.finish() m1.finish()
results in:
<--------------------------------------->  #m1 measurement time interval
     <------------------------------->     #m2 measurement time interval

whereas:
m1.start() m2.start() m1.finish() m2.finish()
results in:
<------------------------->                #m1 measurement time interval
     <------------------------------->     #m2 measurement time interval
Also... the start time and the finish time don't have to be identical. And the start time and finish time of different measurements can also be different.
And of course, our actual use case is a mix of the 2 types:
m1 is a "forever" measurement sampling /proc/stat
m2 is a time restricted measurement of a network flow (for 60 seconds)
So what you want to end up with is a complete measurement of the network flow, and this dictates what time interval is relevant for any other measurement - every other measurement time interval should be a superset of the network flow measurement time interval.
-Ondrej
Thu, Nov 15, 2018 at 12:04:22PM CET, olichtne@redhat.com wrote:
On Thu, Nov 15, 2018 at 11:38:46AM +0100, Jan Tluka wrote:
Thu, Nov 15, 2018 at 10:30:50AM CET, olichtne@redhat.com wrote:
On Thu, Nov 15, 2018 at 10:23:42AM +0100, Jan Tluka wrote:
Wed, Nov 14, 2018 at 04:04:46PM CET, olichtne@redhat.com wrote:
+class Recipe(BaseRecipe):
+    def perf_test(self, recipe_conf):
+        results = RecipeResults(recipe_conf)
+
+        for i in range(recipe_conf.iterations):
+            run_results = []
+            for measurement in recipe_conf.measurements:
+                measurement.start()
+            for measurement in reversed(recipe_conf.measurements):
I don't understand why it needs to be reversed here.
If I start the measurements as: m1 m2 m3 m4 I'd expect to finish them in the same order.
Please explain.
Start and finish aren't perfectly synchronized. Specifying an order (m1 m2 m3) makes the tester (at least in my case) expect that they'll start at the same time, or at worst in the specified order. In that case I'd expect m1 to start first and to include the measurement for the full duration of m2, and m2 to include the measurement for the full duration of m3, and so on. To do that you need to start them in the specified order and then finish them in the reverse order.
Does that answer the question?
-Ondrej
Not really :-)
So if we consider the "worst case":
           reversed finish call
                    |
m1.start() ----------|     ----- m1.finish()
m2.start() -------|        -- m2.finish()
m3.start() ----|           m3.finish()
So m1 lasts much longer than m3. Ideally we want them to last the same time, no?
Well, it actually depends on how you look at it, you can have multiple types of measurements:
- measure from now, until I say so
- measure from now, for this long (e.g. for 60 seconds)
- probably others I haven't thought of, but these are the common ones that we use
the second one still works the same as before:
m1.start(10seconds) -----> this will finish 10 seconds from its start
m2.start(60seconds) -----> this will finish 60 seconds from its start
so if m1 started at 00:00 and m2 started at 00:05 (m1 start takes a long time for example)
then m1 finishes at 00:10-00:15 (depending on when the 10s actually started counting) and m2 finishes at 01:05
then we call:
m2.finish()
m1.finish()
and this just waits for each measurement to report that it's finished. Yes, the waiting happens in a specific order, but the order doesn't influence when the measurements finish - they finish based on the configuration of how long to measure for. Finish just makes sure they don't take too long and time out (because of an implementation bug, for example).
Ok, I finally understood how this works. I got confused because I thought that each of the measurements was meant to be an individual network stream test, where I probably wouldn't want the iperf started last to be the one finished first (just to let each of them run for the same amount of time).
I looked at BaseEnrtRecipe and I see that the first measurement (m1) is a CPU measurement (not limited by a timeout, it runs indefinitely) and the second (m2) collects data from the network performance tool.
So it makes sense to have m2 finished before m1.
Still, it's implicitly expected here that the first measurement is the "driving" (and time-unlimited) one, and that confused me.
In the first case, the measurement measures "forever", and is stopped by an action of the controller - when the tester says to stop measuring. This means there's a different event that we wait for. And in that case it makes sense that when you specify:
m1.start()
m2.start()
that the time period of the m1 measurement includes the time period m2 is running in. YES, you ideally want to run everything for the same period of time, but since you really CAN'T synchronize that precisely, in my head it makes sense to have the intervals nicely organized:
m1.start() m2.start() m2.finish() m1.finish()
results in:
<--------------------------------------->  #m1 measurement time interval
     <------------------------------->     #m2 measurement time interval

whereas:
m1.start() m2.start() m1.finish() m2.finish()
results in:
<------------------------->                #m1 measurement time interval
     <------------------------------->     #m2 measurement time interval
Also... the start time and the finish time don't have to be identical. And the start time and finish time of different measurements can also be different.
And of course, our actual use case is a mix of the 2 types:
m1 is a "forever" measurement sampling /proc/stat
m2 is a time restricted measurement of a network flow (for 60 seconds)
So what you want to end up with is a complete measurement of the network flow, and this dictates what time interval is relevant for any other measurement - every other measurement time interval should be a superset of the network flow measurement time interval.
-Ondrej
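To make the start/finish ordering concrete, here is a minimal, self-contained sketch of the two measurement types discussed above (MockMeasurement is hypothetical, not part of LNST), showing how starting in order and finishing in reverse order nests the time intervals:

    import time

    class MockMeasurement(object):
        """Hypothetical stand-in for a measurement; duration=None means
        measure until finish() is called, otherwise measure for a fixed time."""
        def __init__(self, name, duration=None):
            self.name = name
            self.duration = duration
            self._started = None

        def start(self):
            self._started = time.time()
            print("%s started" % self.name)

        def finish(self):
            if self.duration is not None:
                # time-limited measurement: wait until its own duration elapses
                remaining = self.duration - (time.time() - self._started)
                if remaining > 0:
                    time.sleep(remaining)
            print("%s finished after %.1fs" % (self.name, time.time() - self._started))

    measurements = [MockMeasurement("cpu monitor"),
                    MockMeasurement("iperf flow", duration=2)]

    for m in measurements:
        m.start()
    for m in reversed(measurements):
        m.finish()
    # the "iperf flow" interval ends up fully nested inside the "cpu monitor" interval

With a 2 second flow, the cpu monitor's interval is guaranteed to be a superset of the flow's interval, which is exactly the property described above.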
From: Ondrej Lichtner olichtne@redhat.com
This is the second part of the refactorization of the PerfAndEvaluate recipe workflow. In generic terms it introduces a new package that will store a class hierarchy for various Measurement types and implementations.
At the base level there is the BaseMeasurement class and module that defines the interface that all the other classes have to implement. This interface is understood and relied upon by the lnst.RecipeCommon.Perf.Recipe class that uses it.
The refactorization includes a move+rename of the IperfMeasurementTool and TRexMeasurementTool into the new IperfFlowMeasurement and TRexMeasurement classes/modules. And the addition of the new StatCPUMeasurement class that uses the CPUStatMonitor test module to measure cpu utilization.
Finally these changes are added to the BaseEnrtRecipe so that everything stays working.
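To illustrate what the interface amounts to in practice, a new measurement type would be shaped roughly like this (a minimal sketch based on the BaseMeasurement and Perf.Recipe code in the diff below, not an excerpt from the patch):

    from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement

    class ExampleMeasurement(BaseMeasurement):
        # self._conf is whatever the recipe passed in (a list of hosts, flows, ...)
        def start(self):
            # start background jobs for everything in self._conf
            pass

        def finish(self):
            # wait for / stop the background jobs
            pass

        def collect_results(self):
            # return a list of per-item results objects
            return []

        @classmethod
        def aggregate_results(cls, old, new):
            # merge results from repeated iterations
            return new

        @classmethod
        def report_results(cls, recipe, results):
            recipe.add_result(True, "example measurement results")

        @classmethod
        def evaluate_results(cls, recipe, results):
            recipe.add_result(True, "example measurement evaluation")

The Perf.Recipe.perf_test() loop only relies on start(), finish(), collect_results() and aggregate_results(); report_results() and evaluate_results() are meant to operate on the aggregated results afterwards.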
v2:
* fixed typo in IperfFlowMeasurement - the unit for cpu utilization returned by Iperf should be "cpu_percent".
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/RecipeCommon/IperfMeasurementTool.py     |  83 -------
 .../Perf/Measurements/BaseCPUMeasurement.py   | 109 ++++++++++
 .../Perf/Measurements/BaseFlowMeasurement.py  | 202 ++++++++++++++++++
 .../Perf/Measurements/BaseMeasurement.py      |  29 +++
 .../Perf/Measurements/IperfFlowMeasurement.py | 157 ++++++++++++++
 .../Perf/Measurements/MeasurementError.py     |   4 +
 .../Perf/Measurements/StatCPUMeasurement.py   |  88 ++++++++
 .../Measurements/TRexMeasurement.py}          |   0
 .../Perf/Measurements/__init__.py             |   3 +
 lnst/Recipes/ENRT/BaseEnrtRecipe.py           |  27 ++-
 10 files changed, 614 insertions(+), 88 deletions(-)
 delete mode 100644 lnst/RecipeCommon/IperfMeasurementTool.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/MeasurementError.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
 rename lnst/RecipeCommon/{TRexMeasurementTool.py => Perf/Measurements/TRexMeasurement.py} (100%)
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/__init__.py
diff --git a/lnst/RecipeCommon/IperfMeasurementTool.py b/lnst/RecipeCommon/IperfMeasurementTool.py deleted file mode 100644 index 9f2e49e..0000000 --- a/lnst/RecipeCommon/IperfMeasurementTool.py +++ /dev/null @@ -1,83 +0,0 @@ -import time -import signal -from lnst.Common.IpAddress import ipaddress -from lnst.Controller.Recipe import RecipeError -from lnst.Controller.RecipeResults import ResultLevel -from lnst.RecipeCommon.Perf import PerfConf, PerfMeasurementTool -from lnst.RecipeCommon.PerfResult import PerfInterval, StreamPerf -from lnst.RecipeCommon.PerfResult import MultiStreamPerf -from lnst.Tests.Iperf import IperfClient, IperfServer - -class IperfMeasurementTool(PerfMeasurementTool): - @staticmethod - def perf_measure(perf_conf): - _iperf_duration_overhead = 5 - - server_params = dict(bind = ipaddress(perf_conf.receiver_bind), - oneoff = True) - - client_params = dict(server = server_params["bind"], - duration = perf_conf.duration, - parallel = perf_conf.streams) - - if perf_conf.test_type == "tcp_stream": - #tcp stream is the default for iperf3 - pass - elif perf_conf.test_type == "udp_stream": - client_params["udp"] = True - elif perf_conf.test_type == "sctp_stream": - client_params["sctp"] = True - else: - raise RecipeError("Unsupported test type '{}'" - .format(perf_conf.test_type)) - - server = IperfServer(**server_params) - client = IperfClient(**client_params) - - server_host = perf_conf.receiver - client_host = perf_conf.generator - result = None - try: - server_job = server_host.run(server, bg=True, - job_level=ResultLevel.NORMAL) - - #wait for server to start, TODO can this be improved? - time.sleep(2) - - duration = client.params.duration + _iperf_duration_overhead - client_job = client_host.run(client, timeout=duration, - job_level=ResultLevel.NORMAL) - - server_job.wait(timeout=5) - finally: - if client_job and not client_job.finished: - client_job.kill() - - if server_job and not server_job.finished: - server_job.kill() - - #TODO return something if not passed - if client_job.passed: - client_result = MultiStreamPerf() - for i in client_job.result["data"]["end"]["streams"]: - client_result.append(StreamPerf()) - - for interval in client_job.result["data"]["intervals"]: - for i, stream in enumerate(interval["streams"]): - client_result[i].append(PerfInterval(stream["bytes"] * 8, - stream["seconds"], - "bits")) - - #TODO return something if not passed - if server_job.passed: - server_result = MultiStreamPerf() - for i in server_job.result["data"]["end"]["streams"]: - server_result.append(StreamPerf()) - - for interval in server_job.result["data"]["intervals"]: - for i, stream in enumerate(interval["streams"]): - server_result[i].append(PerfInterval(stream["bytes"] * 8, - stream["seconds"], - "bits")) - - return client_result, server_result diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py new file mode 100644 index 0000000..2507f3c --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py @@ -0,0 +1,109 @@ +import signal +from lnst.RecipeCommon.Perf.Measurements.MeasurementError import MeasurementError +from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult + +class CPUMeasurementResults(object): + def __init__(self, host, cpu): + self._host = host + self._cpu = cpu + + @property + def host(self): + return self._host + + @property + def cpu(self): + return self._cpu + + @property + 
def utilization(self): + raise NotImplementedError() + +class AggregatedCPUMeasurementResults(CPUMeasurementResults): + def __init__(self, host, cpu): + super(AggregatedCPUMeasurementResults, self).__init__(host, cpu) + self._individual_results = [] + + @property + def individual_results(self): + return self._individual_results + + @property + def utilization(self): + return SequentialPerfResult([i.utilization + for i in self.individual_results]) + + def add_results(self, results): + if results is None: + return + elif isinstance(results, AggregatedCPUMeasurementResults): + self.individual_results.extend(results.individual_results) + elif isinstance(results, CPUMeasurementResults): + self.individual_results.append(results) + else: + raise MeasurementError("Adding incorrect results.") + +class BaseCPUMeasurement(BaseMeasurement): + @classmethod + def aggregate_results(cls, old, new): + aggregated = [] + if old is None: + old = [None] * len(new) + for old_measurements, new_measurements in zip(old, new): + aggregated.append(cls._aggregate_hostcpu_results( + old_measurements, new_measurements)) + return aggregated + + @classmethod + def report_results(cls, recipe, results): + results_by_host = cls._divide_results_by_host(results) + for host_results in results_by_host.values(): + cls._report_host_results(recipe, host_results) + + @classmethod + def evaluate_results(cls, recipe, results): + #TODO split off into a separate evaluator class + for result in results: + recipe.add_result(True, + "Base CPU evaluation for host {}, cpu {}".format( + result.host.hostid, result.cpu)) + + @classmethod + def _divide_results_by_host(cls, results): + results_by_host = {} + for result in results: + if result.host not in results_by_host: + results_by_host[result.host] = [] + results_by_host[result.host].append(result) + return results_by_host + + @classmethod + def _report_host_results(cls, recipe, results): + if not len(results): + return + + cpu_data = {} + desc = ["CPU Utilization on host {host}:".format( + host=results[0].host.hostid)] + for result in results: + utilization = result.utilization + cpu_data[result.cpu] = utilization + desc.append("cpu '{cpu}': {average:.2f} +-{deviation:.2f} {unit} per second" + .format(cpu=result.cpu, + average=utilization.average, + deviation=utilization.std_deviation, + unit=utilization.unit)) + + recipe.add_result(True, "\n".join(desc), data=cpu_data) + + @classmethod + def _aggregate_hostcpu_results(cls, old, new): + if (old is not None and + (old.host is not new.host or old.cpu != new.cpu)): + raise MeasurementError("Aggregating incompatible CPU Results") + + new_result = AggregatedCPUMeasurementResults(new.host, new.cpu) + new_result.add_results(old) + new_result.add_results(new) + return new_result diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py new file mode 100644 index 0000000..203e104 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py @@ -0,0 +1,202 @@ +import signal +from lnst.RecipeCommon.Perf.Measurements.MeasurementError import MeasurementError +from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult + +class Flow(object): + def __init__(self, + type, + generator, generator_bind, + receiver, receiver_bind, + msg_size, duration, parallel_streams): + self._type = type + + self._generator = generator + self._generator_bind = generator_bind + self._receiver = 
receiver + self._receiver_bind = receiver_bind + + self._msg_size = msg_size + self._duration = duration + self._parallel_streams = parallel_streams + + @property + def type(self): + return self._type + + @property + def generator(self): + return self._generator + + @property + def generator_bind(self): + return self._generator_bind + + @property + def receiver(self): + return self._receiver + + @property + def receiver_bind(self): + return self._receiver_bind + + @property + def msg_size(self): + return self._msg_size + + @property + def duration(self): + return self._duration + + @property + def parallel_streams(self): + return self._parallel_streams + +class FlowMeasurementResults(object): + def __init__(self, flow): + self._flow = flow + self._generator_results = None + self._generator_cpu_stats = None + self._receiver_results = None + self._receiver_cpu_stats = None + + @property + def flow(self): + return self._flow + + @property + def generator_results(self): + return self._generator_results + + @generator_results.setter + def generator_results(self, value): + self._generator_results = value + + @property + def generator_cpu_stats(self): + return self._generator_cpu_stats + + @generator_cpu_stats.setter + def generator_cpu_stats(self, value): + self._generator_cpu_stats = value + + @property + def receiver_results(self): + return self._receiver_results + + @receiver_results.setter + def receiver_results(self, value): + self._receiver_results = value + + @property + def receiver_cpu_stats(self): + return self._receiver_cpu_stats + + @receiver_cpu_stats.setter + def receiver_cpu_stats(self, value): + self._receiver_cpu_stats = value + +class AggregatedFlowMeasurementResults(FlowMeasurementResults): + def __init__(self, flow): + self._flow = flow + self._generator_results = SequentialPerfResult() + self._generator_cpu_stats = SequentialPerfResult() + self._receiver_results = SequentialPerfResult() + self._receiver_cpu_stats = SequentialPerfResult() + self._individual_results = [] + + @property + def individual_results(self): + return self._individual_results + + def add_results(self, results): + if results is None: + return + elif isinstance(results, AggregatedFlowMeasurementResults): + self.individual_results.extend(results.individual_results) + self.generator_results.extend(results.generator_results) + self.generator_cpu_stats.extend(results.generator_cpu_stats) + self.receiver_results.extend(results.receiver_results) + self.receiver_cpu_stats.extend(results.receiver_cpu_stats) + elif isinstance(results, FlowMeasurementResults): + self.individual_results.append(results) + self.generator_results.append(results.generator_results) + self.generator_cpu_stats.append(results.generator_cpu_stats) + self.receiver_results.append(results.receiver_results) + self.receiver_cpu_stats.append(results.receiver_cpu_stats) + else: + raise MeasurementError("Adding incorrect results.") + +class BaseFlowMeasurement(BaseMeasurement): + @classmethod + def report_results(cls, recipe, results): + for flow_results in results: + cls._report_flow_results(recipe, flow_results) + + @classmethod + def evaluate_results(cls, recipe, results): + #TODO split off into a separate evaluator class + for flow_results in results: + if flow_results.generator_results.average > 0: + recipe.add_result(True, "Generator reported non-zero throughput") + else: + recipe.add_result(False, "Generator reported zero throughput") + + if flow_results.receiver_results.average > 0: + recipe.add_result(True, "Receiver reported non-zero 
throughput") + else: + recipe.add_result(False, "Receiver reported zero throughput") + + @classmethod + def _report_flow_results(cls, recipe, flow_results): + generator = flow_results.generator_results + generator_cpu = flow_results.generator_cpu_stats + receiver = flow_results.receiver_results + receiver_cpu = flow_results.receiver_cpu_stats + + desc = [] + desc.append("Generator measured throughput: {tput:.2f} +-{deviation:.2f}({percentage:.2f}%) {unit} per second." + .format(tput=generator.average, + deviation=generator.std_deviation, + percentage=(generator.std_deviation/generator.average) * 100, + unit=generator.unit)) + desc.append("Generator process CPU data: {cpu:.2f} +-{cpu_deviation:.2f} {cpu_unit} per second." + .format(cpu=generator_cpu.average, + cpu_deviation=generator_cpu.std_deviation, + cpu_unit=generator_cpu.unit)) + desc.append("Receiver measured throughput: {tput:.2f} +-{deviation:.2f}({percentage:.2}%) {unit} per second." + .format(tput=receiver.average, + deviation=receiver.std_deviation, + percentage=(receiver.std_deviation/receiver.average) * 100, + unit=receiver.unit)) + desc.append("Receiver process CPU data: {cpu:.2f} +-{cpu_deviation:.2f} {cpu_unit} per second." + .format(cpu=receiver_cpu.average, + cpu_deviation=receiver_cpu.std_deviation, + cpu_unit=receiver_cpu.unit)) + + #TODO add flow description + recipe.add_result(True, "\n".join(desc), data = dict( + generator_flow_data=generator, + generator_cpu_data=generator_cpu, + receiver_flow_data=receiver, + receiver_cpu_data=receiver_cpu)) + + @classmethod + def aggregate_results(cls, old, new): + aggregated = [] + if old is None: + old = [None] * len(new) + for old_flow, new_flow in zip(old, new): + aggregated.append(cls._aggregate_flows(old_flow, new_flow)) + return aggregated + + @classmethod + def _aggregate_flows(cls, old_flow, new_flow): + if old_flow is not None and old_flow.flow is not new_flow.flow: + raise MeasurementError("Aggregating incompatible Flows") + + new_result = AggregatedFlowMeasurementResults(new_flow.flow) + + new_result.add_results(old_flow) + new_result.add_results(new_flow) + return new_result diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py new file mode 100644 index 0000000..8059308 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py @@ -0,0 +1,29 @@ +class BaseMeasurement(object): + def __init__(self, conf): + self._conf = conf + + @property + def conf(self): + return self._conf + + def start(self): + raise NotImplementedError() + + def finish(self): + raise NotImplementedError() + + def collect_results(self): + raise NotImplementedError() + + @classmethod + def report_results(recipe, results): + raise NotImplementedError() + + @classmethod + def evaluate_results(recipe, results): + #TODO split off into separate evaluator classes + raise NotImplementedError() + + @classmethod + def aggregate_results(first, second): + raise NotImplementedError() diff --git a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py new file mode 100644 index 0000000..c792e9d --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py @@ -0,0 +1,157 @@ +import time + +from lnst.Common.IpAddress import ipaddress + +from lnst.Controller.Recipe import RecipeError +from lnst.Controller.RecipeResults import ResultLevel + +from lnst.RecipeCommon.Perf.Results import PerfInterval +from lnst.RecipeCommon.Perf.Results import 
SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import BaseFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import FlowMeasurementResults + +from lnst.Tests.Iperf import IperfClient, IperfServer + +class IperfFlowMeasurement(BaseFlowMeasurement): + def __init__(self, *args): + super(IperfFlowMeasurement, self).__init__(*args) + self._running_measurements = [] + self._finished_measurements = [] + + def start(self): + if len(self._running_measurements) > 0: + raise MeasurementError("Measurement already running!") + + test_flows = self._prepare_test_flows(self._conf) + + result = None + for flow in test_flows: + flow.server_job.start(bg=True) + + for flow in test_flows: + flow.client_job.start(bg=True) + + self._running_measurements = test_flows + + def finish(self): + test_flows = self._running_measurements + try: + for flow in test_flows: + client_iperf = flow.client_job.what + flow.client_job.wait(timeout=client_iperf.runtime_estimate()) + flow.server_job.wait(timeout=5) + finally: + for flow in test_flows: + if not flow.server_job.finished: + flow.server_job.kill() + if not flow.client_job.finished: + flow.client_job.kill() + + self._running_measurements = [] + self._finished_measurements = test_flows + + def collect_results(self): + test_flows = self._finished_measurements + + results = [] + for test_flow in test_flows: + flow_results = FlowMeasurementResults(test_flow.flow) + flow_results.generator_results = self._parse_job_streams( + test_flow.client_job) + flow_results.generator_cpu_stats = self._parse_job_cpu( + test_flow.client_job) + + flow_results.receiver_results = self._parse_job_streams( + test_flow.server_job) + flow_results.receiver_cpu_stats = self._parse_job_cpu( + test_flow.server_job) + + results.append(flow_results) + + return results + + def _prepare_test_flows(self, flows): + test_flows = [] + for flow in flows: + server_job = self._prepare_server(flow) + client_job = self._prepare_client(flow) + test_flow = NetworkFlowTest(flow, server_job, client_job) + test_flows.append(test_flow) + return test_flows + + def _prepare_server(self, flow): + host = flow.receiver + server_params = dict(bind = ipaddress(flow.receiver_bind), + oneoff = True) + + return host.prepare_job(IperfServer(**server_params), + job_level=ResultLevel.NORMAL) + + def _prepare_client(self, flow): + host = flow.generator + client_params = dict(server = ipaddress(flow.receiver_bind), + duration = flow.duration) + + if flow.type == "tcp_stream": + #tcp stream is the default for iperf3 + pass + elif flow.type == "udp_stream": + client_params["udp"] = True + elif flow.type == "sctp_stream": + client_params["sctp"] = True + else: + raise RecipeError("Unsupported flow type '{}'".format(flow.type)) + + if flow.parallel_streams > 1: + client_params["parallel"] = flow.parallel_streams + + if flow.msg_size: + client_params["blksize"] = flow.msg_size + + return host.prepare_job(IperfClient(**client_params), + job_level=ResultLevel.NORMAL) + + def _parse_job_streams(self, job): + result = ParallelPerfResult() + if not job.passed: + result.append(PerfInterval(0, 0, "bits")) + else: + for i in job.result["data"]["end"]["streams"]: + result.append(SequentialPerfResult()) + + for interval in job.result["data"]["intervals"]: + for i, stream in enumerate(interval["streams"]): + result[i].append(PerfInterval(stream["bytes"] * 8, + stream["seconds"], + "bits")) + return result + + def 
_parse_job_cpu(self, job): + if not job.passed: + return PerfInterval(0, 0, "cpu_percent") + else: + cpu_percent = job.result["data"]["end"]["cpu_utilization_percent"]["host_total"] + return PerfInterval(cpu_percent, 1, "cpu_percent") + +class NetworkFlowTest(object): + def __init__(self, flow, server_job, client_job): + self._flow = flow + self._server_job = server_job + self._client_job = client_job + + @property + def flow(self): + return self._flow + + @property + def server_job(self): + return self._server_job + + @property + def client_job(self): + return self._client_job + + @property + def duration(self): + return self._flow.duration diff --git a/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py b/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py new file mode 100644 index 0000000..66ed168 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py @@ -0,0 +1,4 @@ +from lnst.Common.LnstError import LnstError + +class MeasurementError(LnstError): + pass diff --git a/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py new file mode 100644 index 0000000..14e7f73 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py @@ -0,0 +1,88 @@ +import signal + +from lnst.RecipeCommon.Perf.Results import PerfInterval +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult +from lnst.RecipeCommon.Perf.Measurements.BaseCPUMeasurement import BaseCPUMeasurement +from lnst.RecipeCommon.Perf.Measurements.BaseCPUMeasurement import CPUMeasurementResults + +from lnst.Tests.CPUStatMonitor import CPUStatMonitor + +class StatCPUMeasurementResults(CPUMeasurementResults): + def __init__(self, *args): + super(StatCPUMeasurementResults, self).__init__(*args) + self._data = {} + + def update_intervals(self, intervals): + for key, interval in intervals.items(): + if key not in self._data: + self._data[key] = SequentialPerfResult() + self._data[key].append(interval) + + @property + def utilization(self): + return ParallelPerfResult([self._data["user"], self._data["nice"], + self._data["system"], self._data["irq"], self._data["softirq"], + self._data["steal"]]) + +class StatCPUMeasurement(BaseCPUMeasurement): + def __init__(self, *args): + super(StatCPUMeasurement, self).__init__(*args) + self._running_measurements = [] + self._finished_measurements = [] + + def start(self): + jobs = [] + for host in self._conf: + jobs.append(host.run(CPUStatMonitor(interval=1000),bg=True)) + self._running_measurements = jobs + + def finish(self): + jobs = self._running_measurements + try: + for job in jobs: + job.kill(signal.SIGINT) + job.wait() + finally: + for job in jobs: + if not job.finished: + job.kill() + + self._running_measurements = [] + self._finished_measurements = jobs + + def collect_results(self): + results = [] + for job in self._finished_measurements: + job_results = self._process_job(job) + results.extend(job_results) + + return results + + def _process_job(self, job): + host = job.host + job_results = {} + for sample in job.result["data"]: + parsed_sample = self._parse_sample(sample) + + for cpu, cpu_intervals in parsed_sample.items(): + if cpu not in job_results: + job_results[cpu] = StatCPUMeasurementResults(host, cpu) + cpu_results = job_results[cpu] + cpu_results.update_intervals(cpu_intervals) + + return job_results.values() + + def _parse_sample(self, sample): + result = {} + duration = sample["duration"] + for key, value 
in sample.items(): + if key.startswith("cpu"): + result[key] = self._create_cpu_intervals(duration, value) + return result + + def _create_cpu_intervals(self, duration, cpu_intervals): + result = {} + for key, value in cpu_intervals.items(): + result[key] = PerfInterval(value, duration, "time units") + return result diff --git a/lnst/RecipeCommon/TRexMeasurementTool.py b/lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py similarity index 100% rename from lnst/RecipeCommon/TRexMeasurementTool.py rename to lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py diff --git a/lnst/RecipeCommon/Perf/Measurements/__init__.py b/lnst/RecipeCommon/Perf/Measurements/__init__.py new file mode 100644 index 0000000..781e641 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/__init__.py @@ -0,0 +1,3 @@ +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import Flow +from lnst.RecipeCommon.Perf.Measurements.IperfFlowMeasurement import IperfFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements.StatCPUMeasurement import StatCPUMeasurement diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py index a26d999..d7d1aec 100644 --- a/lnst/Recipes/ENRT/BaseEnrtRecipe.py +++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py @@ -1,4 +1,3 @@ - from lnst.Common.LnstError import LnstError from lnst.Common.Parameters import Param, IntParam, StrParam, BoolParam from lnst.Common.IpAddress import AF_INET, AF_INET6 @@ -8,7 +7,9 @@ from lnst.Controller.Recipe import BaseRecipe from lnst.RecipeCommon.Ping import PingTestAndEvaluate, PingConf from lnst.RecipeCommon.Perf.Recipe import Recipe as PerfRecipe from lnst.RecipeCommon.Perf.Recipe import RecipeConf as PerfRecipeConf -from lnst.RecipeCommon.IperfMeasurementTool import IperfMeasurementTool +from lnst.RecipeCommon.Perf.Measurements import Flow as PerfFlow +from lnst.RecipeCommon.Perf.Measurements import IperfFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements import StatCPUMeasurement
 class EnrtConfiguration(object):
     def __init__(self):
@@ -79,14 +80,16 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe):

     perf_duration = IntParam(default=60)
     perf_iterations = IntParam(default=5)
-    perf_streams = IntParam(default=1)
+    perf_parallel_streams = IntParam(default=1)
     perf_msg_size = IntParam(default=123)

     perf_usr_comment = StrParam(default="")

     perf_max_deviation = IntParam(default=10) #TODO required?

-    perf_tool = Param(default=IperfMeasurementTool)
+    net_perf_tool = Param(default=IperfFlowMeasurement)
+
+    cpu_perf_tool = Param(default=StatCPUMeasurement)

     def test(self):
         main_config = self.test_wide_configuration()
@@ -188,8 +191,22 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe):
             server_bind = server_nic.ips_filter(family=family)[0]

             for perf_test in self.params.perf_tests:
+                flow = PerfFlow(
+                    type = perf_test,
+                    generator = client_netns,
+                    generator_bind = client_bind,
+                    receiver = server_netns,
+                    receiver_bind = server_bind,
+                    msg_size = self.params.perf_msg_size,
+                    duration = self.params.perf_duration,
+                    parallel_streams = self.params.perf_parallel_streams)
+
+                flow_measurement = self.params.net_perf_tool([flow])
                 yield PerfRecipeConf(
-                    measurements=[ ],
+                    measurements=[
+                        self.params.cpu_perf_tool([client_netns, server_netns]),
+                        flow_measurement
+                    ],
                     iterations=self.params.perf_iterations)

def _pin_dev_interrupts(self, dev, cpu):
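For reference, the new parameters are what let a derived recipe swap in a different measurement implementation; a rough sketch of the intent (MyCPUMeasurement is hypothetical, shown only to illustrate the Param defaults above):

    from lnst.Common.Parameters import Param
    from lnst.Recipes.ENRT.BaseEnrtRecipe import BaseEnrtRecipe
    from lnst.RecipeCommon.Perf.Measurements import IperfFlowMeasurement

    class MyEnrtRecipe(BaseEnrtRecipe):
        # keep iperf for the flow measurement...
        net_perf_tool = Param(default=IperfFlowMeasurement)
        # ...but any class implementing the BaseMeasurement interface could be
        # plugged in here instead, e.g.:
        # cpu_perf_tool = Param(default=MyCPUMeasurement)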
Wed, Nov 14, 2018 at 04:04:47PM CET, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
This is the second part of the refactorization of the PerfAndEvaluate recipe workflow. In generic terms it introduces a new package that will store a class hierarchy for various Measurement types and implementations.
At the base level there is the BaseMeasurement class and module that defines the interface that all the other classes have to implement. This interface is understood and relied upon by the lnst.RecipeCommon.Perf.Recipe class that uses it.
The refactorization includes a move+rename of the IperfMeasurementTool and TRexMeasurementTool into the new IperfFlowMeasurement and TRexMeasurement classes/modules. And the addition of the new StatCPUMeasurement class that uses the CPUStatMonitor test module to measure cpu utilization.
Finally these changes are added to the BaseEnrtRecipe so that everything stays working.
v2:
- fixed typo in IperfFlowMeasurement - the unit for cpu utilization returned by Iperf should be "cpu_percent".
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
 lnst/RecipeCommon/IperfMeasurementTool.py     |  83 -------
 .../Perf/Measurements/BaseCPUMeasurement.py   | 109 ++++++++++
 .../Perf/Measurements/BaseFlowMeasurement.py  | 202 ++++++++++++++++++
 .../Perf/Measurements/BaseMeasurement.py      |  29 +++
 .../Perf/Measurements/IperfFlowMeasurement.py | 157 ++++++++++++++
 .../Perf/Measurements/MeasurementError.py     |   4 +
 .../Perf/Measurements/StatCPUMeasurement.py   |  88 ++++++++
 .../Measurements/TRexMeasurement.py}          |   0
 .../Perf/Measurements/__init__.py             |   3 +
 lnst/Recipes/ENRT/BaseEnrtRecipe.py           |  27 ++-
 10 files changed, 614 insertions(+), 88 deletions(-)
 delete mode 100644 lnst/RecipeCommon/IperfMeasurementTool.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/MeasurementError.py
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
 rename lnst/RecipeCommon/{TRexMeasurementTool.py => Perf/Measurements/TRexMeasurement.py} (100%)
 create mode 100644 lnst/RecipeCommon/Perf/Measurements/__init__.py
diff --git a/lnst/RecipeCommon/IperfMeasurementTool.py b/lnst/RecipeCommon/IperfMeasurementTool.py
deleted file mode 100644
index 9f2e49e..0000000
--- a/lnst/RecipeCommon/IperfMeasurementTool.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import time
-import signal
-from lnst.Common.IpAddress import ipaddress
-from lnst.Controller.Recipe import RecipeError
-from lnst.Controller.RecipeResults import ResultLevel
-from lnst.RecipeCommon.Perf import PerfConf, PerfMeasurementTool
-from lnst.RecipeCommon.PerfResult import PerfInterval, StreamPerf
-from lnst.RecipeCommon.PerfResult import MultiStreamPerf
-from lnst.Tests.Iperf import IperfClient, IperfServer
-class IperfMeasurementTool(PerfMeasurementTool):
- @staticmethod
- def perf_measure(perf_conf):
_iperf_duration_overhead = 5
server_params = dict(bind = ipaddress(perf_conf.receiver_bind),
oneoff = True)
client_params = dict(server = server_params["bind"],
duration = perf_conf.duration,
parallel = perf_conf.streams)
if perf_conf.test_type == "tcp_stream":
#tcp stream is the default for iperf3
pass
elif perf_conf.test_type == "udp_stream":
client_params["udp"] = True
elif perf_conf.test_type == "sctp_stream":
client_params["sctp"] = True
else:
raise RecipeError("Unsupported test type '{}'"
.format(perf_conf.test_type))
server = IperfServer(**server_params)
client = IperfClient(**client_params)
server_host = perf_conf.receiver
client_host = perf_conf.generator
result = None
try:
server_job = server_host.run(server, bg=True,
job_level=ResultLevel.NORMAL)
#wait for server to start, TODO can this be improved?
time.sleep(2)
duration = client.params.duration + _iperf_duration_overhead
client_job = client_host.run(client, timeout=duration,
job_level=ResultLevel.NORMAL)
server_job.wait(timeout=5)
finally:
if client_job and not client_job.finished:
client_job.kill()
if server_job and not server_job.finished:
server_job.kill()
#TODO return something if not passed
if client_job.passed:
client_result = MultiStreamPerf()
for i in client_job.result["data"]["end"]["streams"]:
client_result.append(StreamPerf())
for interval in client_job.result["data"]["intervals"]:
for i, stream in enumerate(interval["streams"]):
client_result[i].append(PerfInterval(stream["bytes"] * 8,
stream["seconds"],
"bits"))
#TODO return something if not passed
if server_job.passed:
server_result = MultiStreamPerf()
for i in server_job.result["data"]["end"]["streams"]:
server_result.append(StreamPerf())
for interval in server_job.result["data"]["intervals"]:
for i, stream in enumerate(interval["streams"]):
server_result[i].append(PerfInterval(stream["bytes"] * 8,
stream["seconds"],
"bits"))
return client_result, server_result
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py new file mode 100644 index 0000000..2507f3c --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py @@ -0,0 +1,109 @@ +import signal +from lnst.RecipeCommon.Perf.Measurements.MeasurementError import MeasurementError +from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult
+class CPUMeasurementResults(object):
- def __init__(self, host, cpu):
self._host = host
self._cpu = cpu
- @property
- def host(self):
return self._host
- @property
- def cpu(self):
return self._cpu
- @property
- def utilization(self):
raise NotImplementedError()
+class AggregatedCPUMeasurementResults(CPUMeasurementResults):
- def __init__(self, host, cpu):
super(AggregatedCPUMeasurementResults, self).__init__(host, cpu)
self._individual_results = []
- @property
- def individual_results(self):
return self._individual_results
- @property
- def utilization(self):
return SequentialPerfResult([i.utilization
for i in self.individual_results])
- def add_results(self, results):
if results is None:
return
elif isinstance(results, AggregatedCPUMeasurementResults):
self.individual_results.extend(results.individual_results)
elif isinstance(results, CPUMeasurementResults):
self.individual_results.append(results)
else:
raise MeasurementError("Adding incorrect results.")
+class BaseCPUMeasurement(BaseMeasurement):
- @classmethod
- def aggregate_results(cls, old, new):
aggregated = []
if old is None:
old = [None] * len(new)
for old_measurements, new_measurements in zip(old, new):
aggregated.append(cls._aggregate_hostcpu_results(
old_measurements, new_measurements))
return aggregated
- @classmethod
- def report_results(cls, recipe, results):
results_by_host = cls._divide_results_by_host(results)
for host_results in results_by_host.values():
cls._report_host_results(recipe, host_results)
- @classmethod
- def evaluate_results(cls, recipe, results):
#TODO split off into a separate evaluator class
for result in results:
recipe.add_result(True,
"Base CPU evaluation for host {}, cpu {}".format(
result.host.hostid, result.cpu))
- @classmethod
- def _divide_results_by_host(cls, results):
results_by_host = {}
for result in results:
if result.host not in results_by_host:
results_by_host[result.host] = []
results_by_host[result.host].append(result)
return results_by_host
- @classmethod
- def _report_host_results(cls, recipe, results):
if not len(results):
return
cpu_data = {}
desc = ["CPU Utilization on host {host}:".format(
host=results[0].host.hostid)]
for result in results:
utilization = result.utilization
cpu_data[result.cpu] = utilization
desc.append("cpu '{cpu}': {average:.2f} +-{deviation:.2f} {unit} per second"
.format(cpu=result.cpu,
average=utilization.average,
deviation=utilization.std_deviation,
unit=utilization.unit))
recipe.add_result(True, "\n".join(desc), data=cpu_data)
- @classmethod
- def _aggregate_hostcpu_results(cls, old, new):
if (old is not None and
(old.host is not new.host or old.cpu != new.cpu)):
raise MeasurementError("Aggregating incompatible CPU Results")
new_result = AggregatedCPUMeasurementResults(new.host, new.cpu)
new_result.add_results(old)
new_result.add_results(new)
return new_result
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py new file mode 100644 index 0000000..203e104 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py @@ -0,0 +1,202 @@ +import signal +from lnst.RecipeCommon.Perf.Measurements.MeasurementError import MeasurementError +from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult
+class Flow(object):
- def __init__(self,
type,
generator, generator_bind,
receiver, receiver_bind,
msg_size, duration, parallel_streams):
self._type = type
self._generator = generator
self._generator_bind = generator_bind
self._receiver = receiver
self._receiver_bind = receiver_bind
self._msg_size = msg_size
self._duration = duration
self._parallel_streams = parallel_streams
- @property
- def type(self):
return self._type
- @property
- def generator(self):
return self._generator
- @property
- def generator_bind(self):
return self._generator_bind
- @property
- def receiver(self):
return self._receiver
- @property
- def receiver_bind(self):
return self._receiver_bind
- @property
- def msg_size(self):
return self._msg_size
- @property
- def duration(self):
return self._duration
- @property
- def parallel_streams(self):
return self._parallel_streams
+class FlowMeasurementResults(object):
- def __init__(self, flow):
self._flow = flow
self._generator_results = None
self._generator_cpu_stats = None
self._receiver_results = None
self._receiver_cpu_stats = None
- @property
- def flow(self):
return self._flow
- @property
- def generator_results(self):
return self._generator_results
- @generator_results.setter
- def generator_results(self, value):
self._generator_results = value
- @property
- def generator_cpu_stats(self):
return self._generator_cpu_stats
- @generator_cpu_stats.setter
- def generator_cpu_stats(self, value):
self._generator_cpu_stats = value
- @property
- def receiver_results(self):
return self._receiver_results
- @receiver_results.setter
- def receiver_results(self, value):
self._receiver_results = value
- @property
- def receiver_cpu_stats(self):
return self._receiver_cpu_stats
- @receiver_cpu_stats.setter
- def receiver_cpu_stats(self, value):
self._receiver_cpu_stats = value
+class AggregatedFlowMeasurementResults(FlowMeasurementResults):
- def __init__(self, flow):
self._flow = flow
self._generator_results = SequentialPerfResult()
self._generator_cpu_stats = SequentialPerfResult()
self._receiver_results = SequentialPerfResult()
self._receiver_cpu_stats = SequentialPerfResult()
self._individual_results = []
- @property
- def individual_results(self):
return self._individual_results
- def add_results(self, results):
if results is None:
return
elif isinstance(results, AggregatedFlowMeasurementResults):
self.individual_results.extend(results.individual_results)
self.generator_results.extend(results.generator_results)
self.generator_cpu_stats.extend(results.generator_cpu_stats)
self.receiver_results.extend(results.receiver_results)
self.receiver_cpu_stats.extend(results.receiver_cpu_stats)
elif isinstance(results, FlowMeasurementResults):
self.individual_results.append(results)
^^^^^^^ Shouldn't this be append(results.individual_results)?
self.generator_results.append(results.generator_results)
self.generator_cpu_stats.append(results.generator_cpu_stats)
self.receiver_results.append(results.receiver_results)
self.receiver_cpu_stats.append(results.receiver_cpu_stats)
else:
raise MeasurementError("Adding incorrect results.")
+class BaseFlowMeasurement(BaseMeasurement):
- @classmethod
- def report_results(cls, recipe, results):
for flow_results in results:
cls._report_flow_results(recipe, flow_results)
- @classmethod
- def evaluate_results(cls, recipe, results):
#TODO split off into a separate evaluator class
for flow_results in results:
if flow_results.generator_results.average > 0:
recipe.add_result(True, "Generator reported non-zero throughput")
else:
recipe.add_result(False, "Generator reported zero throughput")
if flow_results.receiver_results.average > 0:
recipe.add_result(True, "Receiver reported non-zero throughput")
else:
recipe.add_result(False, "Receiver reported zero throughput")
- @classmethod
- def _report_flow_results(cls, recipe, flow_results):
generator = flow_results.generator_results
generator_cpu = flow_results.generator_cpu_stats
receiver = flow_results.receiver_results
receiver_cpu = flow_results.receiver_cpu_stats
desc = []
desc.append("Generator measured throughput: {tput:.2f} +-{deviation:.2f}({percentage:.2f}%) {unit} per second."
.format(tput=generator.average,
deviation=generator.std_deviation,
percentage=(generator.std_deviation/generator.average) * 100,
unit=generator.unit))
desc.append("Generator process CPU data: {cpu:.2f} +-{cpu_deviation:.2f} {cpu_unit} per second."
.format(cpu=generator_cpu.average,
cpu_deviation=generator_cpu.std_deviation,
cpu_unit=generator_cpu.unit))
desc.append("Receiver measured throughput: {tput:.2f} +-{deviation:.2f}({percentage:.2}%) {unit} per second."
.format(tput=receiver.average,
deviation=receiver.std_deviation,
percentage=(receiver.std_deviation/receiver.average) * 100,
unit=receiver.unit))
desc.append("Receiver process CPU data: {cpu:.2f} +-{cpu_deviation:.2f} {cpu_unit} per second."
.format(cpu=receiver_cpu.average,
cpu_deviation=receiver_cpu.std_deviation,
cpu_unit=receiver_cpu.unit))
#TODO add flow description
recipe.add_result(True, "\n".join(desc), data = dict(
generator_flow_data=generator,
generator_cpu_data=generator_cpu,
receiver_flow_data=receiver,
receiver_cpu_data=receiver_cpu))
- @classmethod
- def aggregate_results(cls, old, new):
aggregated = []
if old is None:
old = [None] * len(new)
for old_flow, new_flow in zip(old, new):
aggregated.append(cls._aggregate_flows(old_flow, new_flow))
return aggregated
- @classmethod
- def _aggregate_flows(cls, old_flow, new_flow):
if old_flow is not None and old_flow.flow is not new_flow.flow:
raise MeasurementError("Aggregating incompatible Flows")
new_result = AggregatedFlowMeasurementResults(new_flow.flow)
new_result.add_results(old_flow)
new_result.add_results(new_flow)
return new_result
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py new file mode 100644 index 0000000..8059308 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py @@ -0,0 +1,29 @@ +class BaseMeasurement(object):
- def __init__(self, conf):
self._conf = conf
- @property
- def conf(self):
return self._conf
- def start(self):
raise NotImplementedError()
- def finish(self):
raise NotImplementedError()
- def collect_results(self):
raise NotImplementedError()
- @classmethod
- def report_results(cls, recipe, results):
raise NotImplementedError()
- @classmethod
- def evaluate_results(cls, recipe, results):
#TODO split off into separate evaluator classes
raise NotImplementedError()
- @classmethod
- def aggregate_results(cls, first, second):
raise NotImplementedError()
diff --git a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py new file mode 100644 index 0000000..c792e9d --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py @@ -0,0 +1,157 @@ +import time
+from lnst.Common.IpAddress import ipaddress
+from lnst.Controller.Recipe import RecipeError +from lnst.Controller.RecipeResults import ResultLevel
+from lnst.RecipeCommon.Perf.Results import PerfInterval +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import BaseFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import FlowMeasurementResults
+from lnst.Tests.Iperf import IperfClient, IperfServer
+class IperfFlowMeasurement(BaseFlowMeasurement):
- def __init__(self, *args):
super(IperfFlowMeasurement, self).__init__(*args)
self._running_measurements = []
self._finished_measurements = []
- def start(self):
if len(self._running_measurements) > 0:
raise MeasurementError("Measurement already running!")
test_flows = self._prepare_test_flows(self._conf)
result = None
for flow in test_flows:
flow.server_job.start(bg=True)
for flow in test_flows:
flow.client_job.start(bg=True)
self._running_measurements = test_flows
- def finish(self):
test_flows = self._running_measurements
try:
for flow in test_flows:
client_iperf = flow.client_job.what
flow.client_job.wait(timeout=client_iperf.runtime_estimate())
flow.server_job.wait(timeout=5)
finally:
for flow in test_flows:
if not flow.server_job.finished:
flow.server_job.kill()
if not flow.client_job.finished:
flow.client_job.kill()
Just wondering if the kill() method could handle the .finished check automatically: if the job is already finished, don't do anything.
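For the record, something like the following helper would cover it (purely a hypothetical sketch of the suggestion, not the current Job API; the default signal is an assumption):

    import signal

    def kill_if_unfinished(job, sig=signal.SIGKILL):
        # only send the signal if the job is still running, so callers
        # don't have to repeat the .finished check themselves
        if job.finished:
            return False
        return job.kill(sig)

The cleanup loops in finish() would then collapse to a single call per job.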
self._running_measurements = []
self._finished_measurements = test_flows
- def collect_results(self):
test_flows = self._finished_measurements
results = []
for test_flow in test_flows:
flow_results = FlowMeasurementResults(test_flow.flow)
flow_results.generator_results = self._parse_job_streams(
test_flow.client_job)
flow_results.generator_cpu_stats = self._parse_job_cpu(
test_flow.client_job)
flow_results.receiver_results = self._parse_job_streams(
test_flow.server_job)
flow_results.receiver_cpu_stats = self._parse_job_cpu(
test_flow.server_job)
results.append(flow_results)
return results
- def _prepare_test_flows(self, flows):
test_flows = []
for flow in flows:
server_job = self._prepare_server(flow)
client_job = self._prepare_client(flow)
test_flow = NetworkFlowTest(flow, server_job, client_job)
test_flows.append(test_flow)
return test_flows
- def _prepare_server(self, flow):
host = flow.receiver
server_params = dict(bind = ipaddress(flow.receiver_bind),
oneoff = True)
return host.prepare_job(IperfServer(**server_params),
job_level=ResultLevel.NORMAL)
- def _prepare_client(self, flow):
host = flow.generator
client_params = dict(server = ipaddress(flow.receiver_bind),
duration = flow.duration)
if flow.type == "tcp_stream":
#tcp stream is the default for iperf3
pass
elif flow.type == "udp_stream":
client_params["udp"] = True
elif flow.type == "sctp_stream":
client_params["sctp"] = True
else:
raise RecipeError("Unsupported flow type '{}'".format(flow.type))
if flow.parallel_streams > 1:
client_params["parallel"] = flow.parallel_streams
if flow.msg_size:
client_params["blksize"] = flow.msg_size
return host.prepare_job(IperfClient(**client_params),
job_level=ResultLevel.NORMAL)
- def _parse_job_streams(self, job):
result = ParallelPerfResult()
if not job.passed:
result.append(PerfInterval(0, 0, "bits"))
else:
for i in job.result["data"]["end"]["streams"]:
result.append(SequentialPerfResult())
for interval in job.result["data"]["intervals"]:
for i, stream in enumerate(interval["streams"]):
result[i].append(PerfInterval(stream["bytes"] * 8,
stream["seconds"],
"bits"))
return result
- def _parse_job_cpu(self, job):
if not job.passed:
return PerfInterval(0, 0, "cpu_percent")
else:
cpu_percent = job.result["data"]["end"]["cpu_utilization_percent"]["host_total"]
return PerfInterval(cpu_percent, 1, "cpu_percent")
+class NetworkFlowTest(object):
- def __init__(self, flow, server_job, client_job):
self._flow = flow
self._server_job = server_job
self._client_job = client_job
- @property
- def flow(self):
return self._flow
- @property
- def server_job(self):
return self._server_job
- @property
- def client_job(self):
return self._client_job
- @property
- def duration(self):
return self._flow.duration
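As a side note for reviewers, the parsing above assumes roughly the following shape of the iperf3 JSON output that the Iperf test module stores under job.result["data"] (abbreviated to the keys that are actually read; the values are made up):

    # assumed iperf3 JSON structure read by _parse_job_streams() and
    # _parse_job_cpu(); values are illustrative only
    iperf_data = {
        "intervals": [
            # one entry per reporting interval, each with one entry per parallel stream
            {"streams": [{"bytes": 1312500000, "seconds": 1.0}]},
            {"streams": [{"bytes": 1318200000, "seconds": 1.0}]},
        ],
        "end": {
            # only used to find out how many parallel streams there were
            "streams": [{}],
            "cpu_utilization_percent": {"host_total": 38.7},
        },
    }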
diff --git a/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py b/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py new file mode 100644 index 0000000..66ed168 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py @@ -0,0 +1,4 @@ +from lnst.Common.LnstError import LnstError
+class MeasurementError(LnstError):
- pass
diff --git a/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py new file mode 100644 index 0000000..14e7f73 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py @@ -0,0 +1,88 @@ +import signal
+from lnst.RecipeCommon.Perf.Results import PerfInterval +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult +from lnst.RecipeCommon.Perf.Measurements.BaseCPUMeasurement import BaseCPUMeasurement +from lnst.RecipeCommon.Perf.Measurements.BaseCPUMeasurement import CPUMeasurementResults
+from lnst.Tests.CPUStatMonitor import CPUStatMonitor
+class StatCPUMeasurementResults(CPUMeasurementResults):
- def __init__(self, *args):
super(StatCPUMeasurementResults, self).__init__(*args)
self._data = {}
- def update_intervals(self, intervals):
for key, interval in intervals.items():
if key not in self._data:
self._data[key] = SequentialPerfResult()
self._data[key].append(interval)
- @property
- def utilization(self):
return ParallelPerfResult([self._data["user"], self._data["nice"],
self._data["system"], self._data["irq"], self._data["softirq"],
self._data["steal"]])
+class StatCPUMeasurement(BaseCPUMeasurement):
- def __init__(self, *args):
super(StatCPUMeasurement, self).__init__(*args)
self._running_measurements = []
self._finished_measurements = []
- def start(self):
jobs = []
for host in self._conf:
jobs.append(host.run(CPUStatMonitor(interval=1000),bg=True))
self._running_measurements = jobs
- def finish(self):
jobs = self._running_measurements
try:
for job in jobs:
job.kill(signal.SIGINT)
job.wait()
finally:
for job in jobs:
if not job.finished:
job.kill()
Same comment applies here for the job.finished check + job.kill() combination. Or does this save an RPC call?
self._running_measurements = []
self._finished_measurements = jobs
- def collect_results(self):
results = []
for job in self._finished_measurements:
job_results = self._process_job(job)
results.extend(job_results)
return results
- def _process_job(self, job):
host = job.host
job_results = {}
for sample in job.result["data"]:
parsed_sample = self._parse_sample(sample)
for cpu, cpu_intervals in parsed_sample.items():
if cpu not in job_results:
job_results[cpu] = StatCPUMeasurementResults(host, cpu)
cpu_results = job_results[cpu]
cpu_results.update_intervals(cpu_intervals)
return job_results.values()
- def _parse_sample(self, sample):
result = {}
duration = sample["duration"]
for key, value in sample.items():
if key.startswith("cpu"):
result[key] = self._create_cpu_intervals(duration, value)
return result
- def _create_cpu_intervals(self, duration, cpu_intervals):
result = {}
for key, value in cpu_intervals.items():
result[key] = PerfInterval(value, duration, "time units")
return result
diff --git a/lnst/RecipeCommon/TRexMeasurementTool.py b/lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py similarity index 100% rename from lnst/RecipeCommon/TRexMeasurementTool.py rename to lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py diff --git a/lnst/RecipeCommon/Perf/Measurements/__init__.py b/lnst/RecipeCommon/Perf/Measurements/__init__.py new file mode 100644 index 0000000..781e641 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/__init__.py @@ -0,0 +1,3 @@ +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import Flow +from lnst.RecipeCommon.Perf.Measurements.IperfFlowMeasurement import IperfFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements.StatCPUMeasurement import StatCPUMeasurement diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py index a26d999..d7d1aec 100644 --- a/lnst/Recipes/ENRT/BaseEnrtRecipe.py +++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py @@ -1,4 +1,3 @@
from lnst.Common.LnstError import LnstError from lnst.Common.Parameters import Param, IntParam, StrParam, BoolParam from lnst.Common.IpAddress import AF_INET, AF_INET6 @@ -8,7 +7,9 @@ from lnst.Controller.Recipe import BaseRecipe from lnst.RecipeCommon.Ping import PingTestAndEvaluate, PingConf from lnst.RecipeCommon.Perf.Recipe import Recipe as PerfRecipe from lnst.RecipeCommon.Perf.Recipe import RecipeConf as PerfRecipeConf -from lnst.RecipeCommon.IperfMeasurementTool import IperfMeasurementTool +from lnst.RecipeCommon.Perf.Measurements import Flow as PerfFlow +from lnst.RecipeCommon.Perf.Measurements import IperfFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements import StatCPUMeasurement
class EnrtConfiguration(object): def __init__(self): @@ -79,14 +80,16 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe):
perf_duration = IntParam(default=60) perf_iterations = IntParam(default=5)
- perf_streams = IntParam(default=1)
perf_parallel_streams = IntParam(default=1) perf_msg_size = IntParam(default=123)
perf_usr_comment = StrParam(default="")
perf_max_deviation = IntParam(default=10) #TODO required?
- perf_tool = Param(default=IperfMeasurementTool)
net_perf_tool = Param(default=IperfFlowMeasurement)
cpu_perf_tool = Param(default=StatCPUMeasurement)
def test(self): main_config = self.test_wide_configuration()
@@ -188,8 +191,22 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe): server_bind = server_nic.ips_filter(family=family)[0]
for perf_test in self.params.perf_tests:
flow = PerfFlow(
type = perf_test,
generator = client_netns,
generator_bind = client_bind,
receiver = server_netns,
receiver_bind = server_bind,
msg_size = self.params.perf_msg_size,
duration = self.params.perf_duration,
parallel_streams = self.params.perf_parallel_streams)
flow_measurement = self.params.net_perf_tool([flow])
yield PerfRecipeConf(
measurements=[ ],
measurements=[
self.params.cpu_perf_tool([client_netns, server_netns]),
flow_measurement
], iterations=self.params.perf_iterations)
def _pin_dev_interrupts(self, dev, cpu):
--
2.19.1
On Thu, Nov 15, 2018 at 10:59:56AM +0100, Jan Tluka wrote:
Wed, Nov 14, 2018 at 04:04:47PM CET, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
This is the second part of the refactoring of the PerfTestAndEvaluate recipe workflow. In generic terms, it introduces a new package that stores a class hierarchy for the various Measurement types and implementations.
At the base level there is the BaseMeasurement class and module that defines the interface that all the other classes have to implement. This interface is understood and relied upon by the lnst.RecipeCommon.Perf.Recipe class that uses it.
The refactoring includes a move and rename of the IperfMeasurementTool and TRexMeasurementTool into the new IperfFlowMeasurement and TRexMeasurement classes/modules, and the addition of the new StatCPUMeasurement class that uses the CPUStatMonitor test module to measure CPU utilization.
Finally these changes are added to the BaseEnrtRecipe so that everything stays working.
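To make the interface concrete, here is a minimal sketch (not taken from the patch) of how a Perf recipe could drive a set of measurements through the BaseMeasurement API shown below; the exact way lnst.RecipeCommon.Perf.Recipe sequences these calls is an assumption of the sketch:

    def perf_iteration(measurements):
        # start every measurement, then stop them again (the sketch assumes
        # finish() is called in reverse start order)
        for measurement in measurements:
            measurement.start()
        for measurement in reversed(measurements):
            measurement.finish()
        # collect per-measurement results for later aggregation and reporting
        return {m: m.collect_results() for m in measurements}

    def perf_report(recipe, aggregated_results):
        for measurement, results in aggregated_results.items():
            measurement.report_results(recipe, results)
            measurement.evaluate_results(recipe, results)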
v2:
- fixed typo in IperfFlowMeasurement - the unit for cpu utilization returned by Iperf should be "cpu_percent".
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
lnst/RecipeCommon/IperfMeasurementTool.py | 83 ------- .../Perf/Measurements/BaseCPUMeasurement.py | 109 ++++++++++ .../Perf/Measurements/BaseFlowMeasurement.py | 202 ++++++++++++++++++ .../Perf/Measurements/BaseMeasurement.py | 29 +++ .../Perf/Measurements/IperfFlowMeasurement.py | 157 ++++++++++++++ .../Perf/Measurements/MeasurementError.py | 4 + .../Perf/Measurements/StatCPUMeasurement.py | 88 ++++++++ .../Measurements/TRexMeasurement.py} | 0 .../Perf/Measurements/__init__.py | 3 + lnst/Recipes/ENRT/BaseEnrtRecipe.py | 27 ++- 10 files changed, 614 insertions(+), 88 deletions(-) delete mode 100644 lnst/RecipeCommon/IperfMeasurementTool.py create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py create mode 100644 lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py create mode 100644 lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py create mode 100644 lnst/RecipeCommon/Perf/Measurements/MeasurementError.py create mode 100644 lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py rename lnst/RecipeCommon/{TRexMeasurementTool.py => Perf/Measurements/TRexMeasurement.py} (100%) create mode 100644 lnst/RecipeCommon/Perf/Measurements/__init__.py
diff --git a/lnst/RecipeCommon/IperfMeasurementTool.py b/lnst/RecipeCommon/IperfMeasurementTool.py deleted file mode 100644 index 9f2e49e..0000000 --- a/lnst/RecipeCommon/IperfMeasurementTool.py +++ /dev/null @@ -1,83 +0,0 @@ -import time -import signal -from lnst.Common.IpAddress import ipaddress -from lnst.Controller.Recipe import RecipeError -from lnst.Controller.RecipeResults import ResultLevel -from lnst.RecipeCommon.Perf import PerfConf, PerfMeasurementTool -from lnst.RecipeCommon.PerfResult import PerfInterval, StreamPerf -from lnst.RecipeCommon.PerfResult import MultiStreamPerf -from lnst.Tests.Iperf import IperfClient, IperfServer
-class IperfMeasurementTool(PerfMeasurementTool):
- @staticmethod
- def perf_measure(perf_conf):
_iperf_duration_overhead = 5
server_params = dict(bind = ipaddress(perf_conf.receiver_bind),
oneoff = True)
client_params = dict(server = server_params["bind"],
duration = perf_conf.duration,
parallel = perf_conf.streams)
if perf_conf.test_type == "tcp_stream":
#tcp stream is the default for iperf3
pass
elif perf_conf.test_type == "udp_stream":
client_params["udp"] = True
elif perf_conf.test_type == "sctp_stream":
client_params["sctp"] = True
else:
raise RecipeError("Unsupported test type '{}'"
.format(perf_conf.test_type))
server = IperfServer(**server_params)
client = IperfClient(**client_params)
server_host = perf_conf.receiver
client_host = perf_conf.generator
result = None
try:
server_job = server_host.run(server, bg=True,
job_level=ResultLevel.NORMAL)
#wait for server to start, TODO can this be improved?
time.sleep(2)
duration = client.params.duration + _iperf_duration_overhead
client_job = client_host.run(client, timeout=duration,
job_level=ResultLevel.NORMAL)
server_job.wait(timeout=5)
finally:
if client_job and not client_job.finished:
client_job.kill()
if server_job and not server_job.finished:
server_job.kill()
#TODO return something if not passed
if client_job.passed:
client_result = MultiStreamPerf()
for i in client_job.result["data"]["end"]["streams"]:
client_result.append(StreamPerf())
for interval in client_job.result["data"]["intervals"]:
for i, stream in enumerate(interval["streams"]):
client_result[i].append(PerfInterval(stream["bytes"] * 8,
stream["seconds"],
"bits"))
#TODO return something if not passed
if server_job.passed:
server_result = MultiStreamPerf()
for i in server_job.result["data"]["end"]["streams"]:
server_result.append(StreamPerf())
for interval in server_job.result["data"]["intervals"]:
for i, stream in enumerate(interval["streams"]):
server_result[i].append(PerfInterval(stream["bytes"] * 8,
stream["seconds"],
"bits"))
return client_result, server_result
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py new file mode 100644 index 0000000..2507f3c --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py @@ -0,0 +1,109 @@ +import signal +from lnst.RecipeCommon.Perf.Measurements.MeasurementError import MeasurementError +from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult
+class CPUMeasurementResults(object):
- def __init__(self, host, cpu):
self._host = host
self._cpu = cpu
- @property
- def host(self):
return self._host
- @property
- def cpu(self):
return self._cpu
- @property
- def utilization(self):
raise NotImplementedError()
+class AggregatedCPUMeasurementResults(CPUMeasurementResults):
- def __init__(self, host, cpu):
super(AggregatedCPUMeasurementResults, self).__init__(host, cpu)
self._individual_results = []
- @property
- def individual_results(self):
return self._individual_results
- @property
- def utilization(self):
return SequentialPerfResult([i.utilization
for i in self.individual_results])
- def add_results(self, results):
if results is None:
return
elif isinstance(results, AggregatedCPUMeasurementResults):
self.individual_results.extend(results.individual_results)
elif isinstance(results, CPUMeasurementResults):
self.individual_results.append(results)
else:
raise MeasurementError("Adding incorrect results.")
+class BaseCPUMeasurement(BaseMeasurement):
- @classmethod
- def aggregate_results(cls, old, new):
aggregated = []
if old is None:
old = [None] * len(new)
for old_measurements, new_measurements in zip(old, new):
aggregated.append(cls._aggregate_hostcpu_results(
old_measurements, new_measurements))
return aggregated
- @classmethod
- def report_results(cls, recipe, results):
results_by_host = cls._divide_results_by_host(results)
for host_results in results_by_host.values():
cls._report_host_results(recipe, host_results)
- @classmethod
- def evaluate_results(cls, recipe, results):
#TODO split off into a separate evaluator class
for result in results:
recipe.add_result(True,
"Base CPU evaluation for host {}, cpu {}".format(
result.host.hostid, result.cpu))
- @classmethod
- def _divide_results_by_host(cls, results):
results_by_host = {}
for result in results:
if result.host not in results_by_host:
results_by_host[result.host] = []
results_by_host[result.host].append(result)
return results_by_host
- @classmethod
- def _report_host_results(cls, recipe, results):
if not len(results):
return
cpu_data = {}
desc = ["CPU Utilization on host {host}:".format(
host=results[0].host.hostid)]
for result in results:
utilization = result.utilization
cpu_data[result.cpu] = utilization
desc.append("cpu '{cpu}': {average:.2f} +-{deviation:.2f} {unit} per second"
.format(cpu=result.cpu,
average=utilization.average,
deviation=utilization.std_deviation,
unit=utilization.unit))
recipe.add_result(True, "\n".join(desc), data=cpu_data)
- @classmethod
- def _aggregate_hostcpu_results(cls, old, new):
if (old is not None and
(old.host is not new.host or old.cpu != new.cpu)):
raise MeasurementError("Aggregating incompatible CPU Results")
new_result = AggregatedCPUMeasurementResults(new.host, new.cpu)
new_result.add_results(old)
new_result.add_results(new)
return new_result
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py new file mode 100644 index 0000000..203e104 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py @@ -0,0 +1,202 @@ +import signal +from lnst.RecipeCommon.Perf.Measurements.MeasurementError import MeasurementError +from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult
+class Flow(object):
- def __init__(self,
type,
generator, generator_bind,
receiver, receiver_bind,
msg_size, duration, parallel_streams):
self._type = type
self._generator = generator
self._generator_bind = generator_bind
self._receiver = receiver
self._receiver_bind = receiver_bind
self._msg_size = msg_size
self._duration = duration
self._parallel_streams = parallel_streams
- @property
- def type(self):
return self._type
- @property
- def generator(self):
return self._generator
- @property
- def generator_bind(self):
return self._generator_bind
- @property
- def receiver(self):
return self._receiver
- @property
- def receiver_bind(self):
return self._receiver_bind
- @property
- def msg_size(self):
return self._msg_size
- @property
- def duration(self):
return self._duration
- @property
- def parallel_streams(self):
return self._parallel_streams
+class FlowMeasurementResults(object):
- def __init__(self, flow):
self._flow = flow
self._generator_results = None
self._generator_cpu_stats = None
self._receiver_results = None
self._receiver_cpu_stats = None
- @property
- def flow(self):
return self._flow
- @property
- def generator_results(self):
return self._generator_results
- @generator_results.setter
- def generator_results(self, value):
self._generator_results = value
- @property
- def generator_cpu_stats(self):
return self._generator_cpu_stats
- @generator_cpu_stats.setter
- def generator_cpu_stats(self, value):
self._generator_cpu_stats = value
- @property
- def receiver_results(self):
return self._receiver_results
- @receiver_results.setter
- def receiver_results(self, value):
self._receiver_results = value
- @property
- def receiver_cpu_stats(self):
return self._receiver_cpu_stats
- @receiver_cpu_stats.setter
- def receiver_cpu_stats(self, value):
self._receiver_cpu_stats = value
+class AggregatedFlowMeasurementResults(FlowMeasurementResults):
- def __init__(self, flow):
self._flow = flow
self._generator_results = SequentialPerfResult()
self._generator_cpu_stats = SequentialPerfResult()
self._receiver_results = SequentialPerfResult()
self._receiver_cpu_stats = SequentialPerfResult()
self._individual_results = []
- @property
- def individual_results(self):
return self._individual_results
- def add_results(self, results):
if results is None:
return
elif isinstance(results, AggregatedFlowMeasurementResults):
self.individual_results.extend(results.individual_results)
self.generator_results.extend(results.generator_results)
self.generator_cpu_stats.extend(results.generator_cpu_stats)
self.receiver_results.extend(results.receiver_results)
self.receiver_cpu_stats.extend(results.receiver_cpu_stats)
elif isinstance(results, FlowMeasurementResults):
self.individual_results.append(results)
^^^^^^^
Should not this be append(results.individual_results) ?
No. In this branch, results is a plain FlowMeasurementResults object, which doesn't have an individual_results attribute; that attribute only exists on the AggregatedFlowMeasurementResults class. (See the short illustration after the end of add_results below.)
self.generator_results.append(results.generator_results)
self.generator_cpu_stats.append(results.generator_cpu_stats)
self.receiver_results.append(results.receiver_results)
self.receiver_cpu_stats.append(results.receiver_cpu_stats)
else:
raise MeasurementError("Adding incorrect results.")
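For illustration only (not part of the patch, and assuming SequentialPerfResult behaves like a list), aggregating the results of two iterations of the same flow ends up with:

    # results_1 / results_2 stand for the collect_results() output of two
    # consecutive iterations measuring the same Flow object
    aggregated = BaseFlowMeasurement.aggregate_results(None, results_1)
    aggregated = BaseFlowMeasurement.aggregate_results(aggregated, results_2)

    flow_result = aggregated[0]
    # one plain FlowMeasurementResults entry per iteration
    assert len(flow_result.individual_results) == 2
    # the per-field sequences grow the same way
    assert len(flow_result.generator_results) == 2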
+class BaseFlowMeasurement(BaseMeasurement):
- @classmethod
- def report_results(cls, recipe, results):
for flow_results in results:
cls._report_flow_results(recipe, flow_results)
- @classmethod
- def evaluate_results(cls, recipe, results):
#TODO split off into a separate evaluator class
for flow_results in results:
if flow_results.generator_results.average > 0:
recipe.add_result(True, "Generator reported non-zero throughput")
else:
recipe.add_result(False, "Generator reported zero throughput")
if flow_results.receiver_results.average > 0:
recipe.add_result(True, "Receiver reported non-zero throughput")
else:
recipe.add_result(False, "Receiver reported zero throughput")
- @classmethod
- def _report_flow_results(cls, recipe, flow_results):
generator = flow_results.generator_results
generator_cpu = flow_results.generator_cpu_stats
receiver = flow_results.receiver_results
receiver_cpu = flow_results.receiver_cpu_stats
desc = []
desc.append("Generator measured throughput: {tput:.2f} +-{deviation:.2f}({percentage:.2f}%) {unit} per second."
.format(tput=generator.average,
deviation=generator.std_deviation,
percentage=(generator.std_deviation/generator.average) * 100,
unit=generator.unit))
desc.append("Generator process CPU data: {cpu:.2f} +-{cpu_deviation:.2f} {cpu_unit} per second."
.format(cpu=generator_cpu.average,
cpu_deviation=generator_cpu.std_deviation,
cpu_unit=generator_cpu.unit))
desc.append("Receiver measured throughput: {tput:.2f} +-{deviation:.2f}({percentage:.2}%) {unit} per second."
.format(tput=receiver.average,
deviation=receiver.std_deviation,
percentage=(receiver.std_deviation/receiver.average) * 100,
unit=receiver.unit))
desc.append("Receiver process CPU data: {cpu:.2f} +-{cpu_deviation:.2f} {cpu_unit} per second."
.format(cpu=receiver_cpu.average,
cpu_deviation=receiver_cpu.std_deviation,
cpu_unit=receiver_cpu.unit))
#TODO add flow description
recipe.add_result(True, "\n".join(desc), data = dict(
generator_flow_data=generator,
generator_cpu_data=generator_cpu,
receiver_flow_data=receiver,
receiver_cpu_data=receiver_cpu))
- @classmethod
- def aggregate_results(cls, old, new):
aggregated = []
if old is None:
old = [None] * len(new)
for old_flow, new_flow in zip(old, new):
aggregated.append(cls._aggregate_flows(old_flow, new_flow))
return aggregated
- @classmethod
- def _aggregate_flows(cls, old_flow, new_flow):
if old_flow is not None and old_flow.flow is not new_flow.flow:
raise MeasurementError("Aggregating incompatible Flows")
new_result = AggregatedFlowMeasurementResults(new_flow.flow)
new_result.add_results(old_flow)
new_result.add_results(new_flow)
return new_result
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py new file mode 100644 index 0000000..8059308 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py @@ -0,0 +1,29 @@ +class BaseMeasurement(object):
- def __init__(self, conf):
self._conf = conf
- @property
- def conf(self):
return self._conf
- def start(self):
raise NotImplementedError()
- def finish(self):
raise NotImplementedError()
- def collect_results(self):
raise NotImplementedError()
- @classmethod
- def report_results(cls, recipe, results):
raise NotImplementedError()
- @classmethod
- def evaluate_results(cls, recipe, results):
#TODO split off into separate evaluator classes
raise NotImplementedError()
- @classmethod
- def aggregate_results(cls, first, second):
raise NotImplementedError()
diff --git a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py new file mode 100644 index 0000000..c792e9d --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py @@ -0,0 +1,157 @@ +import time
+from lnst.Common.IpAddress import ipaddress
+from lnst.Controller.Recipe import RecipeError +from lnst.Controller.RecipeResults import ResultLevel
+from lnst.RecipeCommon.Perf.Results import PerfInterval +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import BaseFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import FlowMeasurementResults
+from lnst.Tests.Iperf import IperfClient, IperfServer
+class IperfFlowMeasurement(BaseFlowMeasurement):
- def __init__(self, *args):
super(IperfFlowMeasurement, self).__init__(*args)
self._running_measurements = []
self._finished_measurements = []
- def start(self):
if len(self._running_measurements) > 0:
raise MeasurementError("Measurement already running!")
test_flows = self._prepare_test_flows(self._conf)
result = None
for flow in test_flows:
flow.server_job.start(bg=True)
for flow in test_flows:
flow.client_job.start(bg=True)
self._running_measurements = test_flows
- def finish(self):
test_flows = self._running_measurements
try:
for flow in test_flows:
client_iperf = flow.client_job.what
flow.client_job.wait(timeout=client_iperf.runtime_estimate())
flow.server_job.wait(timeout=5)
finally:
for flow in test_flows:
if not flow.server_job.finished:
flow.server_job.kill()
if not flow.client_job.finished:
flow.client_job.kill()
Just wondering if the kill() method could handle the .finished check automatically: if the job is already finished, don't do anything.
Ok, that sounds reasonable, but since this is a generic lnst.Controller.Job API change, I'll do it as a separate change from this patch. (A rough sketch of what that could look like follows after the end of finish() below.)
self._running_measurements = []
self._finished_measurements = test_flows
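A rough sketch of the suggested Job API change, only to illustrate the idea; the default signal value and the internal call used to reach the slave are assumptions, and the real implementation would live in lnst.Controller.Job:

    def kill(self, sig=signal.SIGKILL):
        if self.finished:
            # the job already finished, nothing to kill and no RPC needed
            return True
        return self._kill_on_slave(sig)  # hypothetical internal call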
- def collect_results(self):
test_flows = self._finished_measurements
results = []
for test_flow in test_flows:
flow_results = FlowMeasurementResults(test_flow.flow)
flow_results.generator_results = self._parse_job_streams(
test_flow.client_job)
flow_results.generator_cpu_stats = self._parse_job_cpu(
test_flow.client_job)
flow_results.receiver_results = self._parse_job_streams(
test_flow.server_job)
flow_results.receiver_cpu_stats = self._parse_job_cpu(
test_flow.server_job)
results.append(flow_results)
return results
- def _prepare_test_flows(self, flows):
test_flows = []
for flow in flows:
server_job = self._prepare_server(flow)
client_job = self._prepare_client(flow)
test_flow = NetworkFlowTest(flow, server_job, client_job)
test_flows.append(test_flow)
return test_flows
- def _prepare_server(self, flow):
host = flow.receiver
server_params = dict(bind = ipaddress(flow.receiver_bind),
oneoff = True)
return host.prepare_job(IperfServer(**server_params),
job_level=ResultLevel.NORMAL)
- def _prepare_client(self, flow):
host = flow.generator
client_params = dict(server = ipaddress(flow.receiver_bind),
duration = flow.duration)
if flow.type == "tcp_stream":
#tcp stream is the default for iperf3
pass
elif flow.type == "udp_stream":
client_params["udp"] = True
elif flow.type == "sctp_stream":
client_params["sctp"] = True
else:
raise RecipeError("Unsupported flow type '{}'".format(flow.type))
if flow.parallel_streams > 1:
client_params["parallel"] = flow.parallel_streams
if flow.msg_size:
client_params["blksize"] = flow.msg_size
return host.prepare_job(IperfClient(**client_params),
job_level=ResultLevel.NORMAL)
- def _parse_job_streams(self, job):
result = ParallelPerfResult()
if not job.passed:
result.append(PerfInterval(0, 0, "bits"))
else:
for i in job.result["data"]["end"]["streams"]:
result.append(SequentialPerfResult())
for interval in job.result["data"]["intervals"]:
for i, stream in enumerate(interval["streams"]):
result[i].append(PerfInterval(stream["bytes"] * 8,
stream["seconds"],
"bits"))
return result
- def _parse_job_cpu(self, job):
if not job.passed:
return PerfInterval(0, 0, "cpu_percent")
else:
cpu_percent = job.result["data"]["end"]["cpu_utilization_percent"]["host_total"]
return PerfInterval(cpu_percent, 1, "cpu_percent")
+class NetworkFlowTest(object):
- def __init__(self, flow, server_job, client_job):
self._flow = flow
self._server_job = server_job
self._client_job = client_job
- @property
- def flow(self):
return self._flow
- @property
- def server_job(self):
return self._server_job
- @property
- def client_job(self):
return self._client_job
- @property
- def duration(self):
return self._flow.duration
diff --git a/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py b/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py new file mode 100644 index 0000000..66ed168 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/MeasurementError.py @@ -0,0 +1,4 @@ +from lnst.Common.LnstError import LnstError
+class MeasurementError(LnstError):
- pass
diff --git a/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py new file mode 100644 index 0000000..14e7f73 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py @@ -0,0 +1,88 @@ +import signal
+from lnst.RecipeCommon.Perf.Results import PerfInterval +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult +from lnst.RecipeCommon.Perf.Measurements.BaseCPUMeasurement import BaseCPUMeasurement +from lnst.RecipeCommon.Perf.Measurements.BaseCPUMeasurement import CPUMeasurementResults
+from lnst.Tests.CPUStatMonitor import CPUStatMonitor
+class StatCPUMeasurementResults(CPUMeasurementResults):
- def __init__(self, *args):
super(StatCPUMeasurementResults, self).__init__(*args)
self._data = {}
- def update_intervals(self, intervals):
for key, interval in intervals.items():
if key not in self._data:
self._data[key] = SequentialPerfResult()
self._data[key].append(interval)
- @property
- def utilization(self):
return ParallelPerfResult([self._data["user"], self._data["nice"],
self._data["system"], self._data["irq"], self._data["softirq"],
self._data["steal"]])
+class StatCPUMeasurement(BaseCPUMeasurement):
- def __init__(self, *args):
super(StatCPUMeasurement, self).__init__(*args)
self._running_measurements = []
self._finished_measurements = []
- def start(self):
jobs = []
for host in self._conf:
jobs.append(host.run(CPUStatMonitor(interval=1000),bg=True))
self._running_measurements = jobs
- def finish(self):
jobs = self._running_measurements
try:
for job in jobs:
job.kill(signal.SIGINT)
job.wait()
finally:
for job in jobs:
if not job.finished:
job.kill()
Same comment applies here for the job.finished check + job.kill() combination. Or does this save an RPC call?
self._running_measurements = []
self._finished_measurements = jobs
- def collect_results(self):
results = []
for job in self._finished_measurements:
job_results = self._process_job(job)
results.extend(job_results)
return results
- def _process_job(self, job):
host = job.host
job_results = {}
for sample in job.result["data"]:
parsed_sample = self._parse_sample(sample)
for cpu, cpu_intervals in parsed_sample.items():
if cpu not in job_results:
job_results[cpu] = StatCPUMeasurementResults(host, cpu)
cpu_results = job_results[cpu]
cpu_results.update_intervals(cpu_intervals)
return job_results.values()
- def _parse_sample(self, sample):
result = {}
duration = sample["duration"]
for key, value in sample.items():
if key.startswith("cpu"):
result[key] = self._create_cpu_intervals(duration, value)
return result
- def _create_cpu_intervals(self, duration, cpu_intervals):
result = {}
for key, value in cpu_intervals.items():
result[key] = PerfInterval(value, duration, "time units")
return result
diff --git a/lnst/RecipeCommon/TRexMeasurementTool.py b/lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py similarity index 100% rename from lnst/RecipeCommon/TRexMeasurementTool.py rename to lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py diff --git a/lnst/RecipeCommon/Perf/Measurements/__init__.py b/lnst/RecipeCommon/Perf/Measurements/__init__.py new file mode 100644 index 0000000..781e641 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/__init__.py @@ -0,0 +1,3 @@ +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import Flow +from lnst.RecipeCommon.Perf.Measurements.IperfFlowMeasurement import IperfFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements.StatCPUMeasurement import StatCPUMeasurement diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py index a26d999..d7d1aec 100644 --- a/lnst/Recipes/ENRT/BaseEnrtRecipe.py +++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py @@ -1,4 +1,3 @@
from lnst.Common.LnstError import LnstError from lnst.Common.Parameters import Param, IntParam, StrParam, BoolParam from lnst.Common.IpAddress import AF_INET, AF_INET6 @@ -8,7 +7,9 @@ from lnst.Controller.Recipe import BaseRecipe from lnst.RecipeCommon.Ping import PingTestAndEvaluate, PingConf from lnst.RecipeCommon.Perf.Recipe import Recipe as PerfRecipe from lnst.RecipeCommon.Perf.Recipe import RecipeConf as PerfRecipeConf -from lnst.RecipeCommon.IperfMeasurementTool import IperfMeasurementTool +from lnst.RecipeCommon.Perf.Measurements import Flow as PerfFlow +from lnst.RecipeCommon.Perf.Measurements import IperfFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements import StatCPUMeasurement
class EnrtConfiguration(object): def __init__(self): @@ -79,14 +80,16 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe):
perf_duration = IntParam(default=60) perf_iterations = IntParam(default=5)
- perf_streams = IntParam(default=1)
perf_parallel_streams = IntParam(default=1) perf_msg_size = IntParam(default=123)
perf_usr_comment = StrParam(default="")
perf_max_deviation = IntParam(default=10) #TODO required?
- perf_tool = Param(default=IperfMeasurementTool)
net_perf_tool = Param(default=IperfFlowMeasurement)
cpu_perf_tool = Param(default=StatCPUMeasurement)
def test(self): main_config = self.test_wide_configuration()
@@ -188,8 +191,22 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe): server_bind = server_nic.ips_filter(family=family)[0]
for perf_test in self.params.perf_tests:
flow = PerfFlow(
type = perf_test,
generator = client_netns,
generator_bind = client_bind,
receiver = server_netns,
receiver_bind = server_bind,
msg_size = self.params.perf_msg_size,
duration = self.params.perf_duration,
parallel_streams = self.params.perf_parallel_streams)
flow_measurement = self.params.net_perf_tool([flow])
yield PerfRecipeConf(
measurements=[ ],
measurements=[
self.params.cpu_perf_tool([client_netns, server_netns]),
flow_measurement
], iterations=self.params.perf_iterations)
def _pin_dev_interrupts(self, dev, cpu):
--
2.19.1
From: Ondrej Lichtner olichtne@redhat.com
No reason to use a shorthand... the object will accept multiline descriptions anyway and the SummaryFormatter should be able to deal with that.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/RecipeResults.py | 20 ++++++++++---------- lnst/Controller/RunSummaryFormatter.py | 2 +- 2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/lnst/Controller/RecipeResults.py b/lnst/Controller/RecipeResults.py index 05ce5fb..d19d6e8 100644 --- a/lnst/Controller/RecipeResults.py +++ b/lnst/Controller/RecipeResults.py @@ -38,7 +38,7 @@ class BaseResult(object): return self._success
@property - def short_desc(self): + def description(self): return "Short description of result if relevant"
@property @@ -76,8 +76,8 @@ class JobResult(BaseResult):
class JobStartResult(JobResult): """Generated automatically when a Job is succesfully started on a slave""" - @BaseResult.short_desc.getter - def short_desc(self): + @BaseResult.description.getter + def description(self): return "Job started: {}".format(str(self.job))
class JobFinishResult(JobResult): @@ -92,8 +92,8 @@ class JobFinishResult(JobResult): def success(self): return self._job.passed
- @BaseResult.short_desc.getter - def short_desc(self): + @BaseResult.description.getter + def description(self): return "Job finished: {}".format(str(self.job))
@BaseResult.data.getter @@ -105,11 +105,11 @@ class Result(BaseResult):
Will be created when the tester calls the Recipe interface for adding results.""" - def __init__(self, success, short_desc="", data=None, + def __init__(self, success, description="", data=None, level=None, data_level=None): super(Result, self).__init__(success)
- self._short_desc = short_desc + self._description = description self._data = data self._level = (level if isinstance(level, ResultLevel) @@ -118,9 +118,9 @@ class Result(BaseResult): if isinstance(data_level, ResultLevel) else ResultLevel.IMPORTANT+1)
- @BaseResult.short_desc.getter - def short_desc(self): - return self._short_desc + @BaseResult.description.getter + def description(self): + return self._description
@BaseResult.data.getter def data(self): diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py index a90efe4..ea9a6dd 100644 --- a/lnst/Controller/RunSummaryFormatter.py +++ b/lnst/Controller/RunSummaryFormatter.py @@ -105,7 +105,7 @@ class RunSummaryFormatter(object): output_lines.append("{res} {src}\t{desc}".format( res = self._format_success(res.success), src = self._format_source(res), - desc = res.short_desc)) + desc = res.description)
if res.data_level <= self._level: output_lines.extend(self._format_data(res.data))
From: Ondrej Lichtner olichtne@redhat.com
If a result description is multiline, it should be added below the header line and its lines should be indented.
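Roughly, with placeholder values for the result and source columns, the intended output looks like this (illustration only, derived from the format string in the patch below):

    <result> <source>    <single-line description stays on the header line>
    <result> <source>
        <first line of a multiline description, indented by 4 spaces>
        <second line of a multiline description>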
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/RunSummaryFormatter.py | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py index ea9a6dd..670c1f7 100644 --- a/lnst/Controller/RunSummaryFormatter.py +++ b/lnst/Controller/RunSummaryFormatter.py @@ -11,6 +11,7 @@ __author__ = """ olichtne@redhat.com (Ondrej Lichtner) """
+from lnst.Common.Utils import indent from lnst.Common.Colours import decorate_with_preset from lnst.Controller.Common import ControllerError from lnst.Controller.MachineMapper import format_match_description @@ -102,10 +103,12 @@ class RunSummaryFormatter(object): except IndexError: pass
- output_lines.append("{res} {src}\t{desc}".format( + output_lines.append("{res} {src}{desc}".format( res = self._format_success(res.success), src = self._format_source(res), - desc = res.description) + desc = ("\t{}".format(res.description) + if res.description.count('\n') == 0 + else "\n{}".format(indent(res.description, 4)))))
if res.data_level <= self._level: output_lines.extend(self._format_data(res.data))
From: Ondrej Lichtner olichtne@redhat.com
Move the NetworkFlowTest class from the IperfFlowMeasurement module to the more generic BaseFlowMeasurement module, where it can be reused by other derived classes.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- .../Perf/Measurements/BaseFlowMeasurement.py | 18 +++++++++++++++ .../Perf/Measurements/IperfFlowMeasurement.py | 23 +------------------ 2 files changed, 19 insertions(+), 22 deletions(-)
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py index 203e104..da0b5f4 100644 --- a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py +++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py @@ -52,6 +52,24 @@ class Flow(object): def parallel_streams(self): return self._parallel_streams
+class NetworkFlowTest(object): + def __init__(self, flow, server_job, client_job): + self._flow = flow + self._server_job = server_job + self._client_job = client_job + + @property + def flow(self): + return self._flow + + @property + def server_job(self): + return self._server_job + + @property + def client_job(self): + return self._client_job + class FlowMeasurementResults(object): def __init__(self, flow): self._flow = flow diff --git a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py index c792e9d..5267ae2 100644 --- a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py +++ b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py @@ -8,6 +8,7 @@ from lnst.Controller.RecipeResults import ResultLevel from lnst.RecipeCommon.Perf.Results import PerfInterval from lnst.RecipeCommon.Perf.Results import SequentialPerfResult from lnst.RecipeCommon.Perf.Results import ParallelPerfResult +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import NetworkFlowTest from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import BaseFlowMeasurement from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import FlowMeasurementResults
@@ -133,25 +134,3 @@ class IperfFlowMeasurement(BaseFlowMeasurement): else: cpu_percent = job.result["data"]["end"]["cpu_utilization_percent"]["host_total"] return PerfInterval(cpu_percent, 1, "cpu_percent") - -class NetworkFlowTest(object): - def __init__(self, flow, server_job, client_job): - self._flow = flow - self._server_job = server_job - self._client_job = client_job - - @property - def flow(self): - return self._flow - - @property - def server_job(self): - return self._server_job - - @property - def client_job(self): - return self._client_job - - @property - def duration(self): - return self._flow.duration
From: Ondrej Lichtner olichtne@redhat.com
This is used to specify which cores (a subset of the coremask parameter) the testpmd application should use for running its pmd threads.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/TestPMD.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
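For illustration only (all values made up): with pmd_coremask="0x1C", two NICs and two peer MACs, the testpmd arguments built by format_command() after this change end up roughly as

    <EAL options from the unchanged part of format_command> \
        -w 0000:00:05.0 -w 0000:00:06.0 -- -i --forward-mode mac \
        --coremask 0x1C --eth-peer 0,52:54:00:12:34:56 --eth-peer 1,52:54:00:12:34:57

where the EAL coremask selects the lcores available to the application and --coremask restricts which of them run the forwarding (pmd) threads.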
diff --git a/lnst/Tests/TestPMD.py b/lnst/Tests/TestPMD.py index dd26c88..ef975bb 100644 --- a/lnst/Tests/TestPMD.py +++ b/lnst/Tests/TestPMD.py @@ -7,6 +7,7 @@ from lnst.Tests.BaseTestModule import BaseTestModule, TestModuleError
class TestPMD(BaseTestModule): coremask = StrParam(mandatory=True) + pmd_coremask = StrParam(mandatory=True)
#TODO make ListParam nics = Param(mandatory=True) @@ -19,7 +20,8 @@ class TestPMD(BaseTestModule): for nic in self.params.nics: testpmd_args.extend(["-w", nic])
- testpmd_args.extend(["--", "-i", "--forward-mode", "mac"]) + testpmd_args.extend(["--", "-i", "--forward-mode", "mac", + "--coremask", self.params.pmd_coremask])
for i, mac in enumerate(self.params.peer_macs): testpmd_args.extend(["--eth-peer", "{},{}".format(i, mac)]) @@ -29,6 +31,7 @@ class TestPMD(BaseTestModule):
def run(self): cmd = self.format_command() + logging.debug("Running command "{}" as subprocess".format(cmd)) process = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
From: Ondrej Lichtner olichtne@redhat.com
Adding the same changes that were made on the master branch to the original ovs_dpdk_pvp.xml recipe to significantly improve the stability of the measured results. This includes:
* fixing the use of hugepages by the guest
* setting hugepages as the memory backing
* increasing the number of hugepages on host2 to enable full memory backing of the guest RAM
* configuring the pmd coremask of testpmd in the guest - this specifies which CPUs the testpmd process should use for the pmd threads that handle the NICs
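As a rough sizing note (assuming 2 MiB hugepages and that guest_mem_size is in KiB): the default guest_mem_size of 16777216 KiB is 16 GiB, i.e. 8192 hugepages just for the guest RAM, on top of the OvS-DPDK socket memory. The previous nr_hugepages default of 2048 (4 GiB) therefore could not fully back the guest memory, while the new default of 13000 pages (about 25 GiB) leaves enough headroom.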
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Recipes/ENRT/OvS_DPDK_PvP.py | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/lnst/Recipes/ENRT/OvS_DPDK_PvP.py b/lnst/Recipes/ENRT/OvS_DPDK_PvP.py index 1f963f6..860f060 100644 --- a/lnst/Recipes/ENRT/OvS_DPDK_PvP.py +++ b/lnst/Recipes/ENRT/OvS_DPDK_PvP.py @@ -60,12 +60,13 @@ class OvSDPDKPvPRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): guest_cpus = StrParam(mandatory=True) guest_emulatorpin_cpu = StrParam(mandatory=True) guest_dpdk_cores = StrParam(mandatory=True) + guest_testpmd_cores = StrParam(mandatory=True) guest_mem_size = IntParam(default=16777216)
host1_dpdk_cores = StrParam(mandatory=True) host2_pmd_cores = StrParam(mandatory=True) host2_l_cores = StrParam(mandatory=True) - nr_hugepages = IntParam(default=2048) + nr_hugepages = IntParam(default=13000) socket_mem = IntParam(default=2048)
dev_intr_cpu = IntParam(default=0) @@ -164,7 +165,8 @@ class OvSDPDKPvPRecipe(PingTestAndEvaluate, PerfTestAndEvaluate):
config.guest.testpmd = guest.run( TestPMD( - coremask=self.params.guest_dpdk_cores, + coremask=self.params.guest_testpmd_cores, + pmd_coremask=self.params.guest_dpdk_cores, nics=[nic.bus_info for nic in config.guest.nics], peer_macs=[nic.hwaddr for nic in config.generator.nics]), bg=True) @@ -318,6 +320,10 @@ class OvSDPDKPvPRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): "emulatorpin", cpuset=str(self.params.guest_emulatorpin_cpu))
+ memoryBacking = ET.SubElement(guest_xml, "memoryBacking") + hugepages = ET.SubElement(memoryBacking, "hugepages") + ET.SubElement(hugepages, "page", size="2", unit="M", nodeset="0") + return guest_xml
def ovs_dpdk_bridge_vm_configuration(self, host_conf, guest_conf):
From: Ondrej Lichtner olichtne@redhat.com
First of all this includes the reimplementation of the TRexMeasurement module and class into the TRexFlowMeasurement class that implements the BaseFlowMeasurement API and can be easily plugged into Perf.Recipe as a measurement. That said, it does have some restrictions specific to TRex:
* it still requires the trex_dir parameter telling it where to look for the TRex application.
* the measurement is port based but the configuration is flow based. Each port currently supports generation of a single flow, so that's what is expected on the configuration side. However, results are reported per port (with an association to the generated flow). It's important to note that while the "tx_rate" statistics represent the generated flow, the "rx_rate" statistics only count received packets regardless of which flow they belong to.
The OvSDPDKPvPRecipe class was updated to work with the redesigned implementation of the PerfRecipe base class and its methods for measurements and reporting results.
The OvSDPDKPvPRecipe now also requests a StatCPUMeasurement measurement for all the hosts involved in the test.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- .../Perf/Measurements/TRexFlowMeasurement.py | 151 ++++++++++++++++++ .../Perf/Measurements/TRexMeasurement.py | 87 ---------- .../Perf/Measurements/__init__.py | 1 + lnst/Recipes/ENRT/OvS_DPDK_PvP.py | 46 ++++-- 4 files changed, 185 insertions(+), 100 deletions(-) create mode 100644 lnst/RecipeCommon/Perf/Measurements/TRexFlowMeasurement.py delete mode 100644 lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py
diff --git a/lnst/RecipeCommon/Perf/Measurements/TRexFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/TRexFlowMeasurement.py new file mode 100644 index 0000000..a873577 --- /dev/null +++ b/lnst/RecipeCommon/Perf/Measurements/TRexFlowMeasurement.py @@ -0,0 +1,151 @@ +import time +import signal +from lnst.Controller.Recipe import RecipeError +from lnst.Controller.RecipeResults import ResultLevel + +from lnst.RecipeCommon.Perf.Results import PerfInterval +from lnst.RecipeCommon.Perf.Results import SequentialPerfResult +from lnst.RecipeCommon.Perf.Results import ParallelPerfResult + +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import BaseFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import NetworkFlowTest +from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import FlowMeasurementResults + +from lnst.Tests.TRex import TRexServer, TRexClient + +class TRexFlowMeasurement(BaseFlowMeasurement): + def __init__(self, flows, trex_dir): + self._flows = flows + self._trex_dir = trex_dir + self._running_measurements = [] + self._finished_measurements = [] + + def start(self): + if len(self._running_measurements) > 0: + raise MeasurementError("Measurement already running!") + + tests = self._prepare_tests(self._flows) + + result = None + for test in tests: + test.server_job.start(bg=True) + + for test in tests: + test.client_job.start(bg=True) + + self._running_measurements = tests + + def finish(self): + tests = self._running_measurements + try: + for test in tests: + client_test = test.client_job.what + test.client_job.wait(timeout=client_test.runtime_estimate()) + + test.server_job.kill(signal.SIGINT) + test.server_job.wait(5) + finally: + for test in tests: + if not test.server_job.finished: + test.server_job.kill() + if not test.client_job.finished: + test.client_job.kill() + + self._running_measurements = [] + self._finished_measurements = tests + + def _prepare_tests(self, flows): + tests = [] + + flows_by_generator = self._flows_by_generator(flows) + for generator, flows in flows_by_generator.items(): + flow_tuples = [(flow.generator_bind, flow.receiver_bind) + for flow in flows] + server_job = generator.prepare_job( + TRexServer( + trex_dir=self._trex_dir, + flows=flow_tuples, + cores=["2", "3", "4"])) + client_job = generator.prepare_job( + TRexClient( + trex_dir=self._trex_dir, + ports=range(len(flow_tuples)), + flows=flow_tuples, + duration=flows[0].duration, + msg_size=flows[0].msg_size)) + + test = NetworkFlowTest(flows, server_job, client_job) + tests.append(test) + return tests + + def collect_results(self): + tests = self._finished_measurements + + results = [] + for test in tests: + for port, flow in enumerate(test.flow): + flow_results = self._parse_results_by_port( + test.client_job, port, flow) + results.append(flow_results) + + return results + + def _flows_by_generator(self, flows): + result = dict() + for flow in flows: + if flow.generator in result: + result[flow.generator].append(flow) + else: + result[flow.generator] = [flow] + + for generator, flows in result.items(): + for flow in flows: + if (flow.duration != flows[0].duration or + flow.msg_size != flows[0].msg_size): + raise MeasurementError("Flows on the same generator need to have the same duration and msg_size at the moment") + return result + + def _parse_results_by_port(self, job, port, flow): + results = FlowMeasurementResults(flow) + results.generator_results = SequentialPerfResult() + results.generator_cpu_stats = SequentialPerfResult() + + 
results.receiver_results = SequentialPerfResult() + results.receiver_cpu_stats = SequentialPerfResult() + + if not job.passed: + results.generator_results.append(PerfInterval(0, 0, "packets")) + results.generator_cpu.append(PerfInterval(0, 0, "cpu_percent")) + results.receiver_results.append(PerfInterval(0, 0, "packets")) + results.receiver_cpu.append(PerfInterval(0, 0, "cpu_percent")) + else: + prev_time = job.result["start_time"] + prev_tx_val = 0 + prev_rx_val = 0 + for i in job.result["data"]: + time_delta = i["timestamp"] - prev_time + tx_delta = i["measurement"][port]["opackets"] - prev_tx_val + rx_delta = i["measurement"][port]["ipackets"] - prev_rx_val + results.generator_results.append(PerfInterval( + tx_delta, + time_delta, + "pkts")) + results.receiver_results.append(PerfInterval( + rx_delta, + time_delta, + "pkts")) + + prev_time = i["timestamp"] + prev_tx_val = i["measurement"][port]["opackets"] + prev_rx_val = i["measurement"][port]["ipackets"] + + cpu_delta = i["measurement"]["global"]["cpu_util"] + results.generator_cpu_stats.append(PerfInterval( + cpu_delta, + time_delta, + "cpu_percent")) + results.receiver_cpu_stats.append(PerfInterval( + cpu_delta, + time_delta, + "cpu_percent")) + return results diff --git a/lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py deleted file mode 100644 index 96abdc2..0000000 --- a/lnst/RecipeCommon/Perf/Measurements/TRexMeasurement.py +++ /dev/null @@ -1,87 +0,0 @@ -import time -import signal -import logging -from lnst.Common.IpAddress import ipaddress -from lnst.Controller.Recipe import RecipeError -from lnst.Controller.RecipeResults import ResultLevel -from lnst.RecipeCommon.Perf import PerfConf, PerfMeasurementTool -from lnst.RecipeCommon.PerfResult import PerfInterval, StreamPerf -from lnst.RecipeCommon.PerfResult import MultiStreamPerf - -from lnst.Tests.TRex import TRexServer, TRexClient - -class TRexMeasurementTool(PerfMeasurementTool): - def __init__(self, trex_dir): - self._trex_dir = trex_dir - - def perf_measure(self, perf_conf): - generator = perf_conf.generator - - flows = [] - for src, dst in zip(perf_conf.generator_bind, perf_conf.receiver_bind): - flows.append(( - dict(mac_addr=src.hwaddr, - pci_addr=src.bus_info, - ip_addr=src.ips[0]), - dict(mac_addr=dst.hwaddr, - pci_addr=dst.bus_info, - ip_addr=dst.ips[0]))) - - try: - server = generator.run( - TRexServer( - trex_dir=self._trex_dir, - flows=flows, - cores=["2", "3", "4"]), - bg=True) - - #wait for server to start up - #TODO better options?? 
- time.sleep(5) - - test = TRexClient( - trex_dir=self._trex_dir, - ports=range(len(flows)), - flows=flows, - duration=perf_conf.duration, - msg_size=perf_conf.msg_size) - client = generator.run( - test, - timeout=test.runtime_estimate()) - finally: - server.kill(signal.SIGINT) - if not server.wait(5): - server.kill(signal.SIGKILL) - - client_result = None - if client.passed: - tx_result = MultiStreamPerf() - rx_result = MultiStreamPerf() - for port in range(len(flows)): - tx_stream = StreamPerf() - rx_stream = StreamPerf() - - prev_time = client.result["start_time"] - prev_tx_val = 0 - prev_rx_val = 0 - for i in client.result["data"]: - time_delta = i["timestamp"] - prev_time - tx_delta = i["measurement"][port]["opackets"] - prev_tx_val - rx_delta = i["measurement"][port]["ipackets"] - prev_rx_val - tx_stream.append(PerfInterval( - tx_delta, - time_delta, - "pkts")) - rx_stream.append(PerfInterval( - rx_delta, - time_delta, - "pkts")) - - prev_time = i["timestamp"] - prev_tx_val = i["measurement"][port]["opackets"] - prev_rx_val = i["measurement"][port]["ipackets"] - - tx_result.append(tx_stream) - rx_result.append(rx_stream) - - return tx_result, rx_result diff --git a/lnst/RecipeCommon/Perf/Measurements/__init__.py b/lnst/RecipeCommon/Perf/Measurements/__init__.py index 781e641..0b98cd6 100644 --- a/lnst/RecipeCommon/Perf/Measurements/__init__.py +++ b/lnst/RecipeCommon/Perf/Measurements/__init__.py @@ -1,3 +1,4 @@ from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import Flow from lnst.RecipeCommon.Perf.Measurements.IperfFlowMeasurement import IperfFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements.TRexFlowMeasurement import TRexFlowMeasurement from lnst.RecipeCommon.Perf.Measurements.StatCPUMeasurement import StatCPUMeasurement diff --git a/lnst/Recipes/ENRT/OvS_DPDK_PvP.py b/lnst/Recipes/ENRT/OvS_DPDK_PvP.py index 860f060..aa8af24 100644 --- a/lnst/Recipes/ENRT/OvS_DPDK_PvP.py +++ b/lnst/Recipes/ENRT/OvS_DPDK_PvP.py @@ -10,8 +10,12 @@ from lnst.Common.IpAddress import ipaddress from lnst.RecipeCommon.Ping import PingTestAndEvaluate, PingConf from lnst.Tests import Ping from lnst.Tests.TestPMD import TestPMD -from lnst.RecipeCommon.Perf import PerfTestAndEvaluate, PerfConf -from lnst.RecipeCommon.TRexMeasurementTool import TRexMeasurementTool + +from lnst.RecipeCommon.Perf.Recipe import Recipe as PerfRecipe +from lnst.RecipeCommon.Perf.Recipe import RecipeConf as PerfRecipeConf +from lnst.RecipeCommon.Perf.Measurements import Flow as PerfFlow +from lnst.RecipeCommon.Perf.Measurements import TRexFlowMeasurement +from lnst.RecipeCommon.Perf.Measurements import StatCPUMeasurement
from lnst.RecipeCommon.LibvirtControl import LibvirtControl
@@ -43,7 +47,7 @@ class PvPTestConf(object): self.dut = self.DUTConf() self.guest = self.GuestConf()
-class OvSDPDKPvPRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): +class OvSDPDKPvPRecipe(PingTestAndEvaluate, PerfRecipe): m1 = HostReq() m1.eth0 = DeviceReq(label="net1", driver=RecipeParam("driver")) m1.eth1 = DeviceReq(label="net1", driver=RecipeParam("driver")) @@ -72,6 +76,8 @@ class OvSDPDKPvPRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): dev_intr_cpu = IntParam(default=0)
+ cpu_perf_tool = Param(default=StatCPUMeasurement) + perf_duration = IntParam(default=60) perf_iterations = IntParam(default=5) perf_msg_size = IntParam(default=64) @@ -132,7 +138,7 @@ class OvSDPDKPvPRecipe(PingTestAndEvaluate, PerfTestAndEvaluate):
perf_config = self.generate_perf_config(config) result = self.perf_test(perf_config) - self.perf_evaluate_and_report(perf_config, result, baseline=None) + self.perf_report_and_evaluate(result) finally: self.test_wide_deconfiguration(config)
@@ -175,18 +181,32 @@ class OvSDPDKPvPRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): return config
def generate_perf_config(self, config): - conf = PerfConf( - perf_tool = TRexMeasurementTool(self.params.trex_dir), - test_type = "pvp_loop_rate", + flows = [] + for src_nic, dst_nic in zip(config.generator.nics, config.dut.nics): + src_bind = dict(mac_addr=src_nic.hwaddr, + pci_addr=src_nic.bus_info, + ip_addr=src_nic.ips[0]) + dst_bind = dict(mac_addr=dst_nic.hwaddr, + pci_addr=dst_nic.bus_info, + ip_addr=dst_nic.ips[0]) + flows.append(PerfFlow( + type = "pvp_loop_rate", generator = config.generator.host, - generator_bind = config.generator.nics, + generator_bind = src_bind, receiver = config.dut.host, - receiver_bind = config.dut.nics, + receiver_bind = dst_bind, msg_size = self.params.perf_msg_size, duration = self.params.perf_duration, - iterations = self.params.perf_iterations, - streams = self.params.perf_streams) - return conf + parallel_streams = self.params.perf_streams)) + + return PerfRecipeConf( + measurements=[ + self.params.cpu_perf_tool([config.generator.host, + config.dut.host, + config.guest.host]), + TRexFlowMeasurement(flows, self.params.trex_dir) + ], + iterations=self.params.perf_iterations)
def test_wide_deconfiguration(self, config): try: @@ -358,7 +378,7 @@ class OvSDPDKPvPRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): guest_ip_job = host.run("gethostip -d {}".format(guest_conf.name)) guest_ip = guest_ip_job.stdout.strip()
- guest = self.ctl.connect_host(guest_ip, timeout=60) + guest = self.ctl.connect_host(guest_ip, timeout=60, machine_id="guest1") guest_conf.host = guest
for i, nic in enumerate(guest_conf.vhost_nics):
Wed, Nov 14, 2018 at 04:04:53PM CET, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
First of all this includes the reimplementation of the TRexMeasurement module and class into the TRexFlowMeasurement class that implements the BaseFlowMeasurement API and can be easily plugged into Perf.Recipe as a measurement. That said, it does have some restrictions specific to TRex:
- it still requires the trex_dir parameter telling it where to look for the TRex application.
- the measurement is port based but the configuration is flow based. Each port currently supports generation of a single flow, so that's what is expected on the configuration side. However, results are reported per port (with an association to the generated flow). It's important to note that while the "tx_rate" statistics represent the generated flow, the "rx_rate" statistics only count received packets regardless of which flow they belong to.
The OvSDPDKPvPRecipe class was updated to work with the redesigned implementation of the PerfRecipe base class and its methods for measurements and reporting results.
The OvSDPDKPvPRecipe now also requests a StatCPUMeasurement measurement for all the hosts involved in the test.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
Tried to apply the whole patchset and got a trailing whitespace error:
Applying: refactoring of OvS_DPDK_PvP recipe and related classes .git/rebase-apply/patch:143: trailing whitespace. rx_delta = i["measurement"][port]["ipackets"] - prev_rx_val warning: 1 line adds whitespace errors.
Wed, Nov 14, 2018 at 04:04:33PM CET, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
Hi all,
this is a second version of this patchset, v2 changes include:
- the Namespace.run method reuses other methods instead of duplicating
code
- CPUStatMonitor comment explains the interval parameter
- CPUStatMonitor bugfix for signal handling
- fixed typo in IperfFlowMeasurement unit for cpu utilization
- move and reimplementation of the TRex measurement class to fit into
the whole redesign of the lnst.RecipeCommon.Perf package
- updates to the OvSDPDKPVPRecipe:
- stability improvements
- refactoring to use the redesigned lnst.RecipeCommon.Perf package
Thanks, -Ondrej
Besides my comments on the individual patches, I went over all of them. Looks good.
Acked-by: Jan Tluka jtluka@redhat.com