NEW API Discussion
by Ondrej Lichtner
Hi all,
for the past couple of weeks I've been going over the recordings of the
meetings we've had wrt the new Python API of LNST. I've been collecting
everything into a single file that I'm appending to this email. I'm
sending it here so that everyone can join the discussion before the
implementation itself begins. I'll warn you though... it's LONG :)
!!!NOTE it's not complete yet - I'm sending it now because we have an
upstream meeting planned for later today. Most notably, the Device/Interface
API is not complete.
The structure of the file is as follows:
1. commented pseudo code of what Test Modules will look like - they'll be
instantiated on the Controller and sent ad hoc to the slave where
they'll be executed --> no more synchronization on test start...
2. commented pseudo code of what Tasks will look like - they'll define
both the network requirements and the test execution.
3. short rough idea of how the tests/recipes will be executed.
4. 1st version of the API "specification"/documentation. Here I tried to
go through the *API objects we currently have and make them more
"Pythonic", thinking of how they'll be used from a Task. I tried writing
it as class-method-attribute definitions with some documentation so
hopefully it makes some sense... Like I've said before,
Device/Interfaces are not complete so there's a lot missing there.
Please take a look and provide feedback. I'm sure there are other parts
in addition to the Device/Interface APIs that are missing something, so I'd
appreciate any help :).
================================================================================
new_api file:
1. test modules
class BaseTestModule:
    def __init__(self, **kwargs):
        #by default loads the params into self.params - no checks, pseudocode:
        for x in vars(self.__class__):
            param_class = getattr(self, x)
            if isinstance(param_class, BaseType):
                try:
                    val = kwargs.pop(x)
                    setattr(self.params, x, param_class.construct(val))
                except KeyError:
                    if param_class.is_mandatory():
                        raise TestModuleError("Option %s is mandatory" % x)
        for x in kwargs.keys():
            log.error("Undefined parameter %s" % x)
        if len(kwargs):
            raise TestModuleError("Undefined TestModule parameters")

    def run(self):
        #needs to be overridden - throw an exception to notify the test developer
        raise NotImplementedError("TestModules must define the run method")
class MyTest(BaseTestModule):
    param = ParamType()
    param2 = ParamType2()
    param3 = Multiparam(ParamType())

    #optional __init__
    #def __init__(self, **kwargs):
    #    super(MyTest, self).__init__(**kwargs)
    #    #additional tester defined checks

    def run(self):
        #do my test
        #parameters available in self.params
#in Task:
import lnst
#module lnst.modules will dynamically look for module classes in configured
#locations, similar to how we do it now
ping = lnst.modules.Ping(dst=m2.if1.ip[0], count=100, interval=0.1)
m1.run(ping)
================================================
2. Tasks:
class BaseTask(object):
    def __init__(self):
        #initialize instance specific requirements from the class-wide ones
        self.requirements = Requirements()
        for x in dir(self):
            val = getattr(self, x)
            if isinstance(val, HostSel):
                setattr(self.requirements, x, val)

    def test(self):
        raise Exception("Method test MUST be defined.")
class MyTask(lnst.BaseTask):
    #class-wide definition of requirements
    m1 = HostSel(param="val", ...)
    m1.if1 = IfaceSel(l2net="xyz", param="val", ...)
    m2 = HostSel(param="val", ...)
    m2.if1 = IfaceSel(l2net="xyz", param="val", ...)

    def __init__(self, **kwargs):
        super(MyTask, self).__init__()
        #do something with kwargs
        #adjust instance specific requirements
        self.requirements.m3 = HostSel(...)

    def test(self):
        self.matched.m1.run(Module)
        self.matched.m1.run("command")

    #or
    def test(self, m1, m2):
        m1.run(Module)
        m2.run("command")
================================================
3. Running Tasks:
from MyTasks import MyTask
import lnst
task_instance = MyTask(params)
lnst(args)
lnst.run(task_instance)
OR
lnst-ctl -d run MyTask.py -- task_params
# looks for the NAME class in the NAME.py file (MyTask in this case), for which
# the condition "isinstance(NAME, BaseTask)" must be True
# could also run all classes in the file where "isinstance(x, BaseTask)" is
# True, with the option to restrict to a specific task class (or just run the
# first one?)... lnst-ctl rewritten to do the same as manually running the
# task from its own python script
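A rough sketch of that class lookup, under the assumption that we load the
file as a python module (the helper name is illustrative only):

import imp
import inspect

import lnst

def find_task_classes(path):
    #load MyTask.py as a python module and collect all BaseTask subclasses
    mod = imp.load_source("task_module", path)
    return [cls for _, cls in inspect.getmembers(mod, inspect.isclass)
            if issubclass(cls, lnst.BaseTask) and cls is not lnst.BaseTask]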
First do the second option - easier since we have this already, then refactor
the controller to create the lnst controller for the first option.
Aliases lose meaning - they become parameters passed to the MyTask __init__. When
using the lnst-ctl CLI, use "-- task_params"?? might not work for multiple tasks.
================================================
4. Tester facing API, inside the test() method:
Host objects available in self.matched.selector_name:
class Host: #HostAPI??? name can change
#attributes:
# dynamically filled object of Host attributes such as architecture and
# so on. Use example in test() would look like this:
# if host.params.arch == "x86":
# I separated this into the "params" object so I can overwrite its
# __getattr__ method and return None/UnknownParam exception for unknown
# parameters, and to avoid name conflicts with other attributes
params = object()
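# a minimal sketch of how the "params" object could behave - the class name
# and the UnknownParam exception are placeholders, not a final API:
#
# class Params(object):
#     def __init__(self, param_dict):
#         object.__setattr__(self, "_params", param_dict)
#     def __getattr__(self, name):
#         #only called when normal attribute lookup fails
#         try:
#             return self._params[name]
#         except KeyError:
#             return None  #or: raise UnknownParam(name)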
# dynamically filled object of NetDevice objects accessible directly as the
# object attributes:
# host.devs.eth0.set_ip(...)
# I separated this into the "devs" object to avoid name conflicts with
# other attributes
# creation of new NetDevices should be possible through simple assignment:
# m1.devs.new_team0 = TeamDevice(...)
# assignment of an incompatible type or to an existing Device object will
# raise an exception
# assignment of None? or del devs.new_team0 to deconfigure the device?
devs = object()
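# a hypothetical sketch of the assignment checks described above - the
# container class and the exception are illustrative only:
#
# class DeviceContainer(object):
#     def __setattr__(self, name, value):
#         if not isinstance(value, Device):
#             raise DeviceError("%s is not a Device instance" % name)
#         if name in self.__dict__:
#             raise DeviceError("Device %s already exists" % name)
#         object.__setattr__(self, name, value)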
def run(what, bg=False, fail=False, timeout=60, path="", tool="", json=False, netns=None)
# will run "what" on the remote host
# "what" is either a Module object, or a string command that will be
# executed as a bash command
# "bg" when True, runs "what" on background - the run() call
# immediately returns, and "timeout" is ignored, the background
# process can be controlled through the returned Job object
# "fail" if True then the Job is expected to fail, and will be reported
# as PASSed if it does
# "timeout" in seconds, determines how long to block test execution for
# before killing the Job. Only when running in foreground
# "path" changes the current working directory to the specified path
# before "what" is executed and changes back after execution is
# finished.
# "tool" changes the current working directory to the directory of a
# speficied test_tool before "what" is executed and changes back
# after execution is finished.
# !!!!!!! this is from the current API and i'm not yet sure how we
# !!!!!!! want to handle those... so for now I'll keep it
# "json" if True will attempt to parse the returned stdout of the Job
# as json into a dictionary
# "netns" Job will be run in the specified network namespace
# Returns a Job object
def config(option, value)
# copied from old API, provides a shortcut for "echo $value >/proc/or/sys/path"
# and restores the original value when the test is finished
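# usage sketch (the path and value are illustrative):
# m1.config("/proc/sys/net/ipv4/ip_forward", "1")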
def sync_resources(srcpath="", dstpath="", recursive=False)
# copies the specified file from the controller to the specified
# destination path, if recursive == True and srcpath refers to a
# directory it copies the entire directory
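# usage sketch (paths are illustrative):
# m1.sync_resources(srcpath="tools/my_tool/", dstpath="/tmp/my_tool/",
#                   recursive=True)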
def {enable, disable}_service(service)
# copied from old API, enables or disables the specified service
def add_{bond, bridge,...}(params)
# this is how we currently dynamically create net devices on the
# hosts. Even with the new assignment-based approach this could still
# be useful, though the method would need to be dynamically created to
# avoid useless work when adding a new netdev type. Something like:
# add_device("name", "Type", params) which would then do
# self.devs.name = TypeDevice(params) ??
def del_device(name)
# removes the specified device, probably easier (more logical?) to do
# this than "devs.name = None", and "del devs.name" would be unreliable
class Device: #DeviceAPI, InterfaceAPI? name can change...
# attributes:
# dynamically created Device attributes such as driver and so on. Use
# example in test() would look like this:
# if host.devs.eth0.driver == "ixgbe":
# achieved through rewriting of the __getattr__ method of the Device class
# should return None or throw UnknownParam exception for unknown parameters
# this should directly mirror the Device objects that are managed by the
# InterfaceManager on the Slave
# eg:
driver = something
mtu = something
ips = [IpAddress, ...]
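# usage sketch inside test(), assuming a matched host "m1":
# eth0 = self.matched.m1.devs.eth0
# if eth0.driver == "ixgbe" and eth0.mtu >= 1500:
#     for ip in eth0.ips:
#         log.info("address: %s" % ip)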
class Job: #ProcessAPI? name can change...
#attributes:
# True if the Job finished, False if it's still running in the background
finished = bool
# contains the result data returned by the Job, None for bash commands
result = object
# contain the stdout and stderr generated by the job, None for Module Jobs
stdout = ""
stderr = ""
# simple True/False value indicating success/failure of the Job
passed = bool
def wait(timeout=0):
# for background jobs, will wait until the job finishes
# "timeout" in seconds, determines how long to wait for. After the timeout
# is reached, nothing happens; the status of the job can be checked with the
# "finished" attribute. If timeout=0, then wait forever.
def kill(signalnum=signal.SIGKILL):
# sends the specified signal to the process of the Job running in
# background
# "signalnum" the signal to be sent
[PATCH] recipes: fix baseline configuration for IPv6 SCTP tests
by Jan Tluka
A typo caused SCTP netperf measurements not to be compared to baselines.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
recipes/regression_tests/phase1/3_vlans.py | 2 +-
recipes/regression_tests/phase1/3_vlans_over_bond.py | 2 +-
recipes/regression_tests/phase1/bonding_test.py | 2 +-
recipes/regression_tests/phase1/simple_netperf.py | 2 +-
recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py | 2 +-
recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py | 2 +-
recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py | 2 +-
recipes/regression_tests/phase2/3_vlans_over_team.py | 2 +-
recipes/regression_tests/phase2/team_test.py | 4 ++--
.../phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py | 2 +-
recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py | 2 +-
recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py | 2 +-
12 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/recipes/regression_tests/phase1/3_vlans.py b/recipes/regression_tests/phase1/3_vlans.py
index 0acff75..1d2cee8 100644
--- a/recipes/regression_tests/phase1/3_vlans.py
+++ b/recipes/regression_tests/phase1/3_vlans.py
@@ -442,7 +442,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = m2.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase1/3_vlans_over_bond.py b/recipes/regression_tests/phase1/3_vlans_over_bond.py
index a9705c0..49f35a9 100644
--- a/recipes/regression_tests/phase1/3_vlans_over_bond.py
+++ b/recipes/regression_tests/phase1/3_vlans_over_bond.py
@@ -442,7 +442,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = m2.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase1/bonding_test.py b/recipes/regression_tests/phase1/bonding_test.py
index 711d6ae..e293a79 100644
--- a/recipes/regression_tests/phase1/bonding_test.py
+++ b/recipes/regression_tests/phase1/bonding_test.py
@@ -421,7 +421,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = m2.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase1/simple_netperf.py b/recipes/regression_tests/phase1/simple_netperf.py
index 255341b..7e30955 100644
--- a/recipes/regression_tests/phase1/simple_netperf.py
+++ b/recipes/regression_tests/phase1/simple_netperf.py
@@ -379,7 +379,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = m2.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
index 952eca5..265d038 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
@@ -505,7 +505,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = g3.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py
index 4cf725b..81e2cb9 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py
@@ -434,7 +434,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = h2.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py
index 8306c99..c7cb984 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py
@@ -434,7 +434,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = h2.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase2/3_vlans_over_team.py b/recipes/regression_tests/phase2/3_vlans_over_team.py
index 5e5af9f..89af6a0 100644
--- a/recipes/regression_tests/phase2/3_vlans_over_team.py
+++ b/recipes/regression_tests/phase2/3_vlans_over_team.py
@@ -442,7 +442,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = m2.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase2/team_test.py b/recipes/regression_tests/phase2/team_test.py
index 027de98..db327b3 100644
--- a/recipes/regression_tests/phase2/team_test.py
+++ b/recipes/regression_tests/phase2/team_test.py
@@ -433,7 +433,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = m2.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
@@ -676,7 +676,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = m1.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
index 6d6743f..3c1fd39 100644
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
+++ b/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
@@ -508,7 +508,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = g3.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py
index 3eaf75c..36f48a8 100644
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py
+++ b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py
@@ -438,7 +438,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = h2.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
index 8af2f57..1d473c7 100644
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
+++ b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
@@ -436,7 +436,7 @@ for setting in offload_settings:
result_sctp.set_parameter("num_parallel", nperf_num_parallel)
baseline = perf_api.get_baseline_of_result(result_sctp)
- netperf_baseline_template(netperf_cli_sctp, baseline)
+ netperf_baseline_template(netperf_cli_sctp6, baseline)
sctp_res_data = h2.run(netperf_cli_sctp6,
timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
--
2.7.4
[PATCH] regression-tests: add ipsec_esp_ah_comp test
by Kamil Jerabek
This patch adds a new test to our regression_tests phase3. The topology
is the same as in the phase1 simple_netperf test.
This test covers performance of ipsec over ethernet. Ping and netperf are run.
Both tunnel and transport modes are covered. All tests are done with the esp,
ah and comp options set together. All combinations of the cipher and hash
functions listed below are tested. The netperf message size is set to 1400
bytes by default.
ciphers: aes, des, des3_ede, cast5, blowfish, serpent, twofish
hash functions: hmac(md5), sha1, sha256
Signed-off-by: Kamil Jerabek <kjerabek(a)redhat.com>
---
.../phase3/ipsec_esp_ah_comp.README | 98 ++++
.../regression_tests/phase3/ipsec_esp_ah_comp.py | 586 +++++++++++++++++++++
.../regression_tests/phase3/ipsec_esp_ah_comp.xml | 50 ++
3 files changed, 734 insertions(+)
create mode 100644 recipes/regression_tests/phase3/ipsec_esp_ah_comp.README
create mode 100644 recipes/regression_tests/phase3/ipsec_esp_ah_comp.py
create mode 100644 recipes/regression_tests/phase3/ipsec_esp_ah_comp.xml
diff --git a/recipes/regression_tests/phase3/ipsec_esp_ah_comp.README b/recipes/regression_tests/phase3/ipsec_esp_ah_comp.README
new file mode 100644
index 0000000..5246c33
--- /dev/null
+++ b/recipes/regression_tests/phase3/ipsec_esp_ah_comp.README
@@ -0,0 +1,98 @@
+Topology:
+
+ switch
+ +------+
+ | |
+ | |
+ +-------------+ +-------------+
+ | | | |
+ | | | |
+ | +------+ |
+ | |
+ | |
+ +-+--+ +-+--+
++-------|eth1|------+ +-------|eth1|------+
+| +-+--+ | | +-+--+ |
+| | | |
+| | | |
+| | | |
+| | | |
+| | | |
+| host1 | | host2 |
+| | | |
+| | | |
+| | | |
++-------------------+ +-------------------+
+
+Number of hosts: 2
+Host #1 description:
+ One ethernet device configured with ip addresses:
+ 192.168.99.1/24
+ fc00:1::1/64
+
+Host #2 description:
+ One ethernet device configured with ip addresses:
+ 192.168.100.1/24
+ fc00:2::1/64
+
+Test name:
+ ipsec_esp_ah_comp.py
+Test description:
+ Ping:
+ + count: 10
+ + interval: 0.1s
+ + between ipsec encrypted ethernet interfaces expecting PASS
+ Ping6:
+ + count: 10
+ + interval: 0.1s
+ + between ipsec encrypted ethernet interfaces expecting PASS
+ Netperf:
+ + duration: 60s
+ + TCP_STREAM and UDP_STREAM
+ + ipv4 and ipv6
+ + between ipsec encrypted ethernet interfaces
+ IPsec
+ + tested with esp, ah, comp options together
+ + tested with all listed ciphers and hash functions
+ + ciphers
+ + aes
+ + des
+ + des3_ede
+ + cast5
+ + blowfish
+ + serpent
+ + twofish
+ + hash functions
+ + hmac(md5)
+ + sha1
+ + sha256
+
+PerfRepo integration:
+ First, preparation in PerfRepo is required - you need to create Test objects
+ through the web interface that properly describe the individual Netperf
+ tests that this recipe runs. Don't forget to also add appropriate metrics.
+ For these Netperf tests it's always:
+ * throughput
+ * throughput_min
+ * throughput_max
+ * throughput_deviation
+
+ After that, to enable support for PerfRepo you need to create the file
+ ipsec_transport_esp_ah_comp.mapping and define the following id mappings:
+ tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object
+ tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object
+ udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object
+ udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object
+
+ To enable result comparison against baselines you need to create a Report in
+ PerfRepo that will store the baseline. Set up the Report to only contain results
+ with the same hash tag and then add a new mapping to the mapping file, with
+ this format:
+ <some_hash> = <report_id>
+
+ The hash value is automatically generated during test execution and added
+ to each result stored in PerfRepo. To get the Report id you need to open
+ that report in your browser and find it in the URL.
+
+ When running this recipe you should also define the 'product_name' alias
+ (e.g. RHEL7) in order to tag the result object in PerfRepo.
diff --git a/recipes/regression_tests/phase3/ipsec_esp_ah_comp.py b/recipes/regression_tests/phase3/ipsec_esp_ah_comp.py
new file mode 100644
index 0000000..b99dc41
--- /dev/null
+++ b/recipes/regression_tests/phase3/ipsec_esp_ah_comp.py
@@ -0,0 +1,586 @@
+from lnst.Controller.Task import ctl
+from lnst.Controller.PerfRepoUtils import perfrepo_baseline_to_dict
+from lnst.Controller.PerfRepoUtils import netperf_result_template
+
+from lnst.RecipeCommon.ModuleWrap import ping, ping6, netperf
+from lnst.RecipeCommon.IRQ import pin_dev_irqs
+from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
+import re
+
+# ---------------------------
+# ALGORITHM AND CIPHER CONFIG
+# ---------------------------
+
+ciphers = {}
+
+def generate_key(length):
+ key = "0x"
+ key = key + length * "0b"
+ return key
+
+ciphers['aes'] = generate_key(16)
+ciphers['des'] = generate_key(8)
+ciphers['des3_ede'] = generate_key(24)
+ciphers['cast5'] = generate_key(16)
+ciphers['blowfish'] = generate_key(56)
+ciphers['serpent'] = generate_key(32)
+ciphers['twofish'] = generate_key(16)
+
+hashes = {}
+
+hashes['hmac(md5)'] = generate_key(16)
+hashes['sha1'] = generate_key(16)
+hashes['sha256'] = generate_key(16)
+
+# these do not work on RHEL6.6
+#hashes['sha384'] = generate_key(16)
+#hashes['sha512'] = generate_key(16)
+
+thresholds = {
+ 'aes': [ 100, 200 ],
+ 'des': [ 50, 80 ],
+ 'des3_ede': [ 80, 120 ],
+ 'cast5': [ 100, 150 ],
+ 'blowfish': [ 120, 200 ],
+ 'serpent': [ 120, 200 ],
+ 'twofish': [ 100, 250 ]
+}
+
+# ------
+# SETUP
+# ------
+
+mapping_file = ctl.get_alias("mapping_file")
+perf_api = ctl.connect_PerfRepo(mapping_file)
+
+product_name = ctl.get_alias("product_name")
+
+m1 = ctl.get_host("machine1")
+m2 = ctl.get_host("machine2")
+
+m1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf", "Custom"])
+m2.sync_resources(modules=["PacketAssert", "IcmpPing", "Icmp6Ping", "Netperf"])
+
+# ------
+# TESTS
+# ------
+
+ipv = ctl.get_alias("ipv")
+mtu = ctl.get_alias("mtu")
+netperf_duration = int(ctl.get_alias("netperf_duration"))
+nperf_reserve = int(ctl.get_alias("nperf_reserve"))
+nperf_confidence = ctl.get_alias("nperf_confidence")
+nperf_max_runs = int(ctl.get_alias("nperf_max_runs"))
+nperf_cpupin = ctl.get_alias("nperf_cpupin")
+nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
+nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
+nperf_debug = ctl.get_alias("nperf_debug")
+nperf_max_dev = ctl.get_alias("nperf_max_dev")
+nperf_msg_size = ctl.get_alias("nperf_msg_size")
+pr_user_comment = ctl.get_alias("perfrepo_comment")
+ipsec_mode = ctl.get_alias("ipsec_mode")
+
+pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment)
+
+m1_if = m1.get_interface("eth")
+m2_if = m2.get_interface("eth")
+
+m1_if_name = m1_if.get_devname()
+m2_if_name = m2_if.get_devname()
+
+m1_if_addr = m1_if.get_ip()
+m2_if_addr = m2_if.get_ip()
+
+m1_if_addr6 = m1_if.get_ip(1)
+m2_if_addr6 = m2_if.get_ip(1)
+
+
+# add routing rulez ipv4
+# so the rtr knows where to send traffic destined to remote site
+m1.run("ip route add %s dev %s" % (m2_if_addr, m1_if_name))
+
+# so the rtr knows where to send traffic destined to remote site
+m2.run("ip route add %s dev %s" % (m1_if_addr, m2_if_name))
+
+# add routing rulez ipv6
+# so the rtr knows where to send traffic destined to remote site
+m1.run("ip route add %s dev %s" % (m2_if_addr6, m1_if_name))
+
+# so the rtr knows where to send traffic destined to remote site
+m2.run("ip route add %s dev %s" % (m1_if_addr6, m2_if_name))
+
+if nperf_msg_size is None:
+ nperf_msg_size = 1400
+
+if ipsec_mode is None:
+ ipsec_mode = "transport"
+
+res = m1.run("rpm -qa iproute", save_output=True)
+if (res.get_result()["res_data"]["stdout"].find("iproute-2") != -1):
+ m1_key="0x"
+else:
+ m1_key=""
+
+res = m2.run("rpm -qa iproute", save_output=True)
+if (res.get_result()["res_data"]["stdout"].find("iproute-2") != -1):
+ m2_key="0x"
+else:
+ m2_key=""
+
+if nperf_cpupin:
+ m1.run("service irqbalance stop")
+ m2.run("service irqbalance stop")
+
+ dev_list = [(m1, m1_if), (m2, m2_if)]
+
+ # this will pin devices irqs to cpu #0
+ for m, d in dev_list:
+ pin_dev_irqs(m, d, 0)
+
+nperf_opts = ""
+if nperf_cpupin and nperf_num_parallel == 1:
+ nperf_opts = " -T%s,%s" % (nperf_cpupin, nperf_cpupin)
+
+ctl.wait(15)
+
+def configure_ipsec(ciph_alg, ciph_key, hash_alg, hash_key, ip_version):
+ if ip_version == "ipv4":
+ m1_addr = m1_if_addr
+ m2_addr = m2_if_addr
+ else:
+ m1_addr = m1_if_addr6
+ m2_addr = m2_if_addr6
+
+ # configure policy and state
+ m1.run("ip xfrm policy flush")
+ m1.run("ip xfrm state flush")
+ m2.run("ip xfrm policy flush")
+ m2.run("ip xfrm state flush")
+
+ m1.run("ip xfrm policy add src %s dst %s dir out "\
+ "tmpl src %s dst %s proto comp spi 4 mode %s "\
+ "tmpl src %s dst %s proto esp spi 2 mode %s "\
+ "tmpl src %s dst %s proto ah spi 3 mode %s"
+ % (m1_addr, m2_addr,
+ m1_addr, m2_addr, ipsec_mode,
+ m1_addr, m2_addr, ipsec_mode,
+ m1_addr, m2_addr, ipsec_mode))
+ m1.run("ip xfrm policy add src %s dst %s dir in "\
+ "tmpl src %s dst %s proto comp spi 1 mode %s level use "\
+ "tmpl src %s dst %s proto esp spi 2 mode %s "\
+ "tmpl src %s dst %s proto ah spi 3 mode %s"
+ % (m2_addr, m1_addr,
+ m2_addr, m1_addr, ipsec_mode,
+ m2_addr, m1_addr, ipsec_mode,
+ m2_addr, m1_addr, ipsec_mode))
+
+ m1.run("ip xfrm state add "\
+ "src %s dst %s proto comp spi 4 mode %s "\
+ "comp deflate"\
+ % (m1_addr, m2_addr, ipsec_mode))
+ m1.run("ip xfrm state add "\
+ "src %s dst %s proto comp spi 1 mode %s "\
+ "comp deflate"\
+ % (m2_addr, m1_addr, ipsec_mode))
+
+ m1.run("ip xfrm state add "\
+ "src %s dst %s proto esp spi 2 mode %s "\
+ "enc '%s' %s"\
+ % (m1_addr, m2_addr, ipsec_mode,
+ ciph_alg, ciph_key))
+ m1.run("ip xfrm state add "\
+ "src %s dst %s proto esp spi 2 mode %s "\
+ "enc '%s' %s"\
+ % (m2_addr, m1_addr, ipsec_mode,
+ ciph_alg, ciph_key))
+
+ m1.run("ip xfrm state add "\
+ "src %s dst %s proto ah spi 3 mode %s "\
+ "auth '%s' %s"
+ % (m1_addr, m2_addr, ipsec_mode,
+ hash_alg, hash_key))
+ m1.run("ip xfrm state add "\
+ "src %s dst %s proto ah spi 3 mode %s "\
+ "auth '%s' %s"
+ % (m2_addr, m1_addr, ipsec_mode,
+ hash_alg, hash_key))
+
+
+ # second machine
+ m2.run("ip xfrm policy add src %s dst %s dir out "\
+ "tmpl src %s dst %s proto comp spi 1 mode %s "\
+ "tmpl src %s dst %s proto esp spi 2 mode %s "\
+ "tmpl src %s dst %s proto ah spi 3 mode %s"
+ % (m2_addr, m1_addr,
+ m2_addr, m1_addr, ipsec_mode,
+ m2_addr, m1_addr, ipsec_mode,
+ m2_addr, m1_addr, ipsec_mode))
+ m2.run("ip xfrm policy add src %s dst %s dir in "\
+ "tmpl src %s dst %s proto comp spi 4 mode %s level use "\
+ "tmpl src %s dst %s proto esp spi 2 mode %s "\
+ "tmpl src %s dst %s proto ah spi 3 mode %s"
+ % (m1_addr, m2_addr,
+ m1_addr, m2_addr, ipsec_mode,
+ m1_addr, m2_addr, ipsec_mode,
+ m1_addr, m2_addr, ipsec_mode))
+
+ m2.run("ip xfrm state add "\
+ "src %s dst %s proto comp spi 4 mode %s "\
+ "comp deflate"\
+ % (m1_addr, m2_addr, ipsec_mode))
+ m2.run("ip xfrm state add "\
+ "src %s dst %s proto comp spi 1 mode %s "\
+ "comp deflate"\
+ % (m2_addr, m1_addr, ipsec_mode))
+
+ m2.run("ip xfrm state add "\
+ "src %s dst %s proto esp spi 2 mode %s "\
+ "enc '%s' %s"\
+ % (m1_addr, m2_addr, ipsec_mode,
+ ciph_alg, ciph_key))
+ m2.run("ip xfrm state add "\
+ "src %s dst %s proto esp spi 2 mode %s "\
+ "enc '%s' %s"\
+ % (m2_addr, m1_addr, ipsec_mode,
+ ciph_alg, ciph_key))
+
+ m2.run("ip xfrm state add "\
+ "src %s dst %s proto ah spi 3 mode %s "\
+ "auth '%s' %s"\
+ % (m1_addr, m2_addr, ipsec_mode,
+ hash_alg, hash_key))
+ m2.run("ip xfrm state add "\
+ "src %s dst %s proto ah spi 3 mode %s "\
+ "auth '%s' %s"\
+ % (m2_addr, m1_addr, ipsec_mode,
+ hash_alg, hash_key))
+
+
+for ciph_alg, ciph_key in ciphers.iteritems():
+ for hash_alg, hash_key in hashes.iteritems():
+ if ipv in [ 'ipv4', 'both']:
+ configure_ipsec(ciph_alg, ciph_key, hash_alg, hash_key, "ipv4")
+ # ------
+ # TESTS
+ # ------
+ dump = m1.run("tcpdump -i %s -nn -vv" % m1_if_name, bg=True)
+
+ # ping + PacketAssert
+ assert_mod = ctl.get_module("PacketAssert",
+ options={
+ "interface": m2_if_name,
+ "filter": "ah",
+ "grep_for": [ "AH\(spi=0x00000003",
+ "ESP\(spi=0x00000002" ],
+ "min": 10
+ })
+
+ assert_proc = m2.run(assert_mod, bg=True)
+
+ ping_mod = ctl.get_module("IcmpPing",
+ options={
+ "addr": m2_if_addr,
+ "count": 10,
+ "interval": 0.1})
+
+ ctl.wait(2)
+
+ m1.run(ping_mod)
+
+ ctl.wait(2)
+
+ assert_proc.intr()
+
+ dump.intr()
+
+ m1.run("ip -s xfrm pol")
+ m1.run("ip -s xfrm state")
+
+ # ping test with bigger size to check compression is used
+ pkt_capture = m2.run("tcpdump -i %s ah" % m2_if_name,
+ save_output=True,
+ bg=True)
+ ctl.wait(3)
+ ping_mod.update_options({ "size": int(mtu) - 28 })
+ ping_mod.update_options({ "count": 1 })
+ m1.run(ping_mod)
+ ctl.wait(3)
+
+ pkt_capture.intr()
+
+ stdout = pkt_capture.get_result()["res_data"]["stdout"]
+
+ small_ping_fail=0
+ re_length = ".*length ([0-9]*)"
+ m = re.match(re_length, stdout)
+ if m:
+ pkt_len = int(m.group(1))
+ if pkt_len > (int(mtu) - 28)/2:
+ # failed
+ small_ping_fail=1
+ else:
+ small_ping_fail=1
+
+ comp_mod = ctl.get_module("Custom")
+ if small_ping_fail != 0:
+ comp_mod.update_options({ "fail": "Check of compression for bigger packets failed" })
+ else:
+ comp_mod.update_options({ "passed": "Check of compression for bigger packets passed" })
+
+ m1.run(comp_mod)
+
+ # fragmentation of packets bigger than mtu
+ dump = m1.run("tcpdump -i %s -nn -vv" % m1_if_name, bg=True)
+
+ ping_mod.update_options({ "size": 2*int(mtu) })
+ ping_mod.update_options({ "count": 10 })
+
+ m1.run(ping_mod)
+ dump.intr()
+
+ # prepare PerfRepo result for tcp
+ result_tcp = perf_api.new_result("tcp_ipv4_id",
+ "tcp_ipv4_result",
+ hash_ignore=[
+ r'kernel_release',
+ r'redhat_release'])
+ result_tcp.add_tag(product_name)
+
+ if nperf_num_parallel > 1:
+ result_tcp.add_tag("multithreaded")
+ result_tcp.set_parameter('num_parallel', nperf_num_parallel)
+
+ result_tcp.set_parameter('cipher_alg', ciph_alg)
+ result_tcp.set_parameter('hash_alg', hash_alg)
+
+ baseline = perf_api.get_baseline_of_result(result_tcp)
+ baseline = perfrepo_baseline_to_dict(baseline)
+
+
+ tcp_res_data = netperf((m1, m1_if, 0, {"scope": 0}),
+ (m2, m2_if, 0, {"scope": 0}),
+ client_opts={"duration" : netperf_duration,
+ "testname" : "TCP_STREAM",
+ "confidence" : nperf_confidence,
+ "num_parallel" : nperf_num_parallel,
+ "cpu_util" : nperf_cpu_util,
+ "runs": nperf_max_runs,
+ "debug": nperf_debug,
+ "max_deviation": nperf_max_dev,
+ "msg_size" : nperf_msg_size,
+ "netperf_opts": nperf_opts,
+ "threshold": "%s Mbits/sec"
+ % thresholds[ciph_alg][0]},
+ baseline = baseline,
+ timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
+
+ netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
+ perf_api.save_result(result_tcp)
+
+ # prepare PerfRepo result for udp
+ result_udp = perf_api.new_result("udp_ipv4_id",
+ "udp_ipv4_result",
+ hash_ignore=[
+ r'kernel_release',
+ r'redhat_release'])
+ result_udp.add_tag(product_name)
+
+ if nperf_num_parallel > 1:
+ result_udp.add_tag("multithreaded")
+ result_udp.set_parameter('num_parallel', nperf_num_parallel)
+
+ result_udp.set_parameter('cipher_alg', ciph_alg)
+ result_udp.set_parameter('hash_alg', hash_alg)
+
+ baseline = perf_api.get_baseline_of_result(result_udp)
+ baseline = perfrepo_baseline_to_dict(baseline)
+
+ udp_res_data = netperf((m1, m1_if, 0, {"scope": 0}),
+ (m2, m2_if, 0, {"scope": 0}),
+ client_opts={"duration" : netperf_duration,
+ "testname" : "UDP_STREAM",
+ "confidence" : nperf_confidence,
+ "num_parallel" : nperf_num_parallel,
+ "cpu_util" : nperf_cpu_util,
+ "runs": nperf_max_runs,
+ "debug": nperf_debug,
+ "max_deviation": nperf_max_dev,
+ "msg_size" : nperf_msg_size,
+ "netperf_opts": nperf_opts,
+ "threshold": "%s Mbits/sec"
+ % thresholds[ciph_alg][1]},
+ baseline = baseline,
+ timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
+
+ netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
+ perf_api.save_result(result_udp)
+
+ if ipv in [ 'ipv6', 'both']:
+ configure_ipsec(ciph_alg, ciph_key, hash_alg, hash_key, "ipv6")
+ # ------
+ # TESTS
+ # ------
+ dump = m1.run("tcpdump -i %s -nn -vv" % m1_if_name, bg=True)
+
+ # ping + PacketAssert
+ assert_mod = ctl.get_module("PacketAssert",
+ options={
+ "interface": m2_if_name,
+ "filter": "ah",
+ "grep_for": [ "AH\(spi=0x00000003",
+ "ESP\(spi=0x00000002" ],
+ "min": 10
+ })
+
+ assert_proc = m2.run(assert_mod, bg=True)
+
+ ping_mod = ctl.get_module("Icmp6Ping",
+ options={
+ "addr": m2_if_addr6,
+ "count": 10,
+ "interval": 0.1})
+
+ ctl.wait(2)
+
+ m1.run(ping_mod)
+
+ ctl.wait(2)
+
+ assert_proc.intr()
+
+ dump.intr()
+
+ m1.run("ip -s xfrm pol")
+ m1.run("ip -s xfrm state")
+
+ # ping test with bigger size to check compression is used
+ pkt_capture = m2.run("tcpdump -i %s ah" % m2_if_name,
+ save_output=True,
+ bg=True)
+ ctl.wait(3)
+ ping_mod.update_options({ "size": int(mtu) - 28 })
+ ping_mod.update_options({ "count": 1 })
+ m1.run(ping_mod)
+ ctl.wait(3)
+
+ pkt_capture.intr()
+
+ stdout = pkt_capture.get_result()["res_data"]["stdout"]
+
+ small_ping_fail=0
+ re_length = ".*length ([0-9]*)"
+ m = re.match(re_length, stdout)
+ if m:
+ pkt_len = int(m.group(1))
+ if pkt_len > (int(mtu) - 28)/2:
+ # failed
+ small_ping_fail=1
+ else:
+ small_ping_fail=1
+
+ comp_mod = ctl.get_module("Custom")
+ if small_ping_fail != 0:
+ comp_mod.update_options({ "fail": "Check of compression for bigger packets failed" })
+ else:
+ comp_mod.update_options({ "passed": "Check of compression for bigger packets passed" })
+
+ m1.run(comp_mod)
+
+ # fragmentation of packets bigger than mtu
+ dump = m1.run("tcpdump -i %s -nn -vv" % m1_if_name, bg=True)
+
+ ping_mod.update_options({ "size": 2*int(mtu) })
+ ping_mod.update_options({ "count": 10 })
+
+ m1.run(ping_mod)
+ dump.intr()
+
+ # prepare PerfRepo result for tcp
+ result_tcp = perf_api.new_result("tcp_ipv6_id",
+ "tcp_ipv6_result",
+ hash_ignore=[
+ r'kernel_release',
+ r'redhat_release'])
+ result_tcp.add_tag(product_name)
+
+ if nperf_num_parallel > 1:
+ result_tcp.add_tag("multithreaded")
+ result_tcp.set_parameter('num_parallel', nperf_num_parallel)
+
+ result_tcp.set_parameter('cipher_alg', ciph_alg)
+ result_tcp.set_parameter('hash_alg', hash_alg)
+
+ baseline = perf_api.get_baseline_of_result(result_tcp)
+ baseline = perfrepo_baseline_to_dict(baseline)
+
+
+ tcp_res_data = netperf((m1, m1_if, 1, {"scope": 0}),
+ (m2, m2_if, 1, {"scope": 0}),
+ client_opts={"duration" : netperf_duration,
+ "testname" : "TCP_STREAM",
+ "confidence" : nperf_confidence,
+ "num_parallel" : nperf_num_parallel,
+ "cpu_util" : nperf_cpu_util,
+ "runs": nperf_max_runs,
+ "debug": nperf_debug,
+ "max_deviation": nperf_max_dev,
+ "msg_size" : nperf_msg_size,
+ "threshold": "%s Mbits/sec"
+ % thresholds[ciph_alg][0],
+ "netperf_opts" : nperf_opts + "-6"},
+ baseline = baseline,
+ timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
+
+ netperf_result_template(result_tcp, tcp_res_data)
+ result_tcp.set_comment(pr_comment)
+ perf_api.save_result(result_tcp)
+
+ # prepare PerfRepo result for udp
+ result_udp = perf_api.new_result("udp_ipv6_id",
+ "udp_ipv6_result",
+ hash_ignore=[
+ r'kernel_release',
+ r'redhat_release'])
+ result_udp.add_tag(product_name)
+
+ if nperf_num_parallel > 1:
+ result_udp.add_tag("multithreaded")
+ result_udp.set_parameter('num_parallel', nperf_num_parallel)
+
+ result_udp.set_parameter('cipher_alg', ciph_alg)
+ result_udp.set_parameter('hash_alg', hash_alg)
+
+ baseline = perf_api.get_baseline_of_result(result_udp)
+ baseline = perfrepo_baseline_to_dict(baseline)
+
+ udp_res_data = netperf((m1, m1_if, 1, {"scope": 0}),
+ (m2, m2_if, 1, {"scope": 0}),
+ client_opts={"duration" : netperf_duration,
+ "testname" : "UDP_STREAM",
+ "confidence" : nperf_confidence,
+ "num_parallel" : nperf_num_parallel,
+ "cpu_util" : nperf_cpu_util,
+ "runs": nperf_max_runs,
+ "debug": nperf_debug,
+ "max_deviation": nperf_max_dev,
+ "msg_size" : nperf_msg_size,
+ "threshold": "%s Mbits/sec"
+ % thresholds[ciph_alg][1],
+ "netperf_opts" : nperf_opts + "-6"},
+ baseline = baseline,
+ timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
+
+ netperf_result_template(result_udp, udp_res_data)
+ result_udp.set_comment(pr_comment)
+ perf_api.save_result(result_udp)
+
+m1.run("ip xfrm policy flush")
+m1.run("ip xfrm state flush")
+m2.run("ip xfrm policy flush")
+m2.run("ip xfrm state flush")
+
+if nperf_cpupin:
+ m1.run("service irqbalance start")
+ m2.run("service irqbalance start")
diff --git a/recipes/regression_tests/phase3/ipsec_esp_ah_comp.xml b/recipes/regression_tests/phase3/ipsec_esp_ah_comp.xml
new file mode 100644
index 0000000..c7c8e68
--- /dev/null
+++ b/recipes/regression_tests/phase3/ipsec_esp_ah_comp.xml
@@ -0,0 +1,50 @@
+<lnstrecipe>
+ <define>
+ <alias name="ipv" value="both" />
+ <alias name="mtu" value="1450" />
+ <alias name="netperf_duration" value="60" />
+ <alias name="nperf_reserve" value="20" />
+ <alias name="nperf_confidence" value="99,5" />
+ <alias name="nperf_max_runs" value="5"/>
+ <alias name="nperf_num_parallel" value="1"/>
+ <alias name="nperf_debug" value="0"/>
+ <alias name="nperf_max_dev" value="20%"/>
+ <alias name="mapping_file" value="ipsec_transport_esp_ah_comp.mapping"/>
+ <alias name="net_1" value="192.168.99"/>
+ <alias name="net6_1" value="fc00:1::"/>
+ <alias name="net_2" value="192.168.100"/>
+ <alias name="net6_2" value="fc00:2::"/>
+ <alias name="driver" value=""/>
+ </define>
+ <network>
+ <host id="machine1">
+ <interfaces>
+ <eth id="eth" label="localnet">
+ <params>
+ <param name="driver" value="{$driver}"/>
+ </params>
+ <addresses>
+ <address value="{$net_1}.1/24"/>
+ <address value="{$net6_1}1/64"/>
+ </addresses>
+ </eth>
+ </interfaces>
+ </host>
+ <host id="machine2">
+ <interfaces>
+ <eth id="eth" label="localnet">
+ <params>
+ <param name="driver" value="{$driver}"/>
+ </params>
+ <addresses>
+ <address value="{$net_2}.1/24"/>
+ <address value="{$net6_2}1/64"/>
+ </addresses>
+ </eth>
+ </interfaces>
+ </host>
+ </network>
+
+ <task python="ipsec_esp_ah_comp.py"/>
+
+</lnstrecipe>
--
2.5.5
[PATCH] regression_tests: add driver alias to simple macsec
by Kamil Jerabek
This commit adds a driver alias to our regression_tests/phase3 simple_macsec
test.
Signed-off-by: Kamil Jerabek <kjerabek(a)redhat.com>
---
recipes/regression_tests/phase3/simple_macsec.xml | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/recipes/regression_tests/phase3/simple_macsec.xml b/recipes/regression_tests/phase3/simple_macsec.xml
index 83fbccc..5075094 100644
--- a/recipes/regression_tests/phase3/simple_macsec.xml
+++ b/recipes/regression_tests/phase3/simple_macsec.xml
@@ -11,6 +11,7 @@
<alias name="nperf_max_dev" value="20%"/>
<alias name="mapping_file" value="simple_macsec.mapping" />
<alias name="net" value="192.168.0" />
+ <alias name="driver" value="ixgbe"/>
</define>
<network>
<host id="machine1">
@@ -19,6 +20,9 @@
<addresses>
<address>{$net}.1/24</address>
</addresses>
+ <params>
+ <param name="driver" value="{$driver}" />
+ </params>
</eth>
</interfaces>
</host>
@@ -28,6 +32,9 @@
<addresses>
<address>{$net}.2/24</address>
</addresses>
+ <params>
+ <param name="driver" value="{$driver}" />
+ </params>
</eth>
</interfaces>
</host>
--
2.5.5