Handling Multi-host recipes
by Bill Peck
Just sending this to the list to get feedback. If you remember from
previous emails, a recipe = a single system, and a RecipeSet contains
multiple recipes that need to be scheduled at the same time. Also, we
only want multi-host jobs to be scheduled on the same lab controller.
A couple of problems present themselves right away.
1) If only N-1 of the recipes can be scheduled on a given lab
controller, then we don't want any of the possible systems on that
controller, since we could never fulfill the whole recipe set there.
2) If RecipeA can use HOSTA or HOSTB but RecipeB can only use HOSTA,
then we need to remove HOSTA from RecipeA's possible choices.
3) If the number of possible systems in each recipe is larger than the
number of recipes in the RecipeSet, then there are enough choices to
prevent a deadlock. In other words, we can leave the choices alone.
All of the above has to be done for every lab controller the recipeSet
matches. :-)
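As a toy illustration of problem 2 (HOSTA and HOSTB are just stand-in
names, not real inventory; this is the same set logic the attached test
script below exercises):

recipe_a = set(['HOSTA', 'HOSTB'])   # RecipeA can use either host
recipe_b = set(['HOSTA'])            # RecipeB can only use HOSTA
if recipe_a.difference(recipe_b):
    # RecipeA has choices RecipeB lacks, so reserve the shared host for
    # the more constrained RecipeB.
    recipe_a = recipe_a.difference(recipe_a.intersection(recipe_b))
print recipe_a   # set(['HOSTB'])
print recipe_b   # set(['HOSTA'])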
The following code seems to work. I have tested quite a few
possibilities and everything works out. It does seem to remove more
choices than needed in some situations, but I can't see an easier way.
The good news is this is still pretty damn fast, and we only need to
do it once (ok, if the inventory server data changes we need to do it
again), but not every 20 seconds! ;-)
I'm also attaching a small test script that I used to develop the ideas
below.
def processed_recipesets(*args):
    recipesets = RecipeSet.query()\
                          .join(['recipes','status'])\
                          .filter(Recipe.status==TestStatus.by_name(u'Processed'))
    for recipeset in recipesets:
        bad_l_controllers = set()
        # We only need to do this processing on multi-host recipes
        if len(recipeset.recipes) == 1:
            print "recipe ID %s moved from Processed to Queued" % \
                  recipeset.recipes[0].id
            recipeset.recipes[0].status = TestStatus.by_name(u'Queued')
            continue
        # Find all the lab controllers that this recipeset may run on.
        rsl_controllers = set(LabController.query()\
                                           .join(['systems',
                                                  'queued_recipes',
                                                  'recipeset'])\
                                           .filter(RecipeSet.id==recipeset.id).all())
        # Any lab controller that is not associated with all recipes in
        # the recipe set must have its systems removed from those recipes.
        # For multi-host, all recipes must be schedulable on one lab
        # controller.
        for recipe in recipeset.recipes:
            rl_controllers = set(LabController.query()\
                                              .join(['systems',
                                                     'queued_recipes'])\
                                              .filter(Recipe.id==recipe.id).all())
            # Controllers in the recipeset's pool that this recipe cannot
            # use are bad for the whole recipeset.
            bad_l_controllers = \
                bad_l_controllers.union(rsl_controllers.difference(rl_controllers))
        for l_controller in rsl_controllers:
            enough_systems = False
            for recipe in recipeset.recipes:
                systems = recipe.dyn_systems.filter(
                              System.lab_controller==l_controller
                          ).all()
                if len(systems) < len(recipeset.recipes):
                    break
            else:
                # There are enough choices; we don't need to worry about
                # deadlocks.
                enough_systems = True
            if not enough_systems:
                # Eliminate bad choices.
                for recipe in recipeset.recipes:
                    for tmprecipe in recipeset.recipes:
                        systemsa = set(recipe.dyn_systems.filter(
                                           System.lab_controller==l_controller
                                       ).all())
                        systemsb = set(tmprecipe.dyn_systems.filter(
                                           System.lab_controller==l_controller
                                       ).all())
                        if systemsa.difference(systemsb):
                            for rem_system in systemsa.intersection(systemsb):
                                print "Removing %s from recipe id %s" % \
                                      (rem_system, recipe.id)
                                recipe.systems.remove(rem_system)
                for recipe in recipeset.recipes:
                    count = 0
                    systems = recipe.dyn_systems.filter(
                                  System.lab_controller==l_controller
                              ).all()
                    for tmprecipe in recipeset.recipes:
                        tmpsystems = tmprecipe.dyn_systems.filter(
                                         System.lab_controller==l_controller
                                     ).all()
                        if recipe != tmprecipe and \
                           systems == tmpsystems:
                            count += 1
                    if len(systems) <= count:
                        # Remove all systems from this lc on this rs.
                        bad_l_controllers = \
                            bad_l_controllers.union([l_controller])
        # Remove systems that are on bad lab controllers.
        # This means one of the recipes can be fulfilled on a lab controller
        # but not the rest of the recipes in the recipeSet.
        # This could very well remove ALL systems from all recipes in this
        # recipeSet. If that happens then the recipeSet cannot be scheduled
        # and will be aborted by the abort process.
        for recipe in recipeset.recipes:
            for l_controller in bad_l_controllers:
                systems = recipe.dyn_systems.filter(
                              System.lab_controller==l_controller
                          ).all()
                for system in systems:
                    print "Removing %s from recipe id %s" % (system,
                                                             recipe.id)
                    recipe.systems.remove(system)
            if recipe.systems:
                # Set status to Queued
                print "recipe ID %s moved from Processed to Queued" % \
                      recipe.id
                recipe.status = TestStatus.by_name(u'Queued')
            else:
                # Set status to Aborted
                print "recipe ID %s moved from Processed to Aborted" % \
                      recipe.id
                recipe.recipeset.abort('Recipe ID %s does not match any '
                                       'systems' % recipe.id)
        session.flush()
#!/usr/bin/python
recipe1 = set(['a','b'])
recipe2 = set(['a','b','c','f'])
recipe3 = set(['a','b','c'])
recipe4 = set(['a','b'])
recipes = []
recipes.append(recipe1)
recipes.append(recipe2)
recipes.append(recipe3)
#recipes.append(recipe4)
for recipe in recipes:
    print recipe
print "eliminate bad choices"
for i, recipe in enumerate(recipes):
    for tmprecipe in recipes:
        if len(tmprecipe) <= len(recipes):
            if recipes[i].difference(tmprecipe):
                print "Difference", recipes[i].difference(tmprecipe)
                print "Intersection", recipes[i].intersection(tmprecipe)
                for j in recipes[i].intersection(tmprecipe):
                    recipes[i].remove(j)
                #recipes[i] = recipes[i].difference(tmprecipe)
for recipe in recipes:
    print recipe
for i, recipe in enumerate(recipes):
    count = 0
    for j, tmprecipe in enumerate(recipes):
        if i != j and recipe == tmprecipe:
            count += 1
    if len(recipe) <= count:
        print "get rid of recipe", recipe
Logan - subtree removal suggestion
by Marian Csontos
With all/most of Logan now submitted to the master branch under
Server/beaker/server, is there any purpose/positive value in keeping
the Logan subtree?
Are you aware of other good candidates to be git-rm'ed?
-- Marian
FUDcon berlin beaker/beakerlib session
by Petr Muller
Hi,
I'm going to have a small session about beaker/beakerlib, possibly with
a flavor of OS testing, at FUDCon Berlin this weekend. I basically want
to speak about the Beaker project, its intentions, parts, and practical
stuff, and about how one can write tests using beakerlib. My intention
is to present the Fedora contributors with the options they can use to
utilize automated QA for a better Fedora.
So I'm asking - is there something any of you want me to emphasize, or
not speak about, or do you have some other suggestions?
Petr
Merging Logan into Server
by Bill Peck
I've been working on merging Logan's tables into the Server piece of
Beaker. I haven't pushed my changes up yet, but I will soon. Thought
I'd share a little of how things are going.
With everything in one database, the cross-linking becomes very
powerful. I imported two tests from the existing RHTS (no modifications)
and then ran a few queries on the distros I have imported.
The two tests are the generic /distribution/install test and the
/distribution/virt/start test. The install test (hmm.. maybe I should
rename tests to tasks now, as DMalcolm suggested) should run for every
distro. The virt/start test only runs on i386, x86_64 and ia64.
So if we query a distro that happens to be s390x and ask it for a list
of tests that distro can use, we only get the install test.
>>> print Distro.query()[1].arch, Distro.query()[1].tests().all()
s390x [Test(repo=None, name=u'/distribution/install', license=u'GPL',
update_date=datetime.datetime(2009, 6, 18, 14, 2, 25, 454665), nda=None,
description=u'Reports back on the Installation that was done',
owner_id=None, creation_date=datetime.datetime(2009, 6, 18, 14, 2, 25,
79307), version=u'1.8-46', valid=None, avg_time=1200,
path=u'/mnt/tests/distribution/install', destructive=None,
rpm=u'rh-tests-distribution-install-1.8-46.noarch.rpm', id=1)]
Here we have an x86_64 distro and it gets both tests.
>>> print Distro.query()[2].arch, Distro.query()[2].tests().all()
x86_64 [Test(repo=None, name=u'/distribution/install', license=u'GPL',
update_date=datetime.datetime(2009, 6, 18, 14, 2, 25, 454665), nda=None,
description=u'Reports back on the Installation that was done',
owner_id=None, creation_date=datetime.datetime(2009, 6, 18, 14, 2, 25,
79307), version=u'1.8-46', valid=None, avg_time=1200,
path=u'/mnt/tests/distribution/install', destructive=None,
rpm=u'rh-tests-distribution-install-1.8-46.noarch.rpm', id=1),
Test(repo=None, name=u'/distribution/virt/start', license=u'GPLv2',
update_date=datetime.datetime(2009, 6, 18, 14, 11, 17, 735755),
nda=None, description=u'Start virtual machines', owner_id=None,
creation_date=datetime.datetime(2009, 6, 18, 14, 11, 17, 611317),
version=u'2.0-14', valid=None, avg_time=7200,
path=u'/mnt/tests/distribution/virt/start', destructive=None,
rpm=u'rh-tests-distribution-virt-start-2.0-14.noarch.rpm', id=2)]
No rocket science involved here. Just a much nicer system than the old
one. :-) And tests can be excluded not just on arch and family like in
the old system; we can exclude tests on minor updates now as well.
Distro's tests() method looks like this:
def tests(self):
    """
    List of tests that support this distro
    """
    tests = session.query(Test)
    return tests.filter(
              not_(or_(Test.id.in_(select([test_table.c.id]).
                           where(test_table.c.id==test_exclude_table.c.test_id).
                           where(test_exclude_table.c.arch_id==arch_table.c.id).
                           where(arch_table.c.id==self.arch_id)
                       ),
                       Test.id.in_(select([test_table.c.id]).
                           where(test_table.c.id==test_exclude_table.c.test_id).
                           where(test_exclude_table.c.osmajor_id==osmajor_table.c.id).
                           where(osmajor_table.c.id==self.osversion.osmajor.id)
                       ),
                       Test.id.in_(select([test_table.c.id]).
                           where(test_table.c.id==test_exclude_table.c.test_id).
                           where(test_exclude_table.c.osversion_id==osversion_table.c.id).
                           where(osversion_table.c.id==self.osversion.id)
                       ),
                  )
              )
           )
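For context, the query above implies an exclusion table roughly like
the following. This is only a sketch inferred from the columns the
query touches; the column types, foreign-key targets, and metadata
object are assumptions on my part, not the actual Beaker schema:

# Hypothetical reconstruction of test_exclude_table, based only on the
# columns referenced in tests() above.
test_exclude_table = Table('test_exclude', metadata,
    Column('id', Integer, primary_key=True),
    Column('test_id', Integer, ForeignKey('test.id')),
    # Each row excludes a test on one arch, OS major, or OS version.
    Column('arch_id', Integer, ForeignKey('arch.id'), nullable=True),
    Column('osmajor_id', Integer, ForeignKey('osmajor.id'), nullable=True),
    Column('osversion_id', Integer, ForeignKey('osversion.id'), nullable=True),
)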
I also imported an XML job, which looked like this before import:
<job>
 <workflow>generic_xml_workflow</workflow>
 <submitter>bpeck(a)redhat.com</submitter>
 <whiteboard>Test abort recipe</whiteboard>
 <recipeSet>
  <recipe testrepo='development' whiteboard='kernel'>
   <distroRequires>
    <distro_arch op='=' value='i386'/>
    <distro_name op='=' value='RHEL5.3-Server-20090106.0'/>
   </distroRequires>
   <hostRequires>
    <memory op='>=' value='1024'/>
   </hostRequires>
   <test role='STANDALONE' name='/distribution/install'/>
   <test role='STANDALONE' name='/distribution/kernelinstall'>
    <params>
     <param name='KERNELARGNAME' value='kernel'/>
     <param name='KERNELARGVARIANT' value='up'/>
     <param name='KERNELARGVERSION' value='2.6.18-153.el5testabort'/>
    </params>
   </test>
  </recipe>
 </recipeSet>
</job>
Then after importing, the system sees it this way.
>>> recipe=Recipe.query()[3]
>>> print recipe.to_xml().toprettyxml()
<recipe arch="i386" distro="RHEL5.3-Server-20090106.0"
family="RedHatEnterpriseLinuxServer5" id="4" job_id="3"
recipe_set_id="3" status="Queued" variant="None">
<distroRequires>
<distro_arch op="=" value="i386"/>
<distro_name op="=" value="RHEL5.3-Server-20090106.0"/>
</distroRequires>
<hostRequires>
<memory op=">=" value="1024"/>
<system_type value="Machine"/>
</hostRequires>
<test avg_time="1200" id="3" name="/distribution/install"
result="None" role="STANDALONE" status="Queued">
<rpm name="rh-tests-distribution-install-1.8-46.noarch.rpm"/>
</test>
<test avg_time="7200" id="4" name="/distribution/kernelinstall"
result="None" role="STANDALONE" status="Queued">
<params>
<param name="KERNELARGNAME" value="kernel"/>
<param name="KERNELARGVARIANT" value="up"/>
<param name="KERNELARGVERSION" value="2.6.18-153.el5testabort"/>
</params>
<rpm name="rh-tests-distribution-kernelinstall-1.0-57.noarch.rpm"/>
</test>
</recipe>
The scheduler processed the distroRequires tag and filled that info
into the recipe attributes. I don't have it processing the hostRequires
yet, but that's coming very shortly. Notice the test records have
additional information filled in as well, like avg_time (again, I
should take this opportunity and change that to max_time, /me makes a
note).
The good thing is you will always be able to take an existing job and
re-feed it back into the system, and the import will only pay attention
to the needed fields. We could have an option to not reprocess the
hostRequires and distroRequires if you really want the same run.
short STAF example
by Marian Csontos
Hi folks,
here is an example of STAF's logging capabilities - what it logs and
what the output looks like.
The nice features of STAF logging are:
* delegated logging - all logging on one machine is forwarded to
another machine - e.g. the test coordinator
* marshalling/unmarshalling of structured data - see below.
* introspection/querying - STAF can be used not only to log data, but
to query what logs are present, produce log summaries, etc.
For test writers, the first part may be of interest, as the plan is to
write an rhts2STAF adaptor for legacy RHTS tests, so beakerlib and STAF
can be used within one test without problems.
For Beaker developers, it's the Python code and results which are of
interest, as the planned STAF2Beaker adaptor will use code like this to
collect logs from the test machine and feed them to the scheduler's
results DB. See the end of the attached file.
BTW, the same would work with Perl or Ruby, as STAF provides bindings
for these languages, and a few more :-)
Cheers,
Marian
# -- FMT: Command line input with output inlined
# Do not get scared by all the UPPERCASE (it is not necessary, STAF is case insensitive); it is used here only to distinguish STAF keywords.
# Set STAF_QUIET_MODE to limit junk lines printed by STAF:
$ export STAF_QUIET_MODE=1
# Script run manually - allocate handle to be used throughout the script:
$ export STAF_STATIC_HANDLE=$(STAF local HANDLE CREATE HANDLE NAME "query-example.sh")
# Handle 913 allocated.
# Fill in logs with some messages:
$ STAF local LOG LOG GLOBAL LEVEL info LOGNAME temp MESSAGE info to global:temp
$ STAF local LOG LOG MACHINE LEVEL warning LOGNAME temp MESSAGE warning to machine:temp
$ STAF local LOG LOG HANDLE LEVEL error LOGNAME temp MESSAGE error to handle:temp
$ STAF local LOG LOG MACHINE LEVEL pass LOGNAME temp MESSAGE pass to machine:temp
$ STAF local LOG LOG GLOBAL LEVEL fail LOGNAME temp MESSAGE fail to global:temp
# Query what log files are present:
$ STAF local LOG LIST GLOBAL
Log Name Date-Time U-Size L-Size
-------- ----------------- ------ ------
temp 20090615-21:36:08 0 220
MyLog 20090611-01:18:11 0 824
$ STAF local LOG LIST MACHINES
beaker-lm1
$ STAF local LOG LIST MACHINE {STAF/Config/Machine}
Log Name Date-Time U-Size L-Size
-------- ----------------- ------ ------
temp 20090615-21:36:08 0 225
MyLog 20090529-15:46:15 0 103
$ STAF local LOG LIST MACHINE {STAF/Config/Machine} HANDLE 913
Log Name Date-Time U-Size L-Size
-------- ----------------- ------ ------
temp 20090615-21:36:07 0 111
# Query the logs just produced:
# ... GLOBAL
$ STAF local LOG QUERY GLOBAL LOGNAME temp
Date-Time Level Message
----------------- ----- -------------------
20090615-21:36:07 Info info to global:temp
20090615-21:36:08 Fail fail to global:temp
$ STAF local LOG QUERY GLOBAL LOGNAME temp LONG
R# Date-Time Machine H# Name User Endpoint Level Message
-- --------- ---------- --- -------- -------- ------------- ----- -------------
1 20090615- beaker-lm1 913 query-ex none://a local://local Info info to globa
21:36:07 ample.sh nonymous l:temp
2 20090615- beaker-lm1 913 query-ex none://a local://local Fail fail to globa
21:36:08 ample.sh nonymous l:temp
$ STAF local LOG QUERY GLOBAL LOGNAME temp STATS
Fatal : 0
Error : 0
Warning: 0
Info : 1
Trace : 0
Trace2 : 0
Trace3 : 0
Debug : 0
Debug2 : 0
Debug3 : 0
Start : 0
Stop : 0
Pass : 0
Fail : 1
Status : 0
User1 : 0
User2 : 0
User3 : 0
User4 : 0
User5 : 0
User6 : 0
User7 : 0
User8 : 0
# ... MACHINE and MACHINE/HANDLE:
$ STAF local LOG QUERY MACHINE {STAF/Config/Machine} LOGNAME temp LONG
R# Date-Time Machine H# Name User Endpoint Level Message
-- --------- ---------- --- -------- -------- ------------- ----- -------------
1 20090615- beaker-lm1 913 query-ex none://a local://local Warni warning to ma
21:36:07 ample.sh nonymous ng chine:temp
2 20090615- beaker-lm1 913 query-ex none://a local://local Pass pass to machi
21:36:08 ample.sh nonymous ne:temp
$ STAF local LOG QUERY MACHINE {STAF/Config/Machine} HANDLE 913 LOGNAME temp LONG
R# Date-Time Machine H# Name User Endpoint Level Message
-- --------- ---------- --- -------- -------- ------------- ----- -------------
1 20090615- beaker-lm1 913 query-ex none://a local://local Error error to hand
21:36:07 ample.sh nonymous le:temp
# The output might be fine for humans, but we would like something more machine-friendly:
$ export STAF_PRINT_MODE=Verbose
$ STAF local LOG LIST GLOBAL
[
{
Log Name : temp
Date-Time: 20090615-21:36:08
U-Size : 0
L-Size : 220
}
{
Log Name : MyLog
Date-Time: 20090611-01:18:11
U-Size : 0
L-Size : 824
}
]$ STAF local LOG QUERY GLOBAL LOGNAME temp
[
{
Date-Time: 20090615-21:36:07
Level : Info
Message : info to global:temp
}
{
Date-Time: 20090615-21:36:08
Level : Fail
Message : fail to global:temp
}
]
# It is much better, but still not the best one can get from STAF.
# Let's see what PySTAF has to offer:
# Following python file:
$ cat /tmp/tmp.KRgcJnmmSq
from PySTAF import *
import sys

try:
    handle = STAFHandle("MyTest")
except STAFException, e:
    print "ERROR: Error registering with STAF, RC: %d" % e.rc
    sys.exit(e.rc)

def submit(*args):
    print "calling handle.submit(*%s)" % repr(args)
    result = handle.submit(*args)
    if result.rc != STAFResult.Ok:
        print 'ERROR: ... failed with RC=%s Result=%s' % (repr(result.rc), repr(result.result))
        sys.exit(result.rc)
    return result

print "# Query existing logs:"
result = submit('local', 'LOG', 'LIST GLOBAL')
print '... Returned repr(result.resultObj)): %s' % repr(result.resultObj)
print '... or, "pretty-printed", repr(result.resultContext)): %s' % repr(result.resultContext)
result = submit('local', 'LOG', 'QUERY GLOBAL LOGNAME temp')
print '... Returned repr(result.resultObj)): %s' % repr(result.resultObj)
print '... or, "pretty-printed", repr(result.resultContext)): %s' % repr(result.resultContext)

print "\n# Marshalling structured data:"
testObj = { 'data_type':'testInfo', 'test':'/examples/STAF/query-example', 'subtest':'/python', 'test_type':'temp', 'outputs':['/tmp/staf-query-example.out', '/var/log/messages'] }
print "marshall(%s)" % repr(testObj)
print "... will produce: %s" % repr(marshall(testObj))

print "\n# and storing and retrieving such structured data (using STAF LOG service):"
result = submit('local', 'LOG', 'LOG GLOBAL LEVEL info LOGNAME py_temp MESSAGE %s' % marshall(testObj))
result = submit('local', 'LOG', 'QUERY GLOBAL LOGNAME py_temp')
print '... Retrieved following "raw" data from log: repr(result.result):\n%s' % repr(result.result)
mc = unmarshall(result.result)
print '... or "pretty-printed" as repr(unmarshall(result.result)) [ or simpler repr(result.resultContext) ]:\n%s' % mc
print '... or more Pythonic as repr(unmarshall(result.result).getRootObject()) [ or simpler repr(result.resultObj) ]:\n%s' % mc.getRootObject()
result = submit('local', 'LOG', 'QUERY GLOBAL LOGNAME py_temp LONG')
print '... produced:\n%s' % repr(result.resultObj)

print "\n# Python clean-up:"
result = submit('local', 'LOG', 'DELETE GLOBAL LOGNAME py_temp CONFIRM')
# will produce nicely formatted output:
$ python /tmp/tmp.KRgcJnmmSq
/usr/local/staf/lib/PySTAF.py:9: RuntimeWarning: Python C API version mismatch for module PYSTAF: This Python has API version 1013, module PYSTAF has version 1011.
import PYSTAF
# Query existing logs:
calling handle.submit(*('local', 'LOG', 'LIST GLOBAL'))
... Returned repr(result.resultObj)): [{'staf-map-class-name': 'STAF/Service/Log/ListLogs', 'size': '220', 'logName': 'temp', 'upperSize': '0', 'timestamp': '20090615-21:36:08'}, {'staf-map-class-name': 'STAF/Service/Log/ListLogs', 'size': '824', 'logName': 'MyLog', 'upperSize': '0', 'timestamp': '20090611-01:18:11'}]
... or, "pretty-printed", repr(result.resultContext)): [
{
Log Name : temp
Date-Time: 20090615-21:36:08
U-Size : 0
L-Size : 220
}
{
Log Name : MyLog
Date-Time: 20090611-01:18:11
U-Size : 0
L-Size : 824
}
]
calling handle.submit(*('local', 'LOG', 'QUERY GLOBAL LOGNAME temp'))
... Returned repr(result.resultObj)): [{'staf-map-class-name': 'STAF/Service/Log/LogRecord', 'message': 'info to global:temp', 'level': 'Info', 'timestamp': '20090615-21:36:07'}, {'staf-map-class-name': 'STAF/Service/Log/LogRecord', 'message': 'fail to global:temp', 'level': 'Fail', 'timestamp': '20090615-21:36:08'}]
... or, "pretty-printed", repr(result.resultContext)): [
{
Date-Time: 20090615-21:36:07
Level : Info
Message : info to global:temp
}
{
Date-Time: 20090615-21:36:08
Level : Fail
Message : fail to global:temp
}
]
# Marshalling structured data:
marshall({'test': '/examples/STAF/query-example', 'subtest': '/python', 'test_type': 'temp', 'data_type': 'testInfo', 'outputs': ['/tmp/staf-query-example.out', '/var/log/messages']})
... will produce: '@SDT/{:216::4:test@SDT/$S:28:/examples/STAF/query-example:7:subtest@SDT/$S:7:/python:9:test_type@SDT/$S:4:temp:9:data_type@SDT/$S:8:testInfo:7:outputs@SDT/[2:66:@SDT/$S:27:/tmp/staf-query-example.out@SDT/$S:17:/var/log/messages'
# and storing and retrieving such structured data (using STAF LOG service):
calling handle.submit(*('local', 'LOG', 'LOG GLOBAL LEVEL info LOGNAME py_temp MESSAGE @SDT/{:216::4:test@SDT/$S:28:/examples/STAF/query-example:7:subtest@SDT/$S:7:/python:9:test_type@SDT/$S:4:temp:9:data_type@SDT/$S:8:testInfo:7:outputs@SDT/[2:66:@SDT/$S:27:/tmp/staf-query-example.out@SDT/$S:17:/var/log/messages'))
calling handle.submit(*('local', 'LOG', 'QUERY GLOBAL LOGNAME py_temp'))
... Retrieved following "raw" data from log: repr(result.result):
'@SDT/*:675:@SDT/{:330::13:map-class-map@SDT/{:302::26:STAF/Service/Log/LogRecord@SDT/{:261::4:keys@SDT/[3:198:@SDT/{:60::12:display-name@SDT/$S:9:Date-Time:3:key@SDT/$S:9:timestamp@SDT/{:52::12:display-name@SDT/$S:5:Level:3:key@SDT/$S:5:level@SDT/{:56::12:display-name@SDT/$S:7:Message:3:key@SDT/$S:7:message:4:name@SDT/$S:26:STAF/Service/Log/LogRecord@SDT/[1:322:@SDT/%:311::26:STAF/Service/Log/LogRecord@SDT/$S:17:20090615-21:36:08@SDT/$S:4:Info@SDT/$S:227:@SDT/{:216::4:test@SDT/$S:28:/examples/STAF/query-example:7:subtest@SDT/$S:7:/python:9:test_type@SDT/$S:4:temp:9:data_type@SDT/$S:8:testInfo:7:outputs@SDT/[2:66:@SDT/$S:27:/tmp/staf-query-example.out@SDT/$S:17:/var/log/messages'
... or "pretty-printed" as repr(unmarshall(result.result)) [ or simpler repr(result.resultContext) ]:
[
{
Date-Time: 20090615-21:36:08
Level : Info
Message : {
test : /examples/STAF/query-example
subtest : /python
test_type: temp
data_type: testInfo
outputs : [
/tmp/staf-query-example.out
/var/log/messages
]
}
}
]
... or more Pythonic as repr(unmarshall(result.result).getRootObject()) [ or simpler repr(result.resultObj) ]:
[{'staf-map-class-name': 'STAF/Service/Log/LogRecord', 'message': {'test': '/examples/STAF/query-example', 'subtest': '/python', 'test_type': 'temp', 'data_type': 'testInfo', 'outputs': ['/tmp/staf-query-example.out', '/var/log/messages']}, 'level': 'Info', 'timestamp': '20090615-21:36:08'}]
calling handle.submit(*('local', 'LOG', 'QUERY GLOBAL LOGNAME py_temp LONG'))
... produced:
[{'staf-map-class-name': 'STAF/Service/Log/LogRecordLong', 'endpoint': 'local://local', 'handle': '914', 'level': 'Info', 'timestamp': '20090615-21:36:08', 'machine': 'beaker-lm1', 'user': 'none://anonymous', 'message': {'test': '/examples/STAF/query-example', 'subtest': '/python', 'test_type': 'temp', 'data_type': 'testInfo', 'outputs': ['/tmp/staf-query-example.out', '/var/log/messages']}, 'recordNumber': '1', 'handleName': 'MyTest'}]
# Python clean-up:
calling handle.submit(*('local', 'LOG', 'DELETE GLOBAL LOGNAME py_temp CONFIRM'))
# SH clean up:
$ STAF local LOG DELETE GLOBAL LOGNAME temp CONFIRM
$ STAF local LOG DELETE MACHINE {STAF/Config/Machine} LOGNAME temp CONFIRM
$ STAF local LOG DELETE MACHINE {STAF/Config/Machine} HANDLE 913 LOGNAME temp CONFIRM
$ STAF local HANDLE DELETE HANDLE 913
beakerlib: rhts-report-result or report_result?
by Will Woods
The docs for beakerlib's rlReport function say:
Report test result using RHTS C<report_result> function.
But the code says:
rhts-report-result "$testname" "$result" "$logfile" "$score"
The files were moved in a way that makes it hard for me to figure out
the history from git, so I'm asking you all - which is right?
-w
The Scheduler piece
by Bill Peck
Hello Everyone,
There is plenty more work to do on the inventory piece of Beaker, but I
wanted to share some of my thoughts on the scheduler to let everyone
know how I think things will work.
First, a rough idea of how all these pieces will fit together:
Inventory Piece
-------------------------
- Contains systems!
- Keeps track of what systems can install, or more precisely what things
a system can't install (ie: this box doesn't install Fedora8 but
anything newer is good)
- Keeps track of what options are needed for installing a given family
(Fedora 9 on this box needs noapic or nousbstorage)
- Keeps track of who has access to what. (This system is shared but
only with other people in the desktop group)
- Keeps track of who is currently using a system.
- Keeps a log of what actions were performed (power cycled,
provisioned with Fedora-rawhide-20090601)
- Keeps a log of what config values were changed on the system (memory
was increased from 4096 to 8000)
Scheduler Piece
-------------------------
- Contains jobs!
- Jobs are just a container of related recipeSets.
- RecipeSets hold a collection of recipes that need to run at the same
time (multi-host)
- Recipes have a collection of tests that you want to run, along with
the following:
  - distroRequires: Requirements that allow the scheduler to pick a
    distro (I want the latest i386 Fedora-rawhide)
  - hostRequires: Requirements that allow the scheduler to pick a
    system (This is first filtered by picking the distro: we only get
    systems that are capable of installing that distro. Second, we could
    have requirements like Processor = Intel, memory greater than 4 gig,
    etc.)
- Tests are the actual tests to run, plus the test parameters that you
passed in.
Lab Controller Piece
---------------------------
- This is used only by Inventory.
- Cobbler is the heart of this piece.
- Usually you have one of these per physical location (PXE tends to work
best locally)
- Cobbler imports the distros and we tell Inventory about what we have.
The Harness
---------------------------
- This piece is still under design.
- Whatever it is, it will be in charge of running the chosen tests on
the chosen system.
- The results should go back to the scheduler, but they may also go to
a Test Case Management system.
OK, now that the basics are out of the way, I'm going to brain-dump how
I think the scheduler should work. The current scheduler we use does
not scale well at all. The design I'm planning to implement will only
filter recipe requirements once, and then loop on queued recipes and
free systems. If no systems are free, then we won't even look at the
queued recipes. It should scale very well. There is quite a bit of
python/sqlalchemy code below. It's mostly pseudo code, but the ideas
should be sound.
States
--------
New <- Recipes start in this state
Queued <- After initial filtering happens (what systems match)
Assigned <- The recipe has been assigned a machine
Running <- The recipe is actually running on the system
Completed <- We're done.
InComplete <- We're done but didn't finish.
The big change, and the piece that will make this all work, is that I'm
going to use a cache table. It's really a mapping table between systems
and recipes:
queue_cache Table
system_id, recipe_id
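As a minimal sketch in SQLAlchemy (only the table name and the two
columns come from the design above; the column types, foreign-key
targets, and metadata object are my assumptions):

queue_cache_table = Table('queue_cache', metadata,
    # One row per (system, recipe) pair the recipe could run on.
    Column('system_id', Integer, ForeignKey('system.id'), primary_key=True),
    Column('recipe_id', Integer, ForeignKey('recipe.id'), primary_key=True),
)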
Recipes go in as New.
We'll have the following 4 threads on the server:
New_process:
    for recipe in Recipe.query().filter(Recipe.status==TestStatus.by_name(u'New')):
        # Figure out the distro requested.
        distro = Distro.process_requires(requires)
        if not distro:
            # No distro matches so abort the whole recipeSet.
            recipe.recipeset.action_abort('No distro matches for recipe %s'
                                          % recipe.id)
            break
        # Filter systems based on selected distro + recipe requirements.
        for system in distro.systems.process_requires(requires):
            # Don't add the same host twice to the same recipeSet. A
            # machine can't be in two places at once.
            for peer_recipe in recipe.recipeset.recipes:
                if system in peer_recipe.systems:
                    break
            else:
                # populate queue_cache table
                recipe.possible_systems.append(system)
        if recipe.possible_systems:
            # There should only ever be one thread/process moving recipes
            # from New to Queued.
            recipe.status = TestStatus.by_name(u'Queued')
        else:
            # Can't schedule, abort the whole recipeSet.
            recipe.recipeset.action_abort('No systems match for recipe %s'
                                          % recipe.id)
Queued_process:
    # Get a list of all recipes that are queued and have systems that
    # are free.
    for recipe in Recipe.query().join('status').join('possible_systems')\
                        .filter(and_(Recipe.status==TestStatus.by_name(u'Queued'),
                                     System.user==None)):
        # Pick the first free system
        system = recipe.free_systems.first()
        if system:
            # Atomic operation to put recipe in Assigned state
            if session.connection(Recipe).execute(recipe_table.update(
                   and_(recipe_table.c.id==recipe.id,
                        recipe_table.c.status_id==TestStatus.by_name(u'Queued').id)),
                   status_id=TestStatus.by_name(u'Assigned').id).rowcount == 1:
                # Atomic operation to reserve the system
                if session.connection(System).execute(system_table.update(
                       and_(system_table.c.id==system.id,
                            system_table.c.user_id==None)),
                       user_id=recipe.recipeset.job.owner.user_id).rowcount != 1:
                    # The system was taken from underneath us. Put recipe
                    # back into queued state and try again.
                    recipe.status = TestStatus.by_name(u'Queued')
            else:
                # Some other thread beat us. Skip this recipe for now.
                pass
    # Depending on scheduler load it should be safe to run multiple
    # Queued processes. Also, there are systems that we don't directly
    # control, for example systems at a remote location that can
    # pull jobs but not have any pushed onto them. These systems
    # could take a recipe and put it in the Running state. Not sure how
    # to deal with multi-host jobs at remote locations. May need to
    # enforce single recipes for remote execution.
Assigned_process:
    for recipeSet in RecipeSet.query().filter(Recipe.status==TestStatus.by_name(u'Assigned')):
        # All recipes in a recipeSet will be in Assigned state.
        # Figure out every recipe's role in the recipeSet.
        recipeSet.schedule()
        # Clear recipe.systems = []
Abort_process:
    # A recipe could initially have systems that match, but if those
    # systems are removed or marked broken then a recipe could stay in
    # the queued state forever. This will clear them out. We could add
    # some date criteria if we don't want recipes to abort right away.
    # Maybe a replacement system will come back online shortly? At least
    # the user will be able to see that no systems are listed and
    # understand why their recipe is not running.
    for recipe in Recipe.query().filter(Recipe.status==TestStatus.by_name(u'Queued')):
        if not recipe.systems:
            recipe.recipeset.action_abort('No systems match for recipe %s'
                                          % recipe.id)
Adding a new host:
    for recipe in Recipe.query().filter(Recipe.status==TestStatus.by_name(u'Queued')):
        # Do I match the added system?
        # Was I already added to the recipeSet for this recipe? If not, continue.
        recipe.possible_systems.append(system)
Removing a host:
    # Clear any potential recipes from this system.
    system.recipes = []
Changing any of the following would force a remove/add on a host:
Change the owner.
Loan the system to someone.
Change the groups.
Change the Key/Values or the details; basically any change that would
affect scheduling.
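In other words, such a change would just chain the two snippets above.
A hypothetical helper (the name reschedule_system and the elided match
checks are illustrative, not existing Beaker code) might look like:

def reschedule_system(system):
    # Removing a host: clear any potential recipes from this system.
    system.recipes = []
    # Adding a new host: re-match the system against all queued recipes.
    for recipe in Recipe.query().filter(Recipe.status==TestStatus.by_name(u'Queued')):
        # (requirement-match checks elided, as in "Adding a new host" above)
        recipe.possible_systems.append(system)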
Any thoughts? Suggestions? I'll be branching current Beaker into a 0.4
branch so that I can make bug-fix releases if need be.