[Beaker-devel] New Beaker API quirks

Don Zickus dzickus at redhat.com
Tue Apr 2 13:53:56 UTC 2013


On Tue, Apr 02, 2013 at 06:07:15PM +1000, Dan Callaghan wrote:
> Thanks for your feedback, Don! I'm glad to see we already have someone 
> helping us to find all the holes in the new API :-)
> 
> Excerpts from Don Zickus's message of 2013-03-29 02:02:47 +1000:
> > Hi Dan,
> > 
> > I have been enjoying the ease to code against the new Beaker API.  The
> > GET, POST, PUT stuff keeps things really simple.  But during my testing I
> > noticed a few quirks that Bill P. suggested I email you about for your
> > opinion.
> > 
> > - When I POST/PUT to a url, if the url has an extra '/' in the path, like
> > 
> >   /recipes/(recipe_id)/tasks/(task_id)//status
> > 
> > the command gets rejected.  Not sure if that is an HTTP thing or a server
> > thing.  Not too much of a problem, it just forces me to clean up my code
> > (which is good!).  Just thought I would mention it.
> 
> Hmm I guess Flask isn't normalizing slashes for us, which is fair 
> enough. In HTTP those are two different URLs (//status and /status) even 
> though most web servers would then map them to the same filesystem path. 
> We could make beaker-proxy do the normalization pretty easily.

Up to you.  Forced me to clean up my code. :-)
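
Just for reference, this is roughly how I collapse the extra slashes on my
side now (a quick sketch, not Beaker code; clean_url() is just my own helper):

    import re
    import urlparse

    def clean_url(url):
        # Collapse runs of '/' in the path portion only; the scheme,
        # host and query string are left untouched.
        parts = urlparse.urlsplit(url)
        path = re.sub(r'/{2,}', '/', parts.path)
        return urlparse.urlunsplit((parts.scheme, parts.netloc, path,
                                    parts.query, parts.fragment))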

> 
> > - when I PUT a zero-length file to the server, it fails with a 400 code.
> >   Now I should be filtering those out (and will from now on).  But it did
> > take me an hour to realize that, out of my list of files, the rejected ones
> > were the zero-length ones.  Again, this could be on purpose, not sure.
> 
> This definitely seems like a bug, I will investigate more this week. 
> There is no reason why we should reject zero-length log files.
> 
> Were you passing a Content-Range header? Do you happen to know what was 
> in the response body from beaker-proxy (it should have an error 
> message)?

Originally I wasn't, but I have since implemented uploading in chunks, which
does use it.  It doesn't matter either way: putting zeros for Content-Range and
Content-Length produces the same result.  So, using urllib2.urlopen(), I have
the following exception handler:

    except urllib2.HTTPError as e:
        if e.code == 400 and use_put:
            log.error("Error(%s) failed to upload file %s" % (e.code, dest))
            print "e is %s" % e

...
e is HTTP Error 400: BAD REQUEST
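
In case it helps, the chunked upload itself boils down to the following (a
simplified sketch of what I do; the chunk size, URL layout and helper name are
mine, and I am assuming the 'bytes start-end/total' form of Content-Range is
what beaker-proxy expects):

    import os
    import urllib2

    CHUNK_SIZE = 256 * 1024  # bytes sent per PUT request

    def put_file_in_chunks(url, path):
        # Upload a local file piece by piece; note that a zero-length
        # file never enters the loop, which is how I ended up skipping
        # the files that were getting the 400 responses.
        size = os.path.getsize(path)
        offset = 0
        with open(path, 'rb') as f:
            while offset < size:
                chunk = f.read(CHUNK_SIZE)
                req = urllib2.Request(url, data=chunk)
                req.get_method = lambda: 'PUT'
                req.add_header('Content-Range', 'bytes %d-%d/%d'
                        % (offset, offset + len(chunk) - 1, size))
                urllib2.urlopen(req)
                offset += len(chunk)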
 
> 
> > - Because the server does not have a mechanism to 'push' and abort the
> >   client (other than power off), the client could still be running and
> > trying to push results back to the server.
> > 
> >   a POST of a task_result gave me an 'Internal Server Error 500'.  Will
> > that be the expected response?  I can trap for that and abort the client.
> > 
> >   PUT of logs is still allowed even though the task aborted on the server
> > side.  Bill thought that should be fixed.  Though I am not sure what the
> > expectations are.
> 
> Hmmm I think beah behaves the same way here. That is, it keeps on 
> running happily even after the recipe has been cancelled, because there 
> is nothing to stop it (until Beaker eventually pulls the plug). It's not 
> necessarily a bad thing. But it might make sense to have some way for 
> Beaker to tell the harness to stop running. I will think about it.

I was thinking of a couple of things.  Currently, I can maliciously upload
logs to a cancelled job and suck up disk space.  Probably bad.  The other is
that my code could easily trap a specific exception which would indicate the
recipe is done, and then self-abort.  Right now I trap the Internal Server
Error 500 (which probably isn't the best).  Up to you.
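
For what it's worth, something along these lines is all I would need on my end
(only a sketch; the 409 is a guess at what you might settle on, and the
function and exception names are my own):

    import urllib2

    class RecipeFinished(Exception):
        """Raised when the server says the recipe is already over."""

    def post_result(url, data):
        req = urllib2.Request(url, data=data)
        try:
            return urllib2.urlopen(req)
        except urllib2.HTTPError as e:
            # Today this comes back as a 500; a 409 would be a nicer,
            # more specific thing to trap on before self-aborting.
            if e.code in (409, 500):
                raise RecipeFinished("recipe finished on server: %s" % e)
            raise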

> 
> If you got a 500 response that is definitely a bug regardless. Do you 
> have a stack trace from /var/log/beaker/server-errors.log for it? Or if 
> it was on beaker-devel, can you tell me roughly the timestamp and I will 
> look in the logs?

I was trying to update the task result for recipe 51, which was 'Aborted' (reservesys timed out):

2013-04-02 09:29:53,297 bkr.server.xmlrpccontroller ERROR Error handling
XML-RPC method
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/bkr/server/xmlrpccontroller.py", line 54, in RPC2
    response = self.process_rpc(method,params)
  File "/usr/lib/python2.6/site-packages/bkr/server/xmlrpccontroller.py", line 43, in process_rpc
    response = obj(*params)
  File "<string>", line 3, in result
  File "/usr/lib/python2.6/site-packages/turbogears/identity/conditions.py", line 249, in require
    return fn(self, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/bkr/server/recipetasks.py", line 171, in result
    return getattr(task,result_type)(**kwargs)
  File "/usr/lib/python2.6/site-packages/bkr/server/model.py", line 5868, in warn
    return self._result(TaskResult.warn, path, score, summary)
  File "/usr/lib/python2.6/site-packages/bkr/server/model.py", line 5884, in _result
    raise BX(_('No watchdog exists for recipe %s' % self.recipe.id))
BX: 'No watchdog exists for recipe 51'
 
> 
> Actually I'm guessing it was because you were trying to record a result 
> against a finished task. I never considered that possibility originally. 
> It should probably give a 409 response instead. I will think about it 
> some more.

Yeah, I was goofing around.  The new API makes it really easy to goof
around. :-)  Basically I tell my harness which recipe to download and it
will parse and re-run the recipe locally while uploading all new results.
It should really do this offline, but Bill and I were wondering what would
happen if we actually tried to push the results to the Beaker server (trivial
to do with my code and the new API).
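
The download-and-replay part boils down to something like this (rough outline
only; the exact URL and XML attributes are my approximation of what I parse,
and fetch_recipe_tasks() is just my own helper):

    import urllib2
    import xml.etree.ElementTree as ET

    def fetch_recipe_tasks(base_url, recipe_id):
        # Grab the recipe XML and return (task_id, task_name) pairs so the
        # harness can run each task locally and POST results back against
        # the matching /recipes/(recipe_id)/tasks/(task_id)/... URLs.
        url = '%s/recipes/%s/' % (base_url.rstrip('/'), recipe_id)
        recipe_xml = urllib2.urlopen(url).read()
        root = ET.fromstring(recipe_xml)
        return [(t.get('id'), t.get('name'))
                for t in root.findall('.//task')]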

Thanks!

Cheers,
Don

