----- Original Message -----
From: "Nir Soffer" <nsoffer(a)redhat.com>
To: "Francesco Romani" <fromani(a)redhat.com>
Cc: "vdsm-devel" <vdsm-devel(a)lists.fedorahosted.org>
Sent: Thursday, March 20, 2014 8:21:47 PM
Subject: Re: [vdsm] VDSM profiling results, round 1
Thanks for your feedback!
> - profile data is now collected using the profile decorator (see
> http://www.ovirt.org/Profiling_Vdsm) just around Vm._startUnderlyingVm.
Profiling one function is misleading without the big picture. You should
profile the whole application before drilling down and checking one function.
I agree. Do you recommend a particular tool or approach?
Could this one be a good fit?
http://code.google.com/p/yappi/
> The gathering profile was obtained using the yappi application-wide
> profiler.
I don't see such a profile - can you put it somewhere?
Yes.
http://paste.fedoraproject.org/87294/
This was done using the yappi tool mentioned above (it was actually one of the
earliest profiles taken).
Please note the calls are ordered by cumtime.
Will run the profiles again if this turns out to be a better way.
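If yappi turns out to be the right tool, this is roughly what I have in mind:
start the profiler once before the test, run the whole workload, then dump a
single profile covering all threads. A minimal sketch, assuming a reasonably
recent yappi; run_mass_vm_start() is a made-up placeholder for the test driver:

import yappi

yappi.start()                # profile every thread from here on

run_mass_vm_start()          # hypothetical: start the 32 VMs and wait for them

yappi.stop()
stats = yappi.get_func_stats()
stats.sort("ttot")           # order by total time, similar to cumtime
stats.print_all()
# stats.save("vdsm_full.pstat", type="pstat")   # optional pstats-compatible dump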
> I am still focusing just on the "monday morning" scenario (mass start of
> many VMs at the same time).
> Each run consisted of a parallel start of 32 VMs as described in the result
> data.
> VDSM was restarted between one run and the next.
> Engine was *NOT* restarted between runs.
> Individual profiles have been gathered after all the runs and the profile
> was extracted from their aggregation.
How did you aggregate the profiles?
I just stuffed them inside a pstats.Stats object like this:
s = pstats.Stats(*glob.glob('./vdsm_prof*'))
Is there a better way to do this?
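For reference, the same aggregation spelled out with an explicit add() loop
(just a sketch; the file names are whatever the per-run dumps are called):

import glob
import pstats

files = sorted(glob.glob('./vdsm_prof*'))
stats = pstats.Stats(files[0])
for path in files[1:]:
    stats.add(path)                  # accumulate the remaining dumps
stats.sort_stats('cumulative')
stats.print_stats(32)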
> baseline      : http://paste.fedoraproject.org/86318/
> namedtuple fix: http://paste.fedoraproject.org/86378/
> pickle fix    : http://paste.fedoraproject.org/86600/ (see below)
Please limit the number of calls in print_stats to 20 or 30. Showing everything
just makes it harder to compare, and leads to optimizing unimportant stuff. We
want to optimize only the hotspots.
Will stick to 32 entries in any future report.
> 1. internal concurrency (possible patch: http://gerrit.ovirt.org/#/c/25857/ - see below)
If you also add print_callers(30) to the profiles, we may learn more about
these wait calls.
Will do.
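Something along these lines, using the pstats API ('vdsm_full.pstat' is just a
placeholder name for the aggregated profile file):

import pstats

stats = pstats.Stats('vdsm_full.pstat')
stats.sort_stats('cumulative')
stats.print_stats(32)       # hotspots only
stats.print_callers(30)     # who calls the expensive functions, e.g. the
                            # {built-in method acquire} waits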
> 2. libvirt
This is the main issue - we should invest time in this.
Agreed.
This is one of the things I want to try; the alternative on the plate is lxml
(http://lxml.de/). cElementTree will be tried first.
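A minimal sketch of what I mean, sticking to the ElementTree API so that
swapping in lxml.etree later is a one-line change (the XML snippet is a
made-up placeholder, not real libvirt output):

import xml.etree.cElementTree as etree
# from lxml import etree             # drop-in alternative to benchmark against

dom_xml = "<domain><name>vm01</name><uuid>1234</uuid></domain>"
root = etree.fromstring(dom_xml)
print root.findtext('name'), root.findtext('uuid')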
> 4. namedtuple (patch: http://gerrit.ovirt.org/#/c/25678/ - fixed, merged)
> 5. pickling (patch: http://gerrit.ovirt.org/#/c/25860/ - see below)
These are much more expensive than pickle:
 452311   39.222   0.000   41.883   0.000  /usr/lib64/python2.6/StringIO.py:208(write)
   7299   38.946   0.005   38.946   0.005  {built-in method acquire}
StringIO should be easily replaced with cStringIO (assuming that we write
only byte arrays - no unicode).
You are right. I will investigate this.
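The swap I have in mind is the usual import fallback; as you note, this only
holds as long as we write byte strings, not unicode:

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO

buf = StringIO()
buf.write("<domain>...</domain>")    # byte strings only with cStringIO
data = buf.getvalue()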
acquire - probably will be hard to fix - showing callers may be
interesting.
Noted. Will investigate more in future rounds.
> Looking at the number of calls, it is clear that you are not
> comparing the same thing:
You are right. Good catch!
This one somehow slipped past me. Sorry for the noise and for not having
noticed this before.
Will also improve the aggregation/verification of the profile data so this
does not happen again.
Note: 157 calls to libvirtmod.virDomainCreateXML in the first profile vs 96
in the second. 700 calls to virDomainGetXMLDesc in the first vs 384 in
the second.
You should start the profiler before you start the test, and stop it when
the test is done. The approach of profiling one function run concurrently
on different threads and aggregating the profiles seems to give wrong
results.
I would use the profiler to find the hotspots and compare the performance
by timing the whole operation without the profiler.
Good point. I just need to find the best tool for that.
You should do multiple runs (at least 10) and report the average and variance
for the whole operation.
This is already in the works. I want to fully automate the benchmarks so I can
run more of them and get more reliable results.
This is what I'm brewing:
https://gist.github.com/mojaves/9641520
(full repo is coming soon)
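As a rough sketch of what the driver will report, timing the whole operation
without the profiler (run_mass_vm_start() is a placeholder for the actual
test code):

import time

def bench(runs=10):
    samples = []
    for _ in range(runs):
        start = time.time()
        run_mass_vm_start()          # hypothetical: start the 32 VMs, wait for 'up'
        samples.append(time.time() - start)
    mean = sum(samples) / len(samples)
    variance = sum((s - mean) ** 2 for s in samples) / len(samples)
    print "runs=%d mean=%.2fs variance=%.4f" % (len(samples), mean, variance)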
I'm adding another simple metric for the comparison: the total elapsed time a
VM takes to be reported as 'up' from the moment the 'create' request is issued;
this is the same metric I used for the graphs I presented at the gathering.
But this will be fully automated and reported by the benchmark script
(previously I added data traces to the logs and parsed the log files).
Let me summarize the next steps (not in strict priority order):
* find a reliable application-wide profiling approach. Profile the whole VDSM,
  not just the specific function (_startUnderlyingVm now, something else in
  future runs)
* make the profiling more consistent and reliable;
  do more runs (at least 10, the more the better); add the variance?
* limit the profiling reports to the hotspots (the 32 most expensive calls)
* show the callers to learn more about the wait calls
* investigate where and why we spend time in StringIO
* points already open:
- re-verify the impact of cPickle, our BoundedSemaphore(s), XML processing
Please feel free to point out anything I am missing.
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani