Modeling graphics framebuffer device in VDSM
by fkobzik@redhat.com
Dear VDSM devels,
I've been working on refactoring graphics devices in engine and VDSM for some
time now and I'd like to know your opinion of it.
The aim of this refactoring is to model the graphics framebuffer (SPICE, VNC) as a
device in the engine and VDSM. This is quite natural since libvirt treats
graphics as a device and we have a device infrastructure in both
projects. Another advantage (and actually the main reason for the refactoring) is
simplified support for multiple graphics framebuffers on a single VM.
Currently, passing information about graphics from engine to VDSM is done via
'display' param in conf. In the other direction VDSM informs the engine about
graphics parameters ('displayPort', 'displaySecurePort', 'displayIp' and
'displayNetwork') in conf as well.
What I'd like to achieve is to encapsulate all this information in the specParams
of the new graphics device and use specParams as the place for transferring data
about the graphics device between engine and VDSM. What do you think?
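To make this concrete, here is a rough sketch of what such a device entry could
look like in the VM conf. The field names follow the parameters mentioned above;
the exact structure is only an assumption and is defined by the draft patch below:

    # Illustrative only -- the actual layout is whatever the draft patch settles on.
    graphics_device = {
        'type': 'graphics',
        'device': 'spice',                   # or 'vnc'; one entry per framebuffer
        'specParams': {
            'displayNetwork': 'ovirtmgmt',   # illustrative value
            'displayIp': '0',                # illustrative value
            'displayPort': '-1',             # reported back by VDSM once libvirt assigns it
            'displaySecurePort': '-1',
        },
    }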
The draft patch is here:
http://gerrit.ovirt.org/#/c/23555/ (it's currently marked with '-1', but it sheds
some light on what the solution looks like, so feel free to take a look).
Thanks,
Franta.
10 years, 2 months
Re: [vdsm] FW: Fwd: Question about MOM
by alitke@redhat.com
On 26/03/14 03:50 -0700, Chegu Vinod wrote:
><removing the email alias>
Restoring the email alias. Please keep discussions as public as
possible to allow others to contribute to the design and planning.
>
>Jason.
>
>Please see below...
>
>
>On 3/26/2014 1:38 AM, Liao, Chuan (Jason Liao, HPservers-Core-OE-PSC) wrote:
>>Hi All,
>>
>>Follow below discussion. I got these points:
>>1. MOM gathering NUMA information(topology, statistics...) will changed in future. (one side using VDSM API, another side using libvirt and system API)
>
>I didn't follow your sentence..
>
>Pl.. work with Adam/Martin and provide the needful API's on the VDSM
>side ...so that MOM entity thread can use the API and extract the
>needful about NUMA topology and cpu/memory usage info. As I see
>it...this is probably the only piece that would be relevant to be made
>available at the earliest (preferably in oVirt 3.5) and that would
>enable MOM to pursue next steps as they say fit.
>
>Beyond that ...at this point (for oVirt 3.5) let us not spend more
>time on MOM internals please. Let us leave that to Adam and Martin to
>pursue this as/when they see fit.
>
>>2. Martin and Adam will take a look at MOM policy in ovirt scheduler when NUMA feature turn on.
>Yes please.
>>3. ovirt engine will have numa-aware placement algorithm to make the VM run within NUMA nodes as best way.
>
>"algorithm" here is decided by user specified pinning requests
>(and/or) by the oVirt scheduler. In the case of user request (upon
>approval from oVirt scheduler) the VDSM-> libvirt will be explicitly
>told what to do via numatune/cputune etc etc. In the absence of the
>user specified pinning request I don't know if oVirt scheduler intends
>to convey the numatune/cputune type of requests to the libvirt...
>
>>4. ovirt engine will have some algorithm to automatic configure virtual NUMA when big VM creation (big memory or vcpus)
>
>This is a good suggestion but in my view should be taken up after
>oVirt 3.5.
>For now just accept and process the user specified requests...
>>5. Investigate on KSM, memory ballooning have the right tuning parameter when NUMA feature turn on.
>That is for Adam/Martin et.al. ...not for your specific project.
>
>We just need to ensure that they have the basic NUMA info, they need
>(via the VDSM API i mentioned above)...so that it enables them to work
>on their part independently as/when they see fit.
>
>>6. Investigate on if Automatic NUMA balancing is keeping the process reasonably balanced and notify ovirt engine.
>Not sure I follow what you are saying...
>
>Here is what I have in my mind :
>
>Check if the target host has Automatic NUMA balancing enabled (you can
>use the sysctl -a |grep numa_balancing or a similar underlying
>mechanism for determining this). If its present then check if its
>enabled or not (value of 1 is enabled and 0 is disabled)... and convey
>this information to the oVirt engine GUI for display (this is a hint
>for a user (if they wish) to skip manual pinning).. This in my view
>is the minimum...at this point (and it would be great if we can make
>it happen for oVirt 3.5).
I think that since we have vdsm we can choose to always enable autonuma
(when it is present). Are there any drawbacks to enabling it always?
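A minimal sketch of the host-side check described above, assuming the standard
procfs path for the sysctl (how VDSM would expose the value to the engine is left
open):

    def automatic_numa_balancing():
        """Return True/False when the tunable exists, None on kernels without it."""
        try:
            with open('/proc/sys/kernel/numa_balancing') as f:
                return f.read().strip() == '1'
        except IOError:
            return None  # kernel built without automatic NUMA balancing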
>
>We can discuss (at some later point i.e for post oVirt 3.5) whether we
>should really provide a way to the user to disable Automatic NUMA
>balancing. Changing the other numa balancing tunables is just not
>going to happen...as far as I can see at this point (so let us not
>worry about that right now..)
>
>
>>7. Investigate on libvirt have any NUMA tuning APIs
>No. There is nothing to investigate here..
>
>IMO. libvirt should not be playing with the host wide NUMA settings.
>
>
>
>
>>
>>Please feel free to correct me if I am missing something.
>
>See above
>>
>>BTW. I think there is no point in ovirt 3.5 release, am I right?
>
>If you are referring to just the MOM stuff then with the exception of
>my comment about having an appropriate API on the VDSM for enabling
>MOM there is nothing else.
>
>Vinod
>
>>
>>Best Regards,
>>Jason Liao
>>
>>-----Original Message-----
>>From: Vinod, Chegu
>>Sent: March 21, 2014 21:32
>>To: Adam Litke
>>Cc: Liao, Chuan (Jason Liao, HPservers-Core-OE-PSC); vdsm-devel; Martin Sivak; Gilad Chaplik; Liang, Shang-Chun (David Liang, HPservers-Core-OE-PSC); Shi, Xiao-Lei (Bruce, HP Servers-PSC-CQ); Doron Fediuck
>>Subject: Re: FW: Fwd: Question about MOM
>>
>>On 3/21/2014 6:13 AM, Adam Litke wrote:
>>>On 20/03/14 18:03 -0700, Chegu Vinod wrote:
>>>>On 3/19/2014 11:01 PM, Liao, Chuan (Jason Liao,
>>>>HPservers-Core-OE-PSC) wrote:
>>>>>Add Vinod in this thread.
>>>>>
>>>>>Best Regards, Jason Liao
>>>>>
>>>>>-----Original Message----- From: Adam Litke
>>>>>[mailto:alitke@redhat.com] Sent: March 19, 2014 21:23 To: Doron Fediuck
>>>>>Cc: vdsm-devel; Liao, Chuan (Jason Liao, HPservers-Core-OE-PSC);
>>>>>Martin Sivak; Gilad Chaplik; Liang, Shang-Chun (David Liang,
>>>>>HPservers-Core-OE-PSC); Shi, Xiao-Lei (Bruce, HP Servers-PSC-CQ)
>>>>>Subject: Re: Fwd: Question about MOM
>>>>>
>>>>>On 19/03/14 05:50 -0400, Doron Fediuck wrote:
>>>>>>Moving this to the vdsm list.
>>>>>>
>>>>>>----- Forwarded Message ----- From: "Chuan Liao (Jason Liao,
>>>>>>HPservers-Core-OE-PSC)" <chuan.liao(a)hp.com> To: "Martin Sivak"
>>>>>><msivak(a)redhat.com>, alitke(a)redhat.com, "Doron Fediuck"
>>>>>><dfediuck(a)redhat.com>, "Gilad Chaplik" <gchaplik(a)redhat.com> Cc:
>>>>>>"Shang-Chun Liang (David Liang, HPservers-Core-OE-PSC)"
>>>>>><shangchun.liang(a)hp.com>, "Xiao-Lei Shi (Bruce, HP Servers-PSC-CQ)"
>>>>>><xiao-lei.shi(a)hp.com> Sent: Wednesday, March 19,
>>>>>>2014 11:28:01 AM Subject: Question about MOM
>>>>>>
>>>>>>Hi All,
>>>>>>
>>>>>>I am a new with MOM feature.
>>>>>>
>>>>>>In my understanding, MOM is the collector both from host and guest
>>>>>>and set the right policy to KSM and memory ballooning get better
>>>>>>performance.
>>>>>Yes this is correct. In oVirt, MOM runs as another vdsm thread and
>>>>>uses the vdsm API to collect host and guest statistics. Those
>>>>>statistics are fed into a policy file which can create some outputs
>>>>>(such as ksm tuning parameters and guest balloon sizes). MOM then
>>>>>uses the vdsm API to apply those outputs to the system.
>>>>
>>>>Ok..Understood about the statistics gathering part and then
>>>>initiating policy driven inputs for the ksm and balloning on the host
>>>>etc.
>>>>
>>>>Perhaps this was already discussed earlier ? Does the MOM thread in
>>>>vdsm intend to gather the NUMA topology of the host from the VDSM
>>>>(using some new TBD or some enhanced existing API) or does it intend
>>>>to collect this directly from the host using libvirt/libnuma etc ?
>>>When MOM is using the VDSM HypervisorInterface, it must get all of its
>>>information from vdsm. It is considered an API layering violation for
>>>MOM to access the system or libvirt connection directly. When running
>>>with the Libvirt HypervisorInterface, it should use libvirt and the
>>>system directly as necessary. Your new features should consider this
>>>and make use of the HypervisorInterface abstraction to provide both
>>>implementations.
>>>
>>Thanks for clarifying. (please include your comment about this in Jason's design document that you may have seen)
>>
>>>>>>I am not sure how it has relationship with NUMA, does anyone can
>>>>>>explain it to me?
>>>>Jason, Here is my understanding (and I believe I am just
>>>>paraphrasing/echoing Adam's comments ).
>>>>
>>>>MOM's NUMA related enhancements are independent of what the oVirt
>>>>UI/oVirt scheduler does.
>>>>
>>>>It is likely that MOM's vdsm thread may choose to extract information
>>>>about NUMA topology (includes dynamic stuff like cpu usage or free
>>>>memory) from the VDSM (i.e. if they choose to not get it directly
>>>>from libvirt/libnuma or /proc etc).
>>>>
>>>>How MOM interprets that NUMA information along with other statistics
>>>>that it gathers (along side with user requested SLA requirements for
>>>>each guest etc) should be left to MOM to decide and direct
>>>>KSM/ballooning related actions. I don't believe we need to intervene
>>>>in the MOM related internals.
>>>Once we decide to have NUMA-aware MOM policies there will need to be
>>>some infrastructure enhancements to enable it. I think Martin and I
>>>will take the lead on it since we have been thinking about these kinds
>>>of issues for some time now.
>>Ok.
>>
>>>>>I guess we need to start by examining the currently planned use
>>>>>cases. Please feel free to correct me if I am missing something or
>>>>>over-simplifying something: 1) NUMA-aware placement - Try to
>>>>>schedule VMs to run on hosts where the guest will not have to span
>>>>>multiple NUMA nodes.
>>>>I guess you are referring to the case where the user (and/or the
>>>>oVirt scheduler) has not explicitly directed libvirt on the host to
>>>>schedule the VM in some specific way... In those cases the decision
>>>>is left to the smarts of the host OS scheduler to take care of it
>>>>(that includes the future/smarter Automatic NUMA balancing enabled
>>>>scheduler).
>>>Yes. For this one, we need a numa-aware placement algorithm on
>>>engine, and the autonuma feature available and configured on all virt
>>>hosts. In the first phase I don't anticipate any changes to MOM
>>>internals. I would prefer to observe the performance characteristics
>>>of this and tweak MOM in the future to address actual performance
>>>problems we see.
>>Ok.
>>
>>>>> 2) Virtual NUMA topology - Emulate a NUMA topology inside the VM.
>>>>Yes. Irrespective of any NUMA specified for the backing resources of
>>>>a guest...when the guest size increases it is a "required" practice
>>>>to have virtual NUMA topology enabled. (This helps the OS running
>>>>inside the guest to scale/perform much by making NUMA aware decisions
>>>>etc. Also it helps the applications running in the OS to
>>>>scale/perform better).
>>>Agreed. One point I might make then... Should the VM creation process
>>>on engine automatically configure virtual NUMA (even if the user
>>>doesn't select it) once a guest reaches a certain memory size?
>>
>>Good point. and yes we have thought about it a little bit... (btw, Its not just the memory size but the # vcpus too ).
>>Perhaps mimic the host topology etc..but there could be some issues...so we wanted to defer this for a future oVirt version. (BTW, We are aware of at least one other competing hypervisor management tool that does this automatically)
>>
>>>>>These two use cases are intertwined because VMs with NUMA can be
>>>>>scheduled with more flexibility (albeit with more sophistication)
>>>>>since the scheduler can fit the VM onto hosts where the memory can
>>>>>be split across multiple Host NUMA nodes.
>>>>>
>>>>> 3) Manual NUMA pinning - Allow advanced admins to schedule a VM
>>>>> to run on a specific host with a manual pinning strategy.
>>>>Yes
>>>>
>>>>>Most of these use cases involve the engine scheduler and engine UI.
>>>>Correct.
>>>>
>>>>>There is not much for MOM to do to support their direct
>>>>>implementation. We should focus on managing interactions with other
>>>>>SLA features that MOM does implement: - How should KSM be adjusted
>>>>>when NUMA is in effect? In a NUMA host, are there numa-aware KSM
>>>>>tunables that we should use? - When ballooning VMs, should we take
>>>>>into account how much memory we need to reclaim from VMs on a node
>>>>>by node basis?
>>>>If MOM had the NUMA topology information of the host I believe it
>>>>should be able to determine where the guest related processes are
>>>>currently running on the host (irrespective of how those guests ended
>>>>up there etc). MOM can then use all the relevant information (NUMA
>>>>topology, statistics, SLAs etc etc). to decide and direct KSM and
>>>>ballooning in a NUMA friendly way...
>>>Yes, exactly. For example, only run ksm on nodes where there is
>>>memory pressure and only balloon guests whose memory resides on nodes
>>>with a memory shortage.
>>That's correct..
>>
>>>>>Lastly, let's see if MOM needs to manage the existing NUMA utilities
>>>>>in place on the system. I don't know much about AutoNUMA. Does it
>>>>>have tunables that should be adjusted or is it completely
>>>>>autonomous?
>>>>For the most part its automated (that's the whole point of being
>>>>Automatic...although the technology will mature in phases :)) ...but
>>>>if someone really really needs it to be disabled the can do so.
>>>>
>>>>There are certainly some NUMA related tunables in the kernel today
>>>>(as shown below)....but at this point I am not very sure about the
>>>>specific scenarios where one would really need to change these
>>>>default settings. (As we do more studies of various use cases on
>>>>different platforms and workload sizes etc there may be a need...but
>>>>at this point I don't see MOM necessarily getting involved in these
>>>>settings. Does MOM change other kernel tunables today ? ).
>>>>
>>>>
>>>># sysctl -a | grep numa
>>>>kernel.numa_balancing = 1
>>>>kernel.numa_balancing_scan_delay_ms = 1000
>>>>kernel.numa_balancing_scan_period_max_ms = 60000
>>>>kernel.numa_balancing_scan_period_min_ms = 1000
>>>>kernel.numa_balancing_scan_size_mb = 256
>>>>kernel.numa_balancing_settle_count = 4
>>>>vm.numa_zonelist_order = default
>>>These remind me of the KSM tunables. Maybe some day we will be clever
>>>enough to tune them but you're right, it should not be our first
>>>priority. One idea I have for MOM is that it could check up on
>>>autonuma by checking /proc/<pid>/numa_maps for each qemu process on
>>>the host and seeing if autonuma is keeping the process reasonably
>>>balanced. If not, we could actually raise an alarm so that
>>>vdsm/engine would try and migrate a VM away from this host if
>>>possible. Once that is done, autonuma might be able to make better
>>>progress. This is really just a research level idea at the moment.
>>Ok. I agree that this can be deferred to a later phase (based on further
>>investigation)
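As an illustration of the research idea above, a minimal sketch that samples the
per-node page distribution of each qemu process from /proc/<pid>/numa_maps; what
counts as "reasonably balanced" and how an alarm would be raised are left open,
as in the thread:

    import os
    import re

    def numa_pages_per_node(pid):
        # Sum the N<node>=<pages> tokens that appear on each numa_maps line.
        counts = {}
        with open('/proc/%d/numa_maps' % pid) as f:
            for line in f:
                for node, pages in re.findall(r'N(\d+)=(\d+)', line):
                    counts[int(node)] = counts.get(int(node), 0) + int(pages)
        return counts

    def qemu_pids():
        for entry in os.listdir('/proc'):
            if entry.isdigit():
                try:
                    with open('/proc/%s/comm' % entry) as f:
                        if f.read().strip().startswith('qemu'):
                            yield int(entry)
                except IOError:
                    pass  # process exited while we were scanning

    for pid in qemu_pids():
        print(pid, numa_pages_per_node(pid))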
>>>>>Does libvirt have any NUMA tuning APIs that MOM may want to call to
>>>>>enhance performance in certain situations?
>>>>I am no expert on libvirt's philosophy/goals etc. and have always
>>>>viewed libvirt as providing APIs for provisioning/controlling the
>>>>individual guests either on the local or in some cases remote
>>>>hosts....but not changing the host wide parameters/tunables itself. I
>>>>shall let libvirt experts comment if that is not the case...
>>>>
>>>>If we do identify valid use cases where NUMA related tunables need to
>>>>be changed then MOM can use mechanisms similar to sysctl etc. to
>>>>change them... but I am yet to envision such a scenario (beyond the
>>>>rare use cases where oVirt upon user request may choose to entirely
>>>>disable automatic NUMA balancing feature on a given host)
>>>>
>>>>Hope that makes some sense... Thanks Vinod
>>>Fair enough. You're right that it doesn't want to handle policy, but
>>>in some cases it provides APIs that allow a management system to tune
>>>things. For example: CPU pinning, IO/Net throttling, CPU shares,
>>>balloon.
>>Yes... however the above examples are still falling in the category of
>>managing guests
>>and not the host itself :) But I get your point...
>>
>>Thanks
>>Vinod
>>
>>
>>>>>One of the main questions I ask when trying to decide if MOM should
>>>>>manage a particular setting is: "Is this something that is set once
>>>>>and stays the same or is it something that must change dynamically
>>>>>in accordance with current system conditions?" In the former case,
>>>>>it is probably best managed by engine or vdsm directly. In the
>>>>>latter case, it fits the MOM model.
>>>>>
>>>>>Hope this was helpful! Please feel free to continue engaging this
>>>>>list with any additional questions that you might have.
>>>>>
>>>>>>On engine side, there is only one button with this feature: Sync
>>>>>>MoM Policy, right?
>>>>>>
>>>>>>On vdsm side, I saw the momIF is working for this, right?
>>>>>>
>>>>>>Best Regards, Jason Liao
>>>>>>
>>>>>-- Adam Litke
>>>>>
>>>>>[Jason] +Martin's part Hi,
>>>>>
>>>>>>In my understanding, MOM is the collector both from host and guest
>>>>>>and set the right policy to KSM and memory ballooning get better
>>>>>>performance.
>>>>>Correct. MoM controls the Guest memory allocations using KSM and
>>>>>ballooning and allows overcommitment to work this way. It does not
>>>>>really set the policy though; it contains the policy and uses it to
>>>>>dynamically update the memory space available for VMs.
>>>>>
>>>>>>I am not sure how it has relationship with NUMA, does anyone can
>>>>>>explain it to me?
>>>>>In theory MoM might be able to play with ballooning on per node
>>>>>basis.
>>>>>
>>>>>Without NUMA information it would free memory somewhere on the host,
>>>>>but that memory might be too slow to access because it won't be
>>>>>localized on nearby nodes.
>>>>>
>>>>>With NUMA information MoM will know which VMs can be ballooned so
>>>>>the newly released memory segments are a bit more closer to each
>>>>>other.
>>>>>
>>>>>>On engine side, there is only one button with this feature: Sync
>>>>>>MoM Policy, right?
>>>>>There is also Balloon device checkbox in the Edit VM dialog and
>>>>>Enable ballooning on the Edit Cluster dialog.
>>>>>
>>>>>>On vdsm side, I saw the momIF is working for this, right?
>>>>>Yes, momIF is responsible for the MoM specific communication and for
>>>>>creating the policy file with parameters.
>>>>>
>>>>>MoM also uses standard VDSM APIs to get other information and you
>>>>>can see that in MoM's source code in hypervisor_interfaces/vdsm
>>>>>(that interface is then used by collectors).
>>>>>
>>>>>Regards
>>>>>
>>>>>-- Martin Sivak msivak(a)redhat.com
>
--
Adam Litke
10 years, 2 months
Adding whole application profile
by Nir Soffer
Hi all,
Please review this patch:
http://gerrit.ovirt.org/26113
The short term goal of this patch is to allow debugging of this nasty bug:
https://bugzilla.redhat.com/1074097
The long term goal is having an easy way to profile vdsm in the field and
during development, an initiative started by Francesco.
Here are some docs to help you get started with the profiler.
Installing yappi
----------------
To try this patch, you must install yappi:
1. wget http://yappi.googlecode.com/files/yappi-0.82.tar.gz
2. tar xzf yappi-0.82.tar.gz
3. cd yappi-0.82
4. sudo python setup.py install
If you want to build an rpm, the quickest way is:
1. Fix MANIFEST.in so it looks like this:
include *.h
include ez_setup.py
2. Build rpm
$ python setup.py bdist_rpm
Your rpm is located in dist.
I built rpm packages for fedora and rhel (looking for a place to share them)
Using the profiler
------------------
Enable the profiler in vdsm.conf:
[vars]
profile_enable = true
If profiling is enabled, profiling starts early in vdsm startup
and stops when vdsm receives a termination signal. The profile is saved
to /tmp/vdsmd.prof.
IMPORTANT: After you are done, disable profiling.
To explore the profile, open it in the pstats command shell:
$ python -m pstats /tmp/vdsmd.prof
To view top 30 expensive functions:
% sort time
% stats 30
To view top 30 functions including time spent in called functions:
% sort cumulative
% stats 30
To view callers (who is calling foo() 300,000 times?)
% callers 30
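The same exploration can also be scripted with the standard pstats module, for
example:

    import pstats

    p = pstats.Stats('/tmp/vdsmd.prof')
    p.strip_dirs().sort_stats('cumulative').print_stats(30)  # top 30 by cumulative time
    p.print_callers(30)                                      # who calls the hot functions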
Enjoy,
Nir
10 years, 2 months
ovirt-guest-agent on debian
by Sven Kieske
Hi,
I just did some additional research, here are my results:
on a clean debian 7 x64 VM inside oVirt:
I did as root:
apt-get install python-software-properties
add-apt-repository -y ppa:zhshzhou/vdsm-ubuntu
vim /etc/apt/sources.list.d/zhshzhou-vdsm-ubuntu-wheezy.list
change the following lines from:
deb http://ppa.launchpad.net/zhshzhou/vdsm-ubuntu/ubuntu wheezy main
deb-src http://ppa.launchpad.net/zhshzhou/vdsm-ubuntu/ubuntu wheezy main
to:
deb http://ppa.launchpad.net/zhshzhou/vdsm-ubuntu/ubuntu precise main
deb-src http://ppa.launchpad.net/zhshzhou/vdsm-ubuntu/ubuntu precise main
then:
apt-get update
apt-get install ovirt-guest-agent
You get an error:
Setting up ovirt-guest-agent (1.0.8.201309301944.gitb7f8f2-1ppa1) ...
chown: cannot access `/var/log/ovirt-guest-agent/ovirt-guest-agent.log':
No such file or directory
but it works:
ls -lashi /var/log/ovirt-guest-agent/ovirt-guest-agent.log
533212 4.0K -rw-r--r-- 1 ovirtagent ovirtagent 208 Mar 10 10:20
/var/log/ovirt-guest-agent/ovirt-guest-agent.log
service ovirt-guest-agent status
[ ok ] ovirt-guest-agent is running.
IP information shows up in ovirt.
It would be cool if the folder "precise" on the PPA could
be cloned as a folder "wheezy",
so you would not need to change the apt sources entries.
I don't know why this chown error occurs, but it seems to work!
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
10 years, 2 months
thread pool implementation
by fromani@redhat.com
Hello,
in order to reduce the number of sampling threads, we'd like to move from one thread per VM
to a thread pool.
The strongest requirement we have is to be able to detect if a worker thread is not responding,
and if so to detach it from the pool and kill it as soon as possible; then a new worker should
be made available.
This is because in sampling we are going to call libvirt and libvirt calls can block or, even worse,
get stuck (I'm looking at you virDomainGetBlockInfo -
http://libvirt.org/html/libvirt-libvirt.html#virDomainGetBlockInfo )
So, we need a thread pool implementation :)
What is the best way forward? I see a few options:
* we have a thread pool already in storage. Should we move it outside storage to lib/ and extend it?
* there is a thread pool hidden inside the multiprocessing module!
(see http://docs.python.org/2/library/multiprocessing.html#module-multiprocess...)
should we switch to this, at least for sampling?
* Python 3.2+ has concurrent.futures which has a nice API and can use a thread pool executor.
See http://docs.python.org/3.3/library/concurrent.futures.html#module-concurr...
There is a backport for python 2.6/2.7 also:
https://pypi.python.org/pypi/futures
Maybe this is the most forward compatible way?
* Add an(other) thread pool?
I don't really have any preference, provided the requirement above is satisfied.
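For concreteness, a minimal sketch of how the stuck-worker requirement could look
with concurrent.futures (assuming the futures backport on Python 2.6/2.7). Note
that a timed-out worker thread cannot actually be killed from Python, so detection
only lets us stop waiting and flag the VM:

    from concurrent import futures

    SAMPLE_TIMEOUT = 30.0  # seconds, illustrative value

    def sample_vm(vm_id):
        # placeholder for a libvirt sampling call that may block or get stuck
        return vm_id

    executor = futures.ThreadPoolExecutor(max_workers=8)
    pending = dict((executor.submit(sample_vm, vm_id), vm_id)
                   for vm_id in range(32))

    for future, vm_id in pending.items():
        try:
            result = future.result(timeout=SAMPLE_TIMEOUT)
        except futures.TimeoutError:
            # Worker is stuck (e.g. in virDomainGetBlockInfo): stop waiting,
            # mark the VM unresponsive and avoid queuing more work for it.
            print('vm %s: sampling timed out' % vm_id)
        else:
            print('vm %s: %r' % (vm_id, result))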
Thoughts? Especially Infra people's feedback would be appreciated.
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
10 years, 2 months
VDSM benchmarking and profiling, round 2
by fromani@redhat.com
Hello everyone,
it took a bit longer than expected, for reasons I am going to explain, but we have some
more results and better tooling.
Points open from the last round (not in strict priority order):
---------------------------------------------------------------
1. find a reliable application-wide profiling approach. Profile the whole VDSM,
not just the specific function (_startUnderlyingVm now, something else in future runs)
2. make the profiling more consistent and reliable;
do more runs (at least 10, the more the better); add the variance?
3. limit the profiling reports to the hotspots (the 32 most expensive calls)
4. show the callers to learn more about the wait calls
5. investigate where and why we spend time in StringIO
6. re-verify the impact of cPickle vs Pickle
7. benchmark BoundedSemaphore(s)
8. benchmark XML processing
9. add more scenarios, and more configurations in those scenarios
Quick summary of this round:
----------------------------
- added more user-visible metrics: time for a VM to come 'Up' from the time it is started
- benchmarking script automated and starting to be trustworthy
- result data available as CSV files
- existing patches on gerrit deliver improvement (6-7% for cPickle, ~10% for xml caching)
The testing scenario is still the same as in the previous round (the next step is to add more of them).
Please continue reading for more details.
Application-wide profiling
---------------------------
>From a python prospective, looks like yappi is still the best shot. The selling point of yappi
is it is designed to be low-overhead (well they claim so) and to work nicely with long-running
multi-threaded daemons, like VDSM.
We have a nice patchset on gerrit to integrate yappi on VDSM, courtesy of Nir Soffer
(http://gerrit.ovirt.org/26113).
With this one we should be able to capture VDSM-wide profiles more easily. I am going to integrate
my benchmark script(s) with it (see below)
It is not clear if yappi (or any Python profiler) can help us understand properly and deeply enough
how threads interact (or misbehave) with each other and with the GIL, and where we waste time.
Exploring a system-wide profiler, like sysprof or oprofile, may be a useful next step.
Improvements in profile/results collection
------------------------------------------
In this round I focused on working on the hotspots we found in the previous round and on improving
the benchmark tool to make it more reliable.
The scenario is now run 32 times, the results averaged.
We define the startup time as T_up - T_start, where
T_start: time of the submission of the create command to the engine through the REST API
T_up: time at which the VM is reported as UP by the engine
The purpose is to model what a user will see in a real scenario.
Results: (find the scripts and the raw CSV data here: https://github.com/mojaves/ovirt-tools/tree/master/benchmark
lacking a better place)
Considering the test results of March 24:
baseline data: vanilla VDSM
$ lsres.py 20140324/*/*.csv
20140324/baseline/bench_20140324_102249.csv:
mean: 33.037s sd=2.133s (6.5%)
best: 14.181s sd=1.880s (13.3%)
worst: 50.356s sd=2.713s (5.4%)
total: 1057.188s sd=68.249s (6.5%)
sd is the standard deviation considering one sample per run (32 in this case);
we consider one sample per run of the following:
mean: mean of the startup times per run
best: the best startup time per run (fastest VM)
worst: the worst startup time per run (slowest VM)
total: sum of all the startup times, per run
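For reference, a minimal sketch of how such per-run aggregates could be computed
from the CSV files; the column names are assumptions, and the actual lsres.py in
the repository above may differ:

    import csv
    import math

    def mean_and_sd(values):
        m = sum(values) / len(values)
        sd = math.sqrt(sum((v - m) ** 2 for v in values) / len(values))
        return m, sd

    def summarize(path, column='total'):
        # assumed layout: one row per run, 'column' holding that run's metric
        with open(path) as f:
            values = [float(row[column]) for row in csv.DictReader(f)]
        m, sd = mean_and_sd(values)
        print('%s: %.3fs sd=%.3fs (%.1f%%)' % (column, m, sd, 100.0 * sd / m))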
Now let's consider the impact of the performance patches
Applying the cPickle patch: http://gerrit.ovirt.org/#/c/25860/
20140324/cpickle/bench_20140324_115215.csv:
mean: 30.645s sd=2.422s (7.9%)
best: 13.048s sd=4.302s (33.0%)
worst: 46.404s sd=2.114s (4.6%)
total: 980.655s sd=77.510s (7.9%)
Improvement is
* negligible for the best case
* roughly 10% for the mean
* roughly 8% for the worst
* roughly the 7% for the total
On top of cPickle, we add XML caching: http://gerrit.ovirt.org/#/c/17694/
20140324/xmlcache/bench_20140324_125232.csv:
mean: 27.630s sd=1.242s (4.5%)
best: 11.320s sd=1.224s (10.8%)
worst: 41.554s sd=1.873s (4.5%)
total: 884.155s sd=39.745s (4.5%)
Improvement is
* roughly 9% for the mean
* roughly 15% for the best
* roughly 11% for the worst
* roughly 10% for the total
Given the fact that both patches are beneficial on all the flows/possible scenarios
because they affect the most basic creation flow, I think we have some tangible benefits
here.
During the benchmarks, I was quite concerned about the reliability and repeatability of
those tests, so I ran them over and over again (that is one of the reasons it took longer than
expected).
In particular, I ran some benchmarks again on March 25 (i.e. yesterday) with these results:
20140325/baseline/bench_20140325_180500.csv:
mean: 27.984s sd=1.074s (3.8%)
best: 10.507s sd=1.604s (15.3%)
worst: 42.711s sd=1.996s (4.7%)
total: 895.479s sd=34.375s (3.8%)
20140325/cpickle/bench_20140325_185941.csv:
mean: 26.423s sd=1.413s (5.3%)
best: 9.785s sd=1.669s (17.1%)
worst: 40.833s sd=2.452s (6.0%)
total: 845.523s sd=45.218s (5.3%)
We can easily see the absolute values are better (the baseline is close to the XML patch!).
The main change is that I rebased VDSM against yesterday's master, but given that no
performance patches have been merged (at least none I am aware of after the namedtuple fix),
I think there are external factors in play.
When I run benchmarks, I let the hypervisor and the engine hosts do just benchmarking, but still
there are many factors in play that can explain the variance (for example, both hosts
are NOT specifically tuned for benchmarking, so daemons are running in the background and so on).
What matters most, I think, is that the gain from the cPickle patch is still here:
* roughly 5% for the mean
* roughly 7% for the best case
* roughly 5% for the worst case
* roughly 6% for the total
So I think we can move to the next steps:
* add more scenarios
* add more test cases inside scenarios
* add more metrics? (maybe part of test cases)
* integrate profile collection with benchmarking
Suggestions and comments are welcome
Thanks,
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
10 years, 2 months
Fwd: Question about MOM
by dfediuck@redhat.com
Moving this to the vdsm list.
----- Forwarded Message -----
From: "Chuan Liao (Jason Liao, HPservers-Core-OE-PSC)" <chuan.liao(a)hp.com>
To: "Martin Sivak" <msivak(a)redhat.com>, alitke(a)redhat.com, "Doron Fediuck" <dfediuck(a)redhat.com>, "Gilad Chaplik" <gchaplik(a)redhat.com>
Cc: "Shang-Chun Liang (David Liang, HPservers-Core-OE-PSC)" <shangchun.liang(a)hp.com>, "Xiao-Lei Shi (Bruce, HP Servers-PSC-CQ)" <xiao-lei.shi(a)hp.com>
Sent: Wednesday, March 19, 2014 11:28:01 AM
Subject: Question about MOM
Hi All,
I am new to the MOM feature.
In my understanding, MOM collects data from both host and guest and sets the right policy for KSM and memory ballooning to get better performance.
I am not sure how it relates to NUMA; can anyone explain it to me?
On engine side, there is only one button with this feature: Sync MoM Policy, right?
On vdsm side, I saw the momIF is working for this, right?
Best Regards,
Jason Liao
10 years, 2 months
is ipv6 ready and how to use it?
by bigclouds
hi,all
Can we now use IPv6 like IPv4? Are the most basic use cases OK, like adding a node, setting up the node's IPv6 address, multiple gateways...?
I tried but failed, and I don't know why.
Where is the related code for IPv6, on both the engine and vdsm sides? Please guide me.
thanks
10 years, 2 months
VDSM profiling results, round 1
by fromani@redhat.com
Hi everyone
I'd like to share the first round of profiling results for VDSM and my next steps.
Summary:
- experimented a couple of profiling approaches and found a good one
- benchmarked http://gerrit.ovirt.org/#/c/25678/ : it is beneficial, was merged
- found a few low-hanging fruits which seems quite safe to merge and beneficial to *all* flows
- started engagement with infra (see other thread) to have common and polished performance
tools
- test roadmap is shaping up, wiki/ML will be updated in the coming days
Please read through for a more detailed discussion. Every comment is welcome.
Disclaimer:
long mail, a lot of content; please point out if something is missing, not clear enough,
or if it deserves more discussion.
+++
== First round results ==
The first round of profiling was a follow-up of what I showed during the VDSM gathering.
The results file contains a full profile ordered by descending time.
In a nutshell: parallel start of 32 tiny VMs using the engine REST API and a single hypervisor host.
VMs are tiny just because I want to stuff as many VMs as I can into my mini-Dell (16 GB RAM, 4 cores + HT).
It is worth pointing out a few differences with respect to the *profile* (NOT the graphs)
I showed during the gathering:
- profile data is now collected using the profile decorator (see http://www.ovirt.org/Profiling_Vdsm)
just around Vm._startUnderlyingVm. The gathering profile was obtained using the yappi application-wide
profiler (see https://code.google.com/p/yappi/) and 40 VMs.
* why yappi?
I thought an application-wide profiler gathers more information and lets us have a better picture.
I actually still think that, but I faced some yappi misbehaviour which I want to fix later;
a function-level profile is so far easier to collect (just grab the data dumped to file).
* why 40 VMs?
I started with 64 but exhausted my storage backing store :)
I will add more storage space in the next days; for the moment I stepped back to 32.
It is worth noting that while on one hand the numbers change a bit (if you remember the old profile data
and the scary 80 secs wasted on namedtuple), on the other hand the suspects are the same and their
relative positions are roughly the same.
So I believe our initial findings (namedtuple patch) and the plan are still valid.
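For readers unfamiliar with the function-level approach, here is a minimal sketch
of a profiling decorator of the kind mentioned above (the actual helper described
on the Profiling_Vdsm wiki page may differ):

    import cProfile
    import functools
    import threading

    def profile(path_prefix='/tmp/prof'):
        """Profile each call of the decorated function and dump stats to a file."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                prof = cProfile.Profile()
                try:
                    return prof.runcall(func, *args, **kwargs)
                finally:
                    # one dump per thread so parallel VM starts do not clobber each other
                    prof.dump_stats('%s-%s.prof' % (path_prefix,
                                                    threading.current_thread().name))
            return wrapper
        return decorator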
== how it was done ==
I am still focusing just on the "monday morning" scenario (mass start of many VMs at the same time).
Each run consisted in a parallel start of 32 VMs as described in the result data.
VDSM was restarted between one run and the next.
engine was *NOT* restarted between runs.
Individual profiles were gathered after all the runs and the profile was extracted from their aggregation.
profile dumps are available to everyone, just drop me a note and I'll put the tarball somewhere.
please find attached the profile data in txt format. For easier consumption, they are also
available on pastebin:
baseline : http://paste.fedoraproject.org/86318/
namedtuple fix: http://paste.fedoraproject.org/86378/
pickle fix : http://paste.fedoraproject.org/86600/ (see below)
== hotspots ==
the baseline profile data highlights five major areas and hotspots:
1. internal concurrency (possible patch: http://gerrit.ovirt.org/#/c/25857/ - see below)
2. libvirt
3. XML processing (initial patch: http://gerrit.ovirt.org/#/c/17694/)
4. namedtuple (patch: http://gerrit.ovirt.org/#/c/25678/ - fixed, merged)
5. pickling (patch: http://gerrit.ovirt.org/#/c/25860/ - see below)
#4 is beneficial in the ISCSI path and it was already merged.
#1 shows some potential but it needs to be carefully evaluated to avoid performance regressions
on different scenarios (e.g. bigger machines than mine :))
#2 is basically outside of our control but it needs to be watched
#3 and #5 are beneficial for all flows and scenarios and are safe to merge.
#5 is almost a no-brainer IMO
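For context, the pickling fix (#5) essentially amounts to preferring the C
implementation of the module; the usual pattern looks like this (the actual patch
may differ):

    try:
        import cPickle as pickle  # C implementation, much faster on Python 2
    except ImportError:
        import pickle             # pure-Python fallback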
== Note about the third profile ==
When profiling the cPickle patch http://paste.fedoraproject.org/86600/
the tests actually turned out *slower* with respect to the second profile with just the namedtuple
patch.
The hotspots seems to be around concurrency and libvirt:
location profile2(s) profile3(s) diff(s)
pthread.py:129(wait) 1230.640 1377.992 +147.28 (BAD)
virDomainCreateXML 155.171 175.681 +20.51 (BAD)
'select.epoll' objects 52.523 53.635 +1.112 (negligible)
expatbuilder.py:743(start_element_handler) 28.172 33.975 +5.803 (BAD?)
virDomainGetXMLDesc 23.947 23.217 -0.73 (negligible)
I'm OK with some variance (it is expected), but this is also a warning sign to be extra careful
in tuning the concurrency patch (bullet point #1 above). We should definitely evaluate more scenarios
before merging it.
If we set aside those diffs, we see the cPickle patch has the (small) benefits we expect,
and I think it is 100% safe to merge. I already did some minimal extra verification just in case.
== Next steps ==
For the near term (the coming days/next weeks):
* benchmark the remaining easy fixes which are beneficial for all flows
and quite safe to merge (XML processing being first), and work to have them merged.
* polish scripts and benchmarking code, start submitting them to infra for review
* continue investigation about our (in)famous BoundedSempahore (http://gerrit.ovirt.org/#/c/25857/)
to see if dropping it has regressions or other bad effects
* find other test scenarios
I have also noted all the suggestions received so far and I am planning more test cases just for this scenario.
For example:
1. just start N QEMUs to obtain our lower bound (we cannot get faster than this)
2. run with different storage (NFS)
3. run with no storage
4. run with Guest OS installed on disks
And of course we need more scenarios.
Let me just repeat myself: those are just the first steps of a long journey.
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
10 years, 2 months