[Draft] Task Management API
by smizrahi@redhat.com
Dan rightly suggested that I be more specific about what the task system is
instead of what it isn't.
The problem is that I'm not completely sure how it's going to work.
It also depends on the events mechanism.
This is my current working draft:
TaskInfo:
    id         string
    methodName string
    kwargs     json-object (string keys, variant values) * filtered to remove
               sensitive information

getRunningTasks(filter string, filterType enum{glob, regexp})
    Returns a list of TaskInfo for all tasks whose IDs match the filter.
That's it, not even stopTask()
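To make the shape of the API concrete, here is a rough json-rpc exchange for
getRunningTasks; the sample ids, kwargs and values are made up for
illustration and are not part of the draft:

# Hypothetical request:
{"jsonrpc": "2.0",
 "id": "req-1",
 "method": "getRunningTasks",
 "params": {"filter": "storage.*", "filterType": "glob"}}

# A possible response, a list of TaskInfo:
{"jsonrpc": "2.0",
 "id": "req-1",
 "result": [{"id": "storage.copyImage.7f3a",
             "methodName": "copyImage",
             "kwargs": {"srcDomain": "sd1", "dstDomain": "sd2"}}]}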
As explained, I would like to offload task handling to the subsystems.
To make things easier for the clients, every subsystem can choose a field of
its objects to be of type OperationInfo.
This is a generic structure that gives the user a uniform way to track tasks
across all subsystems through a single reporting interface. The extraData
field is for subsystem-specific data. This is where the storage subsystem
would put, for example, imageState (broken, degraded, optimized) data.
OperationInfo:
    operationDescription string - a value from an agreed enum of strings
                                  vaguely describing the operation at hand, for
                                  example "Copying", "Merging", "Deleting",
                                  "Configuring", "Stopped", "Paused", ....
                                  They must be known to the client so it can in
                                  turn translate them in the UI. They also have
                                  to remain relatively vague, as they are part
                                  of the interface; new values would break old
                                  clients, so the values have to be reusable.
    stageDescription - similar to operationDescription in case you want more
                       granularity; optional.
    stage (int, int) - (5, 10) means 5 out of 10. (1, 1) tells the UI not to
                       display stage widgets.
    percentage - 0-100; -1 means unknown.
    lastError - (code, message), the same errors that can be returned for
                regular calls.
    extraData - json-object
For example, createVM will return once the object is created in VDSM.
getVmInfo() would return, among other things, the operation info.
While the VM is preparing for launch it would be:
{"Creating", "configuring", (2, 4), 40, (0, ""),
{state="preparing for launch"}}
In the case of VM paused on EIO:
{"Paused", "Paused", (1, 1), -1, (123, "Error writing to disks"),
{state="paused"}}
Migration is a tricky one: it will be reported as a task while it's in progress,
but all the information is available in the image operationInfo.
In the case of Migration:
{"Migration", "Configuring", (1, 3), -1, (0, ""), {status="Migrating"}}
For StorageConnection this is somewhat already the case, but in a simplified
version.
If you want to ask about any other operation I'd be more than happy to write my
suggestion for it.
Subsystems have complete freedom in how they set up their API.
For Storage you have Fixes() to start/stop operations.
Gluster is pretty autonomous once operations have been started.
Since operations return as soon as they are registered (persisted) or fail to
register, synchronous programming becomes a bit clunky.
vdsm.pauseVm(vmId) doesn't return when the VM is paused but when VDSM has
committed to trying to pause it. This means you will have to poll in order to
see if the operation finished. For gluster, as an example, this is the only
way we can check that the operation finished.
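As a sketch of that polling, assuming a gluster operation and making up the
shape of gluster.getOperationStatuses()'s return value:

import time

def wait_for_gluster_operation(host, operation_id, interval=10):
    # Poll VDSM until gluster reports the operation as no longer running.
    while True:
        statuses = host.gluster.getOperationStatuses()
        status = statuses[operation_id]
        if status["state"] != "running":
            return status
        time.sleep(interval)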
For operations we have a bit more control over, VDSM will fire events using
json-rpc notifications sent to the clients. They will be of the form:
{"method": "alert", "params": {
    "alertName": <subsystem>(.<objectType>)?.<object>.(<subobject>., ...),
    "operationInfo": OperationInfo}
}
The user can register to receive events using a glob or a regexp.
Registering to vdsm.VM.* will fire every time any VM changes stage.
This means that whenever a task finishes, fails or makes significant progress,
and VDSM is there to track it, an event will be sent to the client.
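For illustration only, a concrete notification for the paused-on-EIO example
above could look like the following; the field names simply spell out the
OperationInfo fields, and the exact wire format is an assumption:

{"jsonrpc": "2.0",
 "method": "alert",
 "params": {
     "alertName": "vdsm.VM.best_vm",
     "operationInfo": {
         "operationDescription": "Paused",
         "stageDescription": "Paused",
         "stage": [1, 1],
         "percentage": -1,
         "lastError": [123, "Error writing to disks"],
         "extraData": {"state": "paused"}}}}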
This means that the general flow is:

# Register the operation
vmID = best_vm
host.VM.pauseVM(vmID)
while True:
    opInfo = None
    try:
        event = host.waitForEvent("vdsm.VM.best_vm", timeout=10)
        opInfo = event.opInfo
    except VdsmDisconnectionError:
        host.waitForReconnect()
        # Double check that we didn't miss the event while disconnected
        vmInfo = host.vm.getVmInfo(vmID)
        opInfo = vmInfo.operationInfo
    except Timeout:
        # This is a long operation; poll to see that we didn't miss any event
        # but, more commonly, update the percentage in the UI to show progress.
        vmInfo = host.vm.getVmInfo(vmID)
        opInfo = vmInfo.operationInfo
    if opInfo.stage.number != opInfo.stage.total:
        # Operation in progress
        updateUI(opInfo)
    else:
        # Operation completed.
        # Check that the state is what we expected it to be.
        if opInfo.extraData.state == "paused":
            return SUCCESS
        else:
            return opInfo.lastError
vdsm.waitForEvent(filter, timeout) is a client-side libvdsm helper operation.
Clients that access the raw API need to create their own client-side code to
filter out events and manage their distribution. I'm open to also defining
server-side filters, but I'm not sure whether it's worth it or whether a
boolean (all events or none) is sufficient.
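For raw-API clients, that client-side filtering can be quite small; the sketch
below is only an illustration (EventDispatcher and its methods are made-up
names, not libvdsm or VDSM code):

import fnmatch
import re

class EventDispatcher(object):
    """Routes incoming json-rpc 'alert' notifications to registered callbacks."""

    def __init__(self):
        self._subscriptions = []  # list of (match_function, callback) pairs

    def register_glob(self, pattern, callback):
        self._subscriptions.append(
            (lambda name: fnmatch.fnmatch(name, pattern), callback))

    def register_regexp(self, pattern, callback):
        matcher = re.compile(pattern)
        self._subscriptions.append(
            (lambda name: matcher.match(name) is not None, callback))

    def dispatch(self, notification):
        # Called for every incoming notification; fans it out to all
        # subscriptions whose pattern matches the alertName.
        name = notification["params"]["alertName"]
        for matches, callback in self._subscriptions:
            if matches(name):
                callback(notification["params"]["operationInfo"])

A UI, for example, could call register_glob("vdsm.VM.*", updateUI) once and
then feed every notification it receives into dispatch().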
The pause-VM flow above is a very simplified example, but the general flow is clear.
Even if the connection is lost for 1 second or 4 days, the code
still works. Furthermore, the user can wait for multiple operations
in the same thread using:
host.waitForEvent("vdsm.VM.(best_vm_ever|not_so_good_vm)")
This means that the client can wait for 100 VMs, or all VMs (using wildcards),
with a mechanism similar to poll() and minimal overhead.
The fact that operations are registered means that even if the connection is
lost due to VDSM crashing or the network failing, the manager doesn't need to
care once the original command has returned, as it knows the operation is
registered. This doesn't mean that every operation must retry forever. How
persistent each method is can and should vary between the different operations.
It also means that a manager that didn't initiate an operation tracks it in
the same way as one that did. This makes clustered managers a lot easier
to implement: if one goes down, a second one can take over more or less
immediately with minimal extra code.
Managing async tasks
by agl@us.ibm.com
On today's vdsm call we had a lively discussion around how asynchronous
operations should be handled in the future. In an effort to include more people
in the discussion and to better capture the resulting conversation I would like
to continue that discussion here on the mailing list.
A lot of ideas were thrown around about how 'tasks' should be handled in the
future. There are a lot of ways that it can be done. To determine how we
should implement it, it's probably best if we start with a set of requirements.
If we can first agree on these, it should be easy to find a solution that meets
them. I'll take a stab at identifying a first set of POSSIBLE requirements:
- Standardized method for determining the result of an operation
This is a big one for me because it directly affects the consumability of the
API. If each verb has different semantics for discovering whether it has
completed successfully, then the API will be nearly impossible to use easily.
Sorry. That's my list :) Hopefully others will be willing to add other
requirements for consideration.
From my understanding, task recovery (stop, abort, rollback, etc.) will not be
generally supported and should not be a requirement.
--
Adam Litke <agl(a)us.ibm.com>
IBM Linux Technology Center
Meeting minutes
by Dan Kenigsberg
Speaking attendees (no particular order): Dustin, Saggi, Adam, Ayal,
Toni, Federico, Danken.
Issues raised:
- Danken thanks Adam for pulling active developers from Beijing to
#vdsm(a)irc.freenode.net . It is fun to chat on irc, it's quick, and may
be fruitful. So please autoconnect to #vdsm when you turn your desktop
on!
- ovirt-3.2 release: nothing urgent, apart from a NetworkManager
integration issue (see below). If there is a show stopper, please
rebase to the ovirt-3.2 branch, and ping Federico.
- When adding a VM network on top of a bond device, we take the relevant
devices down, we write their new ifcfg-* files (with
NM_CONTROLLED=no), and take them up again.
At this point, NM notices that the devices are no longer under its
management, and takes them down asynchronously. See
https://bugzilla.redhat.com/879180 .
This race may force us to turn NM off on installation, but we'd rather
have a finer-grained workaround. Pavel, maybe you can help us here.
- Task framework: we've spent a lot of time discussing with Saggi what a
task framework should not be. According to Saggi, Engine can never
expect to know that a task has finished, since it may lose the
connection to the host running it. Thus, in the worst case, Engine has
to be able to poll for whatever entity was created/destroyed by that
task. Saggi suggests making this worst case the only case.
He prefers to keep the current situation, where we have different verbs
to create and poll for vm creation, start and poll migration
progression, etc., and to extend it to gluster tasks and storage image
creation/deletion/copying tasks.
To me it seems that an end user would benefit from having a unified view
of what is currently going on in vdsm, how it progresses, and whether
it can be stopped. However, this can be wrapped up nicely within the
client with no abstraction in Vdsm.
I may well have lost part of the discussion, so please correct me on the
list if I misrepresented an opinion.
Regards,
Dan.
Host bios information
by ybronhei@redhat.com
Today in the API we display general information about the host that VDSM
exports via the getCapabilities API.
We decided to add BIOS information to the information that is
displayed in the UI under the host's General sub-tab.
To summarize the feature: we'll rename the General tab to Software
Information and add another tab for Hardware Information, which will
include all the BIOS data that we decide to gather from the host and
display.
See the feature page for more details:
http://www.ovirt.org/Features/Design/HostBiosInfo
All the parameters that can be displayed are mentioned in the wiki.
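As a purely hypothetical sketch of how a few such strings could be gathered on
the host via the dmidecode CLI (the actual field list and naming are the ones
on the wiki page, not these):

import subprocess

_DMIDECODE_KEYS = ("bios-vendor", "bios-version", "system-manufacturer",
                   "system-product-name", "system-serial-number")

def getBiosInfo():
    # Read a handful of DMI strings using "dmidecode -s <keyword>".
    info = {}
    for key in _DMIDECODE_KEYS:
        out = subprocess.check_output(["dmidecode", "-s", key])
        info[key] = out.decode("utf-8", "replace").strip()
    return info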
I would greatly appreciate your comments and questions.
Thanks.
--
Yaniv Bronhaim.
RedHat, Israel
09-7692289
054-7744187
VDSM tasks, the future
by smizrahi@redhat.com
Because I started hinting at how VDSM tasks are going to look going forward, I thought it would be better to just write everything in an email so we can talk about it in context.
This is not set in stone, and I'm still debating things myself, but it's very close to being done.
- Everything is asynchronous.
The nature of message-based communication is that you can't have synchronous operations.
This is not really debatable because it's just how TCP/AMQP/<messaging> works.
- Task IDs will be decided by the caller.
This is how json-rpc works, and it also makes sense because now the engine can track the task without needing a stage where we give it the task ID back (see the json-rpc sketch after this list).
IDs are reusable as long as no one else is using them at the time, so they can be used for synchronizing operations between clients (making sure a command is
only executed once on a specific host without locking).
- Tasks are transient
If VDSM restarts it forgets all the task information.
There are 2 ways to have persistent tasks:
1. The task creates an object that you can continue to work on in VDSM.
The new storage does that by the fact that copyImage() returns once the target volume has been created but before the data has been fully copied.
From that moment on, the state of the copy can be queried from any host using getImageStatus(), and the specific copy operation can be queried with getTaskStatus() on the host performing it.
After VDSM crashes, depending on policy, either VDSM will create a new task to continue the copy or someone else will send a command to continue the operation, and that will be a new task.
2. VDSM tasks just start other operations that are trackable outside the task interface. For example Gluster:
gluster.startVolumeRebalance() will return once it has been registered with Gluster.
gluster.getOperationStatuses() will return the state of the operation from any host.
Each call is a task in itself.
- No task tags.
They are silly, and the caller can mangle whatever they want into the task ID if they really want to tag tasks.
- No explicit recovery stage.
VDSM will be crash-only; there should be efforts to make everything crash-safe.
Where that is problematic, as in the case of networking, VDSM will recover on start without having a task for it.
- No clean Task:
Tasks can be started by any number of hosts, which means that there is no way to own all tasks.
There could be cases where VDSM starts tasks on its own, and thus they have no owner at all.
The caller needs to continually track the state of VDSM. We will have broadcast events to mitigate polling.
- No revert
Impossible to implement safely.
- No SPM/HSM tasks
SPM/SDM is no longer necessary for all domain types (only for some).
What used to be SPM tasks, or tasks that persist and can be restarted on other hosts, is covered in the previous bullet points.
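To make the caller-chosen-ID point above concrete, a hypothetical json-rpc
request could look like this (the id scheme and parameter names are only
illustrative):

{"jsonrpc": "2.0",
 "id": "engine1:copyImage:7f3a",  # chosen by the caller, not by VDSM
 "method": "copyImage",
 "params": {"srcDomain": "sd1", "dstDomain": "sd2", "imageId": "img-42"}}
# Because the id is caller-chosen, resending the same request after a
# disconnect cannot start the operation twice on the same host.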
Request for consideration during the API revamp
by Vinzenz Feenstra
Hi,
When there is an attempt to enhance/change the current API, I would ask
you to also consider the vdsClient use case.
I haven't read anything regarding that so far, and therefore I just want
you to think about it as well.
My expectation is that vdsClient will continue to use the RPC
interfaces; however, since it is part of the VDSM project, I think it
would be a good idea if there were a way for both vdsmd and vdsClient to
share the constants used for the API.
That in turn should also simplify the maintenance of vdsClient.
Currently the constants used by both are defined on both sides,
and I am pretty sure that this could be improved.
See this as just a thought on the whole redesign discussion, but I would like
to see this kind of use case covered. :-)
--
Regards,
Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
blame and shame
by asegurap@redhat.com
Hi list!
Since I've lately been doing, and plan to continue doing, patches to improve
pep8 compliance for the whole vdsm codebase, and a lot of that is
E126, E127 and E128, which deal with whitespace, I have added this to my
~/.gitconfig:
[alias]
    bl = blame -w
which ignores whitespace when blaming lines. This way, my name
will not be shown next to code I don't know about ;-) Of course, it would
be great if git blame were extended with pydiff so all the pep8
changes would be ignored for blaming purposes... But I'll leave that to
someone else ;-)
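For example (the file path is just illustrative):
    git bl lib/vdsm/netinfo.py
annotates the file like git blame but ignores whitespace-only changes when
assigning lines to authors.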
Best,
Toni
Fwd: Bonding, bridges and ifcfg
by asegurap@redhat.com
Hello everybody,
We found some unexpected behavior with bonds and we'd like to discuss it.
Please, read the forwarded messages.
Best,
Toni
----- Forwarded Message -----
> From: "Dan Kenigsberg" <danken(a)redhat.com>
> To: "Antoni Segura Puimedon" <asegurap(a)redhat.com>
> Cc: "Livnat Peer" <lpeer(a)redhat.com>, "Igor Lvovsky" <ilvovsky(a)redhat.com>
> Sent: Monday, December 10, 2012 1:03:48 PM
> Subject: Re: Bonding, ifcfg and luck
>
> On Mon, Dec 10, 2012 at 06:47:58AM -0500, Antoni Segura Puimedon
> wrote:
> > Hi all,
> >
> > I discussed this briefly with Livnat over the phone and mentioned
> > it to Dan.
> > The issue that we have is that, if I understand correctly our
> > current
> > configNetwork, it could very well be that it works by means of good
> > design with
> > a side-dish of luck.
> >
> > I'll explain myself:
> > By design, as documented in
> > http://www.kernel.org/doc/Documentation/networking/bonding.txt:
> > "All slaves of bond0 have the same MAC address (HWaddr) as bond0
> > for all modes
> > except TLB and ALB that require a unique MAC address for each
> > slave."
> >
> > Thus, all operations on the slave interfaces after they are added
> > to the bond
> > (except on TLB and ALB modes) that rely on ifcfg will fail with a
> > message like:
> > "Device eth3 has different MAC address than expected, ignoring.",
> > and no
> > ifup/ifdown will be performed.
> >
> > Currently, we were not noticing this, because we were ignoring
> > completely
> > errors in ifdown and ifup, but http://gerrit.ovirt.org/#/c/8415/
> > shed light on
> > the matter. As you can see in the following example (bonding mode
> > 4) the
> > behavior is just as documented:
> >
> > [root@rhel64 ~]# cat /sys/class/net/eth*/address
> > 52:54:00:a2:b4:50
> > 52:54:00:3f:9b:28
> > 52:54:00:51:50:49
> > 52:54:00:ac:32:1b <-----------------
> > [root@rhel64 ~]# echo "+eth2" >
> > /sys/class/net/bond0/bonding/slaves
> > [root@rhel64 ~]# echo "+eth3" >
> > /sys/class/net/bond0/bonding/slaves
> > [root@rhel64 ~]# cat /sys/class/net/eth*/address
> > 52:54:00:a2:b4:50
> > 52:54:00:3f:9b:28
> > 52:54:00:51:50:49
> > 52:54:00:51:50:49 <-----------------
> > [root@rhel64 ~]# echo "-eth3" >
> > /sys/class/net/bond0/bonding/slaves
> > [root@rhel64 ~]# cat /sys/class/net/eth*/address
> > 52:54:00:a2:b4:50
> > 52:54:00:3f:9b:28
> > 52:54:00:51:50:49
> > 52:54:00:ac:32:1b <-----------------
> >
> > Obviously, this means that, for example, when we add a bridge on
> > top of a bond,
> > the ifdown, ifup of the bond slaves will be completely fruitless
> > (although
> > luckily that doesn't prevent them from working).
>
>
> Sorry, this is not obvious to me.
> When we change something in a nic, we first take it down (which breaks it
> away from the bond), change it, and then take it up again (and back to
> the bond).
>
> I did not understand which flow of configuration leads us to the
> "unexpected mac" error. I hope that we can circumvent it.
>
>
> >
> > To solve this issue on the ifcfg based operation we could either:
> > - Continue ignoring these issues and either not do ifup ifdown for
> > bonding
> > slaves or catch the specific error and ignore it.
>
> That's reasonable, for a hack.
>
> > - Modify the ifcfg files of the slaves after they are enslaved to
> > reflect the
> > MAC addr of /sys/class/net/bond0/address. Modify the ifcfg files
> > after the
> > bond is destroyed to reflect their own addresses as in
> > /sys/class/net/ethx/address
>
> I do not understand this solution at all... Fixing initscripts to expect
> the permanent mac address instead of the bond's one makes more sense to
> me. ( /proc/net/bonding/bond0 has "Permanent HW addr: " )
>
> >
> > Livnat made me note that this behavior can be a problem to the anti
> > mac-spoofing rules that we add to iptables, as they rely on the
> > identity device
> > -macaddr to work and, obviously, in most bonding modes that is
> > broken unless
> > the device's macaddr is the one chosen for the bond.
>
> Right. I suppose we can open a bug about it: in-guest bond does not work
> with mac-no-spoofing. I have a vague memory of discussing this with
> lpeer a few months back, but it somehow slipped my mind.
>
>
> > Well, I think that is all for this issue. We should discuss which
> > is the best
> > approach for this before we move on with patches that account for
> > ifup ifdown
> > return information.
> >
> > Best,
> >
> > Toni
> >
>
v4.10.3 tagged
by Dan Kenigsberg
I've just tagged v4.10.3 of vdsm. It has several interesting changes
since the previous release (some are detailed below), and I would like
to suggest it as a beta candidate for ovirt-3.2.
I know this comes quite behind schedule; my poor man's comfort is that
Engine is even further behind than vdsm...
v4.10.3 should work well on Fedora 18 (with some glitches regarding
NetworkManager and firewalld). Please report bugs and problems to
Bugzilla and to this list. If you think your bug should block the
release of ovirt-3.2, mark it as such (Bug 881006 - (ovirt-3.2-release)
Tracker: oVirt 3.2 release)
- SSL session cache (the return of m2crypto)
- pep8 fixes
- blockSD bug fixes
- requiring mom
- storage and VM functional tests
- gluster cli xml output
- retire ifconfig (and net-tools) and other fixes to run on Fedora 18
- Remove REST bindings
- vmfex hook
- use default libvirt event handler impl
- network re-wiring of a running VM
- optional virt-console.
Regards,
Dan.