why vm.create preparepath then teardown
by Jenna Johnson
Folks,
I came across a problem when starting a VM, and I think it may be related to
the following.
In the API.VM.create() function, we call prepareVolumePath and then
teardownVolumePath. Any idea why we handle it like this?
fname = self._cif.prepareVolumePath(paramFilespec)
try:
    ....
finally:
    self._cif.teardownVolumePath(paramFilespec)
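For context, here is roughly how I read the pattern, as a sketch with
illustrative names (this is not the actual vdsm code; createVm and the
'path' key are placeholders):

def create(cif, vmParams, paramFilespec):
    # Prepare (activate/resolve) the volume so its local path is known and
    # usable while the VM parameters are being assembled.
    fname = cif.prepareVolumePath(paramFilespec)
    try:
        vmParams['path'] = fname        # illustrative use of the prepared path
        return cif.createVm(vmParams)   # hypothetical creation call
    finally:
        # Undo the preparation done in this function on both success and
        # failure, so the call never leaves the volume prepared on its behalf.
        cif.teardownVolumePath(paramFilespec)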
12 years, 1 month
libvirtError: Cannot write data: Broken pipe when vdsm tries to call libvirt
by shuming@linux.vnet.ibm.com
Hi,
Recently, I found that my host in engine always stayed in an "unassigned
state" after the host node was installed. Looking into vdsm.log, it seemed
that vdsm failed to call libvirt, with the error "libvirtError: Cannot write
data: Broken pipe". When I started virsh on the host node at that time, it
printed the warning "WARNING: no socket to connect to" and dumped core on
"virsh net-list". It looks like no usable socket was created for virsh to
connect to libvirtd. Any comments on this problem? The following are my
steps on the node:
[root@ovirt-node1 ~]# rpm -qa |grep vdsm
vdsm-cli-4.9.6-0.183.git107644d.fc16.shuming1336622293.noarch
vdsm-python-4.9.6-0.183.git107644d.fc16.shuming1336622293.noarch
vdsm-hook-vhostmd-4.9.6-0.183.git107644d.fc16.shuming1336622293.noarch
vdsm-4.9.6-0.183.git107644d.fc16.shuming1336622293.x86_64
vdsm-reg-4.9.6-0.183.git107644d.fc16.shuming1336622293.noarch
vdsm-debug-plugin-4.9.6-0.183.git107644d.fc16.shuming1336622293.noarch
vdsm-hook-faqemu-4.9.6-0.183.git107644d.fc16.shuming1336622293.noarch
vdsm-bootstrap-4.9.6-0.183.git107644d.fc16.shuming1336622293.noarch
[root@ovirt-node1 ~]#
[root@ovirt-node1 ~]# rpm -qa |grep libvirt
libvirt-daemon-0.9.11-1.fc17.x86_64
libvirt-daemon-config-nwfilter-0.9.11-1.fc17.x86_64
libvirt-client-0.9.11-1.fc17.x86_64
libvirt-daemon-config-network-0.9.11-1.fc17.x86_64
libvirt-python-0.9.11-1.fc17.x86_64
[root@ovirt-node1 ~]# virsh net-list
WARNING: no socket to connect to
Segmentation fault
[root@ovirt-node1 ~]#
[root@ovirt-node1 ~]# ps -ef |grep vdsm
root 1299 1 0 23:10 ? 00:00:00 /usr/sbin/libvirtd --listen # by vdsm
vdsm 1917 1 0 23:10 ? 00:00:00 /bin/bash -e /usr/share/vdsm/respawn --minlifetime 10 --daemon --masterpid /var/run/vdsm/respawn.pid /usr/share/vdsm/vdsm
vdsm 1919 1917 0 23:10 ? 00:00:06 /usr/bin/python /usr/share/vdsm/vdsm
root 1940 1919 0 23:10 ? 00:00:00 /usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.py 709dfdde-a668-4227-a206-3d8686b4cfa1 1919
root 1941 1940 0 23:10 ? 00:00:00 /usr/bin/python /usr/share/vdsm/supervdsmServer.py 709dfdde-a668-4227-a206-3d8686b4cfa1 1919
root 3711 3055 0 23:22 pts/0 00:00:00 vim /var/log/vdsm/vdsm.log
root 4358 4103 0 23:30 pts/1 00:00:00 grep --color=auto vdsm
[root@ovirt-node1 ~]# ps -ef |grep libvirtd
root 4421 1 2 23:31 ? 00:00:00 /usr/sbin/libvirtd --listen # by vdsm
root 4750 4103 0 23:31 pts/1 00:00:00 grep --color=auto libvirtd
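For reference, the connection vdsm makes can be probed directly from Python
with the libvirt bindings listed above; a minimal check (illustrative only)
looks roughly like this:

import libvirt  # libvirt-python, as shown in the package list above

try:
    # vdsm talks to the local libvirtd; this mimics that connection read-only.
    conn = libvirt.openReadOnly('qemu:///system')
    print(conn.getLibVersion())
except libvirt.libvirtError as e:
    # With no usable libvirtd socket this raises an error such as
    # "Cannot write data: Broken pipe" or a failure to connect to the socket.
    print(e)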
--
Shu Ming <shuming(a)linux.vnet.ibm.com>
IBM China Systems and Technology Laboratory
12 years, 1 month
[PATCH libvirt] Reject any non-option command line arguments
by Daniel P. Berrange
From: "Daniel P. Berrange" <berrange(a)redhat.com>
Due to a bug in editing /etc/sysconfig/libvirtd, VDSM was causing
libvirt processes to run with the following command line args:
/usr/sbin/libvirtd --listen '#' 'by vdsm'
While libvirtd correctly rejects any invalid option flags, it was not
rejecting non-option command line arguments.
* daemon/libvirtd.c: Reject non-option argv
---
daemon/libvirtd.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/daemon/libvirtd.c b/daemon/libvirtd.c
index 2696c54..0b5ae35 100644
--- a/daemon/libvirtd.c
+++ b/daemon/libvirtd.c
@@ -999,6 +999,12 @@ int main(int argc, char **argv) {
         }
     }
 
+    if (optind != argc) {
+        fprintf(stderr, "%s: unexpected, non-option, command line arguments\n",
+                argv[0]);
+        exit(EXIT_FAILURE);
+    }
+
     if (!(config = daemonConfigNew(privileged))) {
         VIR_ERROR(_("Can't create initial configuration"));
         exit(EXIT_FAILURE);
--
1.7.10.1
12 years, 1 month
RESTful VM creation
by agl@us.ibm.com
I would like to discuss a problem that is going to affect VM creation in the new
REST API. This topic has come up previously and I want to revive that
discussion because it is blocking a proper implementation of VM.create().
Consider a RESTful VM creation sequence:
POST /api/vms/define - Define a new VM in the system
POST /api/vms/<id>/disks/add - Add a new disk to the VM
POST /api/vms/<id>/cdroms/add - Add a cdrom
POST /api/vms/<id>/nics/add - Add a NIC
PUT /api/vms/<id> - Change boot sequence
POST /api/vms/<id>/start - Boot the VM
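For illustration, a rough client-side sketch of that sequence (the base URL,
port, and payload fields below are hypothetical; only the verbs and paths come
from the list above):

import requests  # assuming a plain HTTP client just for the sketch

BASE = 'http://vdsm-host:8080/api'  # hypothetical endpoint

vm = requests.post(BASE + '/vms/define', json={'name': 'demo', 'memSize': 1024}).json()
vm_id = vm['id']

requests.post(BASE + '/vms/%s/disks/add' % vm_id, json={'size': '10G'})
requests.post(BASE + '/vms/%s/cdroms/add' % vm_id, json={'path': '/path/to/install.iso'})
requests.post(BASE + '/vms/%s/nics/add' % vm_id, json={'network': 'ovirtmgmt'})
requests.put(BASE + '/vms/%s' % vm_id, json={'boot': ['cdrom', 'disk']})
requests.post(BASE + '/vms/%s/start' % vm_id)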
Unfortunately this is not possible today with vdsm because a VM must be
fully-specified at the time of creation and it will be started immediately.
As I see it there are two ways forward:
1.) Deviate from a REST model and require a VM resource definition to include
all sub-collections inline.
-- or --
2.) Support storage of VM definitions so that powered-off VMs can be manipulated
by the API.
My preference would be #2 because: it makes the API more closely follow RESTful
principles, it maintains parity with the cluster-level VM manipulation API, and
it makes the API easier to use in standalone mode.
Here is my idea on how this could be accomplished without committing to stateful
host storage. In the past we have discussed adding an API for storing arbitrary
metadata blobs on the master storage domain. If this API were available we
could use it to create a transient VM "construction site". Let's walk through
the above RESTful sequence again and see how my idea would work in practice:
* POST /api/vms/define - Define a new VM in the system
A new VM definition would be written to the master storage domain metadata area.
* GET /api/vms/<new-uuid>
The normal 'list' API is consulted as usual. The VM will not be found there
because it is not yet created. Next, the metadata area is consulted. The VM is
found there and will be returned. The VM state will be 'New'.
* POST /api/vms/<id>/disks/add - Add a new disk to the VM
For 'New' VMs, this will update the VM metadata blob with the new disk
information. Otherwise, this will call the hotplugDisk API.
* POST /api/vms/<id>/cdroms/add - Add a cdrom
For 'New' VMs, this will update the VM metadata blob with the new cdrom
information. If we want to support hotplugged CDROMs we can call that API
later.
* POST /api/vms/<id>/nics/add - Add a NIC
For 'New' VMs, this will update the VM metadata blob with the new nic
information. Otherwise it triggers the hotplugNic API.
* PUT /api/vms/<id> - Change boot sequence
Only valid for 'New' VMs. Updates the metadata blob according to the parameters
specified.
* POST /api/vms/<id>/start - Boot the VM
Load the metadata from the master storage domain metadata area. Call the
VM.create() API. Remove the metadata from the master storage domain.
VDSM will automatically purge old metadata from the master storage domain. This
could be done whenever a domain is attached as master or deactivated, and also
periodically.
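To make the dispatch concrete, here is a rough sketch of how one of these
verbs could branch on the 'New' state (the in-memory metadata store and the
callbacks are stand-ins, not existing vdsm code):

_metadata = {}  # stand-in for the master-domain metadata area (illustration only)

def add_disk(vm_id, disk_params, running_vms, hotplug_disk):
    # running_vms stands in for the normal 'list' API; hotplug_disk stands in
    # for the existing hotplugDisk call.
    if vm_id in running_vms:
        return hotplug_disk(vm_id, disk_params)            # running VM: hotplug path
    blob = _metadata.setdefault(vm_id, {'state': 'New'})   # 'New' VM: edit its blob
    blob.setdefault('drives', []).append(disk_params)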
How does this idea sound? I am certain that it can be improved by those of you
with more experience and different viewpoints. Thoughts and comments?
--
Adam Litke <agl(a)us.ibm.com>
IBM Linux Technology Center
12 years, 1 month
Re-code /etc/init.d/functions script with Python and move it to vdsm-tool
by wenyi@linux.vnet.ibm.com
Hi All,
I am working on moving the vdsm.init script into vdsm-tool. However, vdsm.init
uses some functions from /etc/init.d/functions, so I plan to re-code
/etc/init.d/functions (or the parts we need) in Python and move that into
vdsm-tool as well. Is that okay?
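For example, a minimal Python stand-in for the pidfile handling done by
helpers such as status()/killproc() could look like this (the function name
and error handling are just an illustration, not a proposed vdsm-tool
interface):

import os

def pid_of_proc(pidfile):
    # Return the PID recorded in pidfile if that process is alive, else None.
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return None
    try:
        os.kill(pid, 0)  # signal 0: existence check only, nothing is sent
    except OSError:
        return None
    return pid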
BR.
Wenyi
12 years, 1 month
RFD: NEW API getAllTasks
by agl@us.ibm.com
The current APIs for retrieving all task information do not actually return all
task information. I would like to introduce a new API that corrects this and
other issues with the current API while preserving backwards compatibility with
ovirt-engine for as long as is necessary.
The current APIs:
getAllTasksInfo(spUUID=None, options = None):
- Returns a dictionary that maps a task UUID to a task verb.
- Despite having 'all' in the name, this API only returns tasks that have an
'spm' tag.
- This call returns only one piece of information for each task.
- The spUUID parameter is deprecated and ignored.
getAllTasksStatuses(spUUID=None, options = None):
- Returns a dictionary of task status information.
- Despite having 'all' in the name, this API only returns tasks that have an
'spm' tag.
- The spUUID parameter is deprecated and ignored.
I propose the following new API:
getAllTasks(tag=None, options=None):
- Returns a dictionary of task information. The info from both of the above
functions would be merged into a single result set.
- If tag is None, all tasks are returned. Otherwise, only tasks matching the
tag are returned.
- The spUUID parameter is dropped. The options parameter is reserved for future
extension and is currently unused.
This new API includes all functionality that is available in the old calls. In
the future, ovirt-engine could switch to this API and preserve the current
semantics by passing tag='spm' to getAllTasks. Meanwhile, API users that really
want all tasks (gluster and the REST API) can get what they need.
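For concreteness, a minimal sketch of the merged call (the task registry and
its attributes are placeholders, not the real vdsm task-manager internals):

_tasks = []  # placeholder for vdsm's internal task registry (illustration only)

def getAllTasks(tag=None, options=None):
    # Merge the info and status views into one result set, optionally by tag.
    result = {}
    for task in _tasks:
        if tag is not None and tag not in task.tags:  # tag=None means no filtering
            continue
        entry = {'verb': task.verb}          # what getAllTasksInfo returns today
        entry.update(task.getStatus())       # what getAllTasksStatuses returns today
        result[task.id] = entry
    return result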
Thoughts on this idea?
--
Adam Litke <agl(a)us.ibm.com>
IBM Linux Technology Center
12 years, 1 month
Re: [vdsm] [node-devel] Still not able to migrate to node
by Michel van Horssen
Hi Mike,
> This strikes me as more of a vdsm problem than ovirt-node directly.
> Once node is registered to engine, we hand over all control of
> libvirt and networking (among other things) to the engine to manage.
I just tried migrating between two nodes, but that also fails.
Connecting between the two nodes from virsh gives the same "connection refused" I get when trying to connect to the node from vdsm on the engine server, so they can't reach each other over qemu+tls://ipaddress/system. Stranger still, the qemu+tls connection to the vdsm host that worked before now also comes back with "connection refused".
> If migration is failing due to some setting on the node, then vdsm
> should probably be changing that setting when it comes online.
I'll attach a grep from vdsm.log taken while the migration was started from the nodes. Maybe someone can take a look and hopefully spot something.
The time difference between the servers is about 40 seconds.
> Mike
Michel
12 years, 1 month
Need I set "Verified" when submitting a patch?
by wudxw@linux.vnet.ibm.com
Hi Guys,
I think people always test their patches before submitting, so
explicitly setting "Verified" is not necessary. More importantly,
"Verified" set by the committer is not convincing enough for code
quality assurance. What's your opinion?
Thanks!
Mark.
12 years, 1 month
oVirt Workshop at LinuxCon Japan 2012
by Leslie Hawthorn
Hello everyone,
As part of our efforts to raise awareness of and educate more developers
about the oVirt project, we will be holding an oVirt workshop at
LinuxCon Japan, taking place on June 8, 2012. You can find full details
of the workshop agenda on the LinuxCon Japan site. [0]
Registration for the workshop is now open and is free of charge for the
first 50 participants. We will also look at adding additional
participant slots to the workshop based on demand.
Attendees who register for LinuxCon Japan via the workshop registration
link [1] will also be eligible for a discount on their LinuxCon Japan
registration.
Please spread the word to folks you think would find the workshop
useful. If they have already registered for LinuxCon Japan, they can
simply edit their existing registration to include the workshop.
[0] -
https://events.linuxfoundation.org/events/linuxcon-japan/ovirt-gluster-wo...
[1] - http://www.regonline.com/Register/Checkin.aspx?EventID=1099949
Cheers,
LH
--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat
identi.ca/lh
twitter.com/lhawthorn
12 years, 1 month
Re: [vdsm] Storage Device Management in VDSM and oVirt
by Dan Kenigsberg
On Wed, Apr 18, 2012 at 09:06:36AM -0400, Ayal Baron wrote:
>
>
> ----- Original Message -----
> > On Tue, Apr 17, 2012 at 03:38:25PM +0530, Shireesh Anjal wrote:
> > > Hi all,
> > >
> > > As part of adding Gluster support in ovirt, we need to introduce
> > > some Storage Device management capabilities (on the host). Since
> > > these are quite generic and not specific to Gluster as such, we
> > > think it might be useful to add it as a core vdsm and oVirt
> > > feature.
> > > At a high level, this involves following:
> > >
> > > - A "Storage Devices" sub-tab on "Host" entity, displaying
> > > information about all the storage devices*
> > > - Listing of different types of storage devices of a host
> > > - Regular Disks and Partitions*
> > > - LVM*
> > > - Software RAID*
> > > - Various actions related to device configuration
> > > - Partition disks*
> > > - Format and mount disks / partitions*
> > > - Create, resize and delete LVM Volume Groups (VGs)
> > > - Create, resize, delete, format and mount LVM Logical Volumes
> > > (LVs)
> > > - Create, resize, delete, partition, format and mount Software
> > > RAID devices
> > > - Edit properties of the devices
> > > - UI can be modeled similar to the system-config-lvm tool
> > >
> > > The items marked with (*) in above list are urgently required for
> > > the Gluster feature, and will be developed first.
> > >
> > > Comments / inputs welcome.
> >
> > This seems like a big undertaking, and I would like to understand the
> > complete use case of this. Is it intended to create the block storage
> > devices on top of which a Gluster volume will be created?
>
> Yes, but not only.
> It could also be used to create the file system on top of which you create a local storage domain (just an example, there are many others, more listed below).
>
> >
> > I must say that we had a bad experience with exposing low level
> > commands over the Vdsm API: A Vdsm storage domain is a VG with some
> > metadata on top. We used to have two API calls for creating a storage
> > domain: one to create the VG and one to add the metadata and make it
> > an
> > SD. But it is pretty hard to handle all the error cases remotely. It
> > proved more useful to have one atomic command for the whole sequence.
> >
> > I suspect that this would be the case here, too. I'm not sure if
> > using
> > Vdsm as an ssh-replacement for transporting lvm/md/fdisk commands is
> > the
> > best approach.
>
> I agree; we should either provide added value or figure out a way to avoid simply adding a verb every time the underlying APIs add something.
>
> >
> > It may be better to have a single verb for creating Gluster volume
> > out
> > of block storage devices. Something like: "take these disks,
> > partition
> > them, build a raid, cover with a vg, carve some PVs and make each of
> > them a Gluster volume".
> >
> > Obviously, it is not simple to define a good language to describe a
> > general architecture of a Gluster voluem. But it would have to be
> > done
> > somewhere - if not in Vdsm then in Engine; and I suspect it would be
> > better done on the local host, not beyond a fragile network link.
> >
> > Please note that currently, Vdsm makes a lot of effort not to touch
> > LVM
> > metadata of existing VGs on regular "HSM" hosts. All such operations
> > are
> > done on the engine-selected "SPM" host. When implementing this, we
> > must
> > bear in mind these safeguards and think whether we want to break
> > them.
>
> I'm not sure I see how this is relevant, we allow creating a VG on any host today and that isn't going to change...
We have one painful exception; that alone is no reason to add more. Note
that currently, Engine uses the would-be-SPM host for VG creation. In the
gluster use case, any host would be expected to do this, asynchronously. It
might be required, but it's not warm and fuzzy.
>
> In general, we know that we already need to support using a LUN even if it has partitions on it (with force or something).
>
> We know that we have requirements for more control over the VG that we create e.g. support striping, control over max LV size (limited by pv extent size today) etc.
>
> We also know that users would like a way not only to use a local dir for a storage domain but also create the directory + fs?
These three examples are storage domain flavors...
>
> We know that in the Gluster use case we would like the ability to set up Samba over the gluster volume, and probably iSCSI as well.
Now I do not see the relevance. Configuring gluster and how it exposes
its volume is something other than preparing block storage for gluster
bricks.
>
> So although I believe that when we create a gluster volume or an oVirt storage domain we shouldn't need a lot of low-level commands, it appears to me that not allowing more control when needed is not going to work, and that there are enough use cases that involve neither a gluster volume nor a storage domain to warrant making this generic.
I'm not against more control; I'm against an uncontrollable API such as
runThisLvmCommandAsRoot().
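To make the contrast concrete, a declarative request for the gluster-brick
case might look roughly like the structure below, with vdsm owning the
individual partitioning/RAID/VG/LV steps behind one verb instead of exposing
each low-level command remotely (every field name here is illustrative, not a
proposed API):

brick_storage_request = {
    'devices': ['/dev/sdb', '/dev/sdc'],   # raw disks handed over to vdsm
    'raid': {'level': 1},                   # optional md RAID on top of them
    'vg': 'gluster_vg',                     # VG covering the resulting device
    'bricks': [                             # LVs to carve, format and mount
        {'name': 'brick1', 'size': '500G', 'mountPoint': '/bricks/brick1'},
    ],
}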
12 years, 1 month