Agenda for today's call
by abaron@redhat.com
Hi all,
I would like to discuss the following on today's call:
1. Gerrit vs. mailing list
2. Mandatory unit tests per patch
3. PEP 8
4. ??
If you have anything else you'd like to discuss, please reply to this email.
Regards,
Ayal.
Using vdsm hook to exploit gluster backend of qemu
by deepakcs@linux.vnet.ibm.com
Hello,
Recently, patches were posted on qemu-devel to support gluster as a
block backend for qemu.
They introduce a new way of specifying the drive location to qemu:
-drive file=gluster:<volumefile>:<image name>
where volumefile is the gluster volume file name (the gluster volume is
assumed to be pre-configured on the host) and image name is the name of
the image file on the gluster mount point.
I wrote a standalone vdsm script using SHAREDFS (which maps to PosixFs),
taking cues from http://www.ovirt.org/wiki/Vdsm_Standalone
The conndict passed to connectStorageServer is as below...
[dict(id=1, connection="kvmfs01-hs22:dpkvol", vfs_type="glusterfs",
mnt_options="")]
Here note that 'dpkvol' is the name of the gluster volume
I am able to create and invoke a VM backed by an image file residing on
the gluster mount.
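For concreteness, a minimal sketch of the standalone sequence, assuming
vdsm's bundled vdscli client, a placeholder pool UUID, and domain type 6
(SHAREDFS/PosixFs in vdsm's storage constants); exact module paths vary
across vdsm versions:

    # Minimal standalone sketch (not the actual script): attach the
    # gluster volume as a PosixFs connection via the local vdsm.
    from vdsm import vdscli

    server = vdscli.connect()
    conlist = [dict(id=1, connection="kvmfs01-hs22:dpkvol",
                    vfs_type="glusterfs", mnt_options="")]
    # 6 == SHAREDFS/PosixFs domain type; the pool UUID is a placeholder
    res = server.connectStorageServer(
        6, "00000000-0000-0000-0000-000000000000", conlist)
    print(res['status'])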
But since this is the SHAREDFS way, the qemu -drive cmdline generated
via VDSM is ...
-drive file=/rhev/datacentre/mnt/.... -- which eventually softlinks to
the image file on the gluster mount point.
I was looking to write a vdsm hook to be able to change the above to ....
-drive file=gluster:<volumefile>:<image name>
which means I would need access to some of the conndict params inside
the hook, especially 'connection', to extract the volume name.
1) Looking at the current VDSM code, I don't see a way for the hook to
know anything about the storage domain setup. So the only way is to
have the user pass a custom param which provides the path to the
volumefile & image and use it in the hook. Is there a better way? Can I
use the vdsm gluster plugin support inside the hook to determine the
volfile from the volname, assuming I only take the volname as the
custom param, and determine the imagename from the existing
<source file = ..> tag (the basename is the image name)? Wouldn't it be
better to provide a way for hooks to access (read-only) storage domain
parameters, so that they can use them to implement the hook logic in a
saner way? A sketch of such a hook follows.
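To illustrate the custom-param approach, here is a minimal sketch of a
before_vm_start hook. The param name 'gluster_volname' is invented;
hooking.read_domxml()/write_domxml() are vdsm's standard hook helpers,
and custom params reach hooks as environment variables. Whether libvirt
accepts such a file= value unmodified is exactly the open question:

    # Hypothetical before_vm_start hook: rewrite each file-backed disk
    # source to qemu's gluster:<volumefile>:<image name> form.
    # 'gluster_volname' is an invented custom param.
    import os
    import hooking

    volname = os.environ.get('gluster_volname')
    if volname:
        domxml = hooking.read_domxml()
        for disk in domxml.getElementsByTagName('disk'):
            sources = disk.getElementsByTagName('source')
            if not sources:
                continue
            source = sources[0]
            path = source.getAttribute('file')
            if path:
                image = os.path.basename(path)
                source.setAttribute(
                    'file', 'gluster:%s:%s' % (volname, image))
        hooking.write_domxml(domxml)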
2) In talking to Eduardo, it seems there are discussions going on about
how prepareVolumePath and prepareImage could be exploited to fit
gluster-based (and in future other types of) images. I am not very
clear on the image and volume code of vdsm; frankly, it's very complex
and hard to understand due to the lack of comments.
I would appreciate it if someone could guide me on the best way to
achieve my goal (-drive file=gluster:<volumefile>:<image name>) here.
Any short-term solutions, even if not perfect, are also appreciated, so
that I can at least have a working setup where I just run my VDSM
standalone script and my qemu cmdline using gluster:... is generated.
Currently I am using the <qemu:commandline> tag facility of libvirt to
inject the needed qemu options, hardcoding the volname and imagename,
but I would like to do this based on the conndict passed by the user
when creating the SHAREDFS domain.
thanx,
deepak
[RFC] An alternative way to provide a supported interface -- libvdsm
by Anthony Liguori
Hi,
I've been reading through the API threads here and considering the options. To
be honest, I worry a lot about the scope of these discussions and that there's a
tremendous amount of work before we have a useful end result.
I wonder if we can solve this problem by adding another layer of abstraction...
As Adam is currently building a schema for VDSM's XML-RPC, we could use the QAPI
code generators to build a libvdsm that provided a programmatic C interface for
the XML-RPC interface.
It would take some tweaking, but this could be made a supportable C interface.
The rules for having a supportable C interface are basically:
1) Never change function signatures
2) Never remove functions
3) Always allocate structures in the library and/or pad
4) Only add to structures, never remove or reorder
5) Provide flags that default to zero to indicate that fields/features are not
present.
6) Always zero-initialize structures
Having a libvdsm would allow the transport to change over time without
affecting end-users. There are lots of good tools for documenting C APIs
and dealing with versioning of C APIs.
While we can start out with a schema-generated API, over time, we can implement
libvdsm in an open-coded fashion allowing old APIs to be reimplemented in terms
of new APIs.
From a compatibility perspective, libvdsm would be fully backwards compatible
with old versions of VDSM (so it would keep XML-RPC support forever) but may
require new versions of libvdsm to talk to new versions of VDSM. That would
allow for APIs to be deprecated within VDSM without breaking old clients.
I think this would be an incremental approach to building a supportable API
today while still giving the flexibility to make changes in the long term.
And it should be fairly easy to generate a JNI binding and also port
ovirt-engine to use an interface like this (since it already uses the XML-RPC API).
Regards,
Anthony Liguori
Agenda for tomorrow's call
by Dan Kenigsberg
Hi!
tomorrow I would like to discuss:
- the abysmal review condition of the REST API patches
- vdsm status for ovirt-3.1
I know networking requires a heavy cherry-pick from upstream. There
is probably more.
Everybody is invited to care for the vdsm bugs that block Bug 822145 -
Tracker: oVirt 3.1 release.
- Plenty of pep8 patches have been applied, but there are plenty more.
- Patches with pending verification. I see 11 of those now
http://gerrit.ovirt.org/#/q/status:open+project:vdsm+verified%253D0+coder...
Please do not send your patches out to the cold and desert them there.
Pet them, nag folks to review and verify them, and rebase (only!) when
required.
- Your issue comes here (or above, if it's more urgent).
Regards,
Dan.
[virt-node] VDSM as a general purpose virt host manager
by smizrahi@redhat.com
I would like to put on the table for discussion the growing need for a
way to more easily reuse the functionality of VDSM in order to serve
projects other than oVirt Engine.
Originally VDSM was created as a proprietary agent for the sole purpose
of serving the then-proprietary version of what is known as
ovirt-engine. Red Hat, after acquiring the technology, pressed on with
its commitment to open source ideals and released the code. But just
releasing code into the wild doesn't build a community or make a
project successful. Furthermore, when building open source software you
should aspire to build reusable components instead of monolithic
stacks.
We would like to expose a stable, documented, well supported API. This
gives us a chance to rethink the VDSM API from the ground up. There is
already work in progress to make the internal logic of VDSM separate
enough from the API layer that we can continue feature development and
bug fixing while designing the API of the future.
In order to achieve this though we need to do several things:
1. Declare API supportability guidelines
2. Decide on an API transport (e.g. REST, ZMQ, AMQP)
3. Make the API easily consumable (e.g. proper docs, example code, extending
the API, etc)
4. Implement the API itself
All of these are dependent on one another and the permutations are endless.
This is why I think we should try and work on each one separately. All
discussions will be done openly on the mailing list and until the final version
comes out nothing is set in stone.
If you think you have anything to contribute to this process, please do so
either by commenting on the discussions or by sending code/docs/whatever
patches. Once the API solidifies it will be quite difficult to change
fundamental things, so speak now or forever hold your peace. Note that this is
just an introductory email. There will be a quick follow-up email to
kick-start the discussions.
RFC: Writeup on VDSM-libstoragemgmt integration
by deepakcs@linux.vnet.ibm.com
Hello All,
I have a draft write-up on the VDSM-libstoragemgmt integration.
I wanted to run this through the mailing list(s) to help tune and
crystallize it before putting it on the oVirt wiki.
I have run it once through Ayal and Tony, so some of their comments are
incorporated.
I still have a few doubts/questions, which I have posted below as lines
ending with '?'
Comments / Suggestions are welcome & appreciated.
thanx,
deepak
[Ccing engine-devel and libstoragemgmt lists as this stuff is relevant
to them too]
--------------------------------------------------------------------------------------------------------------
1) Background:
VDSM provides a high-level API for node virtualization management. It
acts in response to requests sent by oVirt Engine, which uses VDSM to do
all node virtualization related tasks, including but not limited to
storage management.
libstoragemgmt aims to provide a vendor-agnostic API for managing
external storage arrays. It should help system administrators utilizing
open source solutions to programmatically manage their storage hardware
in a vendor-neutral way. It also aims to facilitate management
automation and ease of use, and to take advantage of storage vendor
supported features which improve storage performance and space
utilization.
Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/
libstoragemgmt (LSM) today supports C and Python plugins for talking to
external storage arrays using SMI-S as well as native interfaces (eg: a
netapp plugin).
The plan is to grow the SMI-S interface as needed over time and to add
more vendor-specific plugins for exploiting features that are not
possible via SMI-S or that have better alternatives than SMI-S.
For example, many of the copy offload features require vendor-specific
commands, which justifies the need for a vendor-specific plugin.
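For reference, a rough sketch of what driving LSM from Python can look
like; API names follow the LSM Python binding as I understand it and may
differ between versions, and 'sim://' is LSM's bundled simulator, so no
real array is needed:

    # Rough sketch against the LSM Python binding; names may vary by
    # version. Covers goals 2b (list capabilities/space) and 2c
    # (use an array feature: thin provisioning).
    import lsm

    client = lsm.Client('sim://')
    system = client.systems()[0]
    caps = client.capabilities(system)

    # list pools along with their total/free space
    for pool in client.pools():
        print("%s total=%d free=%d"
              % (pool.name, pool.total_space, pool.free_space))

    # provision a thinly provisioned volume, if the array supports it
    if caps.supported(lsm.Capabilities.VOLUME_CREATE):
        pool = client.pools()[0]
        job_id, vol = client.volume_create(
            pool, 'vdsm_test_lun', 1024 ** 3,  # 1 GiB
            lsm.Volume.PROVISION_THIN)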
2) Goals:
2a) Ability to plug external storage arrays into the oVirt/VDSM
virtualization stack, in a vendor-neutral way.
2b) Ability to list features/capabilities and other statistical
info of the array
2c) Ability to utilize the storage array offload capabilities from
oVirt/VDSM.
3) Details:
LSM will sit as a new repository engine in VDSM.
VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192
Current plan is to have LSM co-exist with VDSM on the virtualization nodes.
*Note: 'storage' as used below is generic. It can be a file/nfs-export
for NAS targets or a LUN/logical-drive for SAN targets.
VDSM can use LSM and do the following...
- Provision storage
- Consume storage
3.1) Provisioning Storage using LSM
Typically this will be done by a storage administrator.
oVirt/VDSM should provide the storage admin the
- ability to list the different storage arrays along with their
types (NAS/SAN), capabilities, and free/used space.
- ability to provision storage using any of the array capabilities
(eg: a thin provisioned LUN or a new NFS export)
- ability to manage the provisioned storage (eg: resize/delete storage)
Once the storage is provisioned by the storage admin, VDSM will have to
refresh the host(s) for them to be able to see the newly provisioned
storage.
3.1.1) Potential flows:
Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is
needed to make LUN available to list of hosts passed by mgmt
Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices)
Repeat above for all relevant hosts (depending on list passed earlier,
mostly relevant when extending an existing VG)
Mgmt -> use LUN in normal flows.
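In code-ish form the flow might look like the sketch below; it is
purely illustrative - create_lun() and the host objects are invented,
and only getDeviceList is an existing vdsm verb:

    # Purely illustrative pseudo-flow for the sequence above.
    def provision_lun(lsm_client, pool, name, size_bytes, hosts):
        # Mgmt -> vdsm -> lsm: create the LUN on the array, plus the
        # LUN mapping / zoning needed to expose it (not shown)
        lun = lsm_client.create_lun(pool, name, size_bytes)  # invented
        for host in hosts:
            # Mgmt -> vdsm: refresh each relevant host so it sees it
            host.getDeviceList()
        return lun  # Mgmt can now use the LUN in normal flows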
3.1.2) How will oVirt Engine know which LSM to use?
Normally the way this works today is that the user can choose the host
to use (the default today is the SPM); however, there are a few flows
where mgmt will know which host to use:
1. extend storage domain (add LUN to existing VG) - use the SPM and make
sure *all* hosts that need access to this SD can see the new LUN
2. attach new LUN to a VM which is pinned to a specific host - use this
host
3. attach new LUN to a VM which is not pinned - use a host from the
cluster the VM belongs to and make sure all nodes in the cluster can see
the new LUN
Flows for which there is no clear candidate (maybe we can use the SPM
host itself, which is the default?):
1. create a new disk without attaching it to any VM
2. create a LUN for a new storage domain
A sketch of this selection policy follows.
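Sketched as a selection helper (all names are hypothetical; this is not
engine code):

    # Hypothetical helper capturing the host-selection policy above.
    def pick_host_for_lsm_op(flow, vm=None, cluster=None, spm=None):
        if flow == 'extend_storage_domain':
            return spm  # then verify *all* SD hosts see the new LUN
        if flow == 'attach_lun_to_pinned_vm':
            return vm.pinned_host
        if flow == 'attach_lun_to_unpinned_vm':
            return cluster.any_host()  # whole cluster must see the LUN
        # no clear candidate (new unattached disk, LUN for a new SD):
        return spm  # default to the SPM host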
3.2) Consuming storage using LSM
Typically this will be done by a virtualization administrator.
oVirt/VDSM should allow the virtualization admin to
- Create a new storage domain using the storage on the array.
- Specify whether VDSM should use the storage offload capability
(default) or override it to use its own internal logic.
4) VDSM potential changes:
4.1) How to represent a VM disk: 1 LUN == 1 VM disk or 1 LV == 1 VM
disk?
Which brings up another question: 1 array == 1 storage domain, OR 1
LUN/nfs-export on the array == 1 storage domain?
Pros & cons of each...
1 array == 1 storage domain
- Each new vmdisk (aka volume) will be a new lun/file on the array.
- Easier to exploit offload capabilities, as they are available at
the LUN/File granularity
- Will there be any issues when there are too many LUNs/Files
... is there any maxluns limit on Linux hosts that we might hit?
-- VDSM has been tested with 1K LUNs and it worked fine - ayal
- Storage array limitations on the number of LUNs can be a downside
here.
- Would it be OK to share the array for hosting another storage
domain if need be?
-- Provided the existing domain is not utilising all of the
free space
-- We can create new LUNs and hand them over to whoever needs
them?
-- Changes are needed in VDSM to work with raw LUNs; today it only
supports consuming LUNs via VG/LV.
1 LUN/nfs-export on the array == 1 storage domain
- How to represent a new vmdisk (aka vdsm volume) if it's a LUN
provisioned on a SAN target?
-- Will it be VG/LV as is done today for block domains ?
-- If yes, then it will be difficult to exploit offload
capabilities, as they are at the LUN level, not at the LV level.
- Each new vmdisk will be a new file on the nfs-export; assuming the
offload capability is available at the file level, this should work
for NAS targets?
- Can use the storage array for hosting multiple storage domains.
-- Provision one more LUN and use it for another storage domain
if need be.
- VDSM already supports this today, as part of the block storage
domain support for the LUN case.
Note that we will allow the user to choose either of the two options
above, depending on need.
4.2) Storage domain metadata will also include the features/capabilities
of the storage array as reported by LSM.
- Capabilities (taken via LSM) will be stored in the domain
metadata during storage domain create flow.
- Need changes in oVirt engine as well ( see 'oVirt Engine
potential changes' section below )
4.3) Should VDSM poll LSM for array capabilities on a regular basis?
Per ayal:
1. If we have a 'storage array' entity in oVirt Engine (see 'oVirt
Engine potential changes' section below), then we can have a 'refresh
capabilities' button/verb.
2. We can periodically query the storage array.
3. Query LSM before running operations (sounds redundant to me, but
if it's cheap enough it could be simplest).
Probably need a combination of 1+2 (query at a very low frequency -
1/hour or 1/day - plus the refresh button).
5) oVirt Engine potential changes - as described by ayal:
- We will either need a new 'storage array' entity in the engine to
keep credentials, or, in the case of storage array == storage domain,
just keep this info as part of the domain at the engine level.
- Have a 'storage array' entity in oVirt Engine to support
'refresh capabilities' as a button/verb.
- When a user, during storage provisioning, selects a LUN exported
from a storage array (via LSM), oVirt Engine will know from then
onwards that this LUN is being served via LSM.
It will then be able to query the capabilities of the LUN and
show them to the virt admin during the storage consumption flow.
6) Potential flows:
- Create snapshot flow
-- VDSM will check the snapshot offload capability in the
domain metadata
-- If available, and override is not configured, it will use
LSM to offload the LUN/File snapshot
-- If override is configured or the capability is not available,
it will use its internal logic to create the snapshot (qcow2).
- Copy/Clone vmdisk flow
-- VDSM will check the copy offload capability in the domain
metadata
-- If available, and override is not configured, it will use
LSM to offload the LUN/File copy
-- If override is configured or the capability is not available,
it will use its internal logic to create the copy (eg: a dd cmd in
the case of a LUN).
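Both flows share the same decision shape. As a sketch (all metadata
keys and helper functions are invented for illustration; vdsm keeps no
such metadata today):

    # Hypothetical offload-or-fallback decision shared by both flows.
    def snapshot_vmdisk(domain, vmdisk, override=False):
        caps = domain.getMetadata().get('lsm_capabilities', [])
        if 'snapshot_offload' in caps and not override:
            return lsm_snapshot(vmdisk.lun_or_file)  # offload to array
        return qcow2_snapshot(vmdisk)  # internal qcow2 logic

    def copy_vmdisk(domain, src, dst, override=False):
        caps = domain.getMetadata().get('lsm_capabilities', [])
        if 'copy_offload' in caps and not override:
            return lsm_copy(src.lun_or_file, dst.lun_or_file)
        return dd_copy(src, dst)  # internal logic, eg: dd for LUNs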
7) LSM potential changes:
- List features/capabilities of the array, eg: copy offload, thin
provisioning, etc.
- List containers (aka pools) (present in LSM today)
- Ability to list the different types of arrays being managed, their
capabilities and used/free space
- Ability to create/list/delete/resize volumes (LUNs or exports;
available in LSM as of today)
- Get monitoring info with an object (LUN/snapshot/volume) as an
optional parameter for specific info, eg: container/pool free/used
space, RAID type, etc.
We need to make sure the above info is listed in a coherent way across
arrays (number of LUNs, RAID type used, free/total per container/pool
and per LUN?). We also need I/O statistics wherever possible.
Can somebody look into why I don't have privileges to access bug #833425
by shuming@linux.vnet.ibm.com
Hi,
My account was registered with my IBM email account <shuming@cn.ibm.com>.
This bug should be VDSM-specific, and I think we should have enough
privileges to access VDSM bugs.
--
Shu Ming <shuming@linux.vnet.ibm.com>
IBM China Systems and Technology Laboratory
About code segment in vdsm/storage/image.py
by shuming@linux.vnet.ibm.com
Hi,
I am reading the code of the merge() function in image.py and have a
question about it. Why do we get the volume parameters from the parent
of the destination volume when the parent exists, instead of from the
destination volume itself?
    if dstParentUUID != sd.BLANK_UUID:
        volParams = vols[dstParentUUID].getVolumeParams()
    else:
        volParams = dstVol.getVolumeParams()
--
Shu Ming <shuming@linux.vnet.ibm.com>
IBM China Systems and Technology Laboratory
[virt-node] RFC: API Supportability
by smizrahi@redhat.com
The first thing we need to decide on is API supportability. I'll list
the questions that need to be answered. The decisions made here will
have a great effect on transport selection (especially the API change
process and versioning), so try to think about this without going into
specific technicalities (eg. "X can't be done on REST").
New API acceptance process
==========================
- What is the process to suggest new API calls?
- Who can ack such a change?
- Does someone have veto rights?
- Are there experimental APIs?
API deprecation process
=======================
- Do we allow deprecation?
- When can an API call be deprecated?
- Who can ack such a change?
- Does someone have veto rights?
API change process
==================
- Can calls be modified, or can no symbol ever repeat in a different
form?
- When can an API call be changed?
- Who can ack such a change?
- Does someone have veto rights?
API versioning
==============
- Is the API versioned as a whole, per subsystem (storage, networking,
etc.), or is each call versioned by itself?
- What happens when an old client connects to a newer server?
- What happens when a new client connects to an older server?
- How will versioning be expressed in the bindings?
- Do we restrict newer clients from using old APIs when talking to a
new server?