Re: [vdsm] [Users] glusterfs and ovirt
by deepakcs@linux.vnet.ibm.com
On 05/17/2012 11:05 PM, Itamar Heim wrote:
> On 05/17/2012 06:55 PM, Bharata B Rao wrote:
>> On Wed, May 16, 2012 at 3:29 PM, Itamar Heim<iheim(a)redhat.com> wrote:
>>> On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
>>>>
>>>> Yair
>>>>
>>>> Thanks for an update. Can I have KVM hypervisors also function as
>>>> storage
>>>> nodes for glusterfs? What is a release date for glusterfs support?
>>>> We're
>>>> looking for a production deployment in June. Thanks
>>>
>>>
>>> Current status is:
>>> 1. Patches for provisioning gluster clusters and volumes via ovirt
>>> are in review, trying to cover this feature set [1].
>>> I'm not sure if all of them will make the oVirt 3.1 version, which is
>>> slated to branch for stabilization June 1st, but I think "enough" is there.
>>> So I'd start trying the current upstream version to help find issues
>>> blocking you, and follow up on them during June as we stabilize oVirt 3.1
>>> for release (planned for end of June).
>>>
>>> 2. You should be able to use the same hosts for both gluster and virt,
>>> but there is no special logic/handling for this yet (i.e., trying it and
>>> providing feedback would help improve this mode).
>>> I would suggest starting with separate clusters first, though, and only
>>> later trying the joint mode.
>>>
>>> 3. creating a storage domain on top of gluster:
>>> - expose NFS on top of it, and consume as a normal nfs storage domain
>>> - use posixfs storage domain with gluster mount semantics
>>> - future: probably native gluster storage domain, up to native
>>> integration with qemu
>>
>> I am looking at GlusterFS integration with QEMU which involves adding
>> GlusterFS as block backend in QEMU. This will involve QEMU talking to
>> gluster directly via libglusterfs bypassing FUSE. I could specify a
>> volume file and the VM image directly on QEMU command line to boot
>> from the VM image that resides on a gluster volume.
>>
>> Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster
>>
>> In this example, Fedora.img is being served by gluster and client.vol
>> would have client-side translators specified.
>>
>> I am not sure if this use case would be served if GlusterFS is
>> integrated as a posixfs storage domain in VDSM. Posixfs would involve a
>> normal FUSE mount, and QEMU would be required to work with images from
>> the FUSE mount path?
>>
>> With QEMU supporting the GlusterFS backend natively, further optimizations
>> are possible when the gluster volume is local to the host node.
>> In this case, one could provide QEMU with a simple volume file that
>> contains neither client nor server xlators, but just the posix xlator.
>> This would lead to the most optimal I/O path, one that bypasses RPC
>> calls.
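(To illustrate the local-volume optimization described above: a minimal
volume file containing only the posix xlator might look roughly like the
following. This is just a sketch; the volume name and brick path are
placeholders.)

    volume brick-posix
      type storage/posix
      option directory /export/brick1
    end-volume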
>>
>> So do you think this use case (QEMU supporting the GlusterFS backend
>> natively and using a volume file to specify the needed translators)
>> warrants a specialized storage domain type for GlusterFS in VDSM?
>
> I'm not sure whether this calls for a special storage domain, or a
> PosixFS-based domain with enhanced capabilities.
> Ayal?
Related question:
With QEMU using the GlusterFS backend natively (as described above), QEMU
needs additional options/parameters as part of its command line (as given
above).
How does VDSM today support generating a custom qemu command line? I know
VDSM talks to libvirt, so is there a framework in VDSM to edit/modify the
domxml based on some pre-conditions, and how/where should one hook in to do
that modification? I know of the libvirt hooks framework in VDSM, but that
was more for temporary/experimental needs, or am I completely wrong here?
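For illustration, the kind of hook-based modification I have in mind would
look roughly like the sketch below (using the hooking module's
read_domxml/write_domxml helpers; the actual edit for a GlusterFS-served
image is left as a comment, since that is exactly the open question):

    #!/usr/bin/python
    # before_vm_start hook -- a sketch only, not a proposed implementation
    import hooking

    domxml = hooking.read_domxml()
    for disk in domxml.getElementsByTagName('disk'):
        # if the disk source is served by GlusterFS, rewrite the <source>
        # element (or add the extra qemu options) here -- this is exactly
        # the part that is unclear to me today
        pass
    hooking.write_domxml(domxml)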
Irrespective of whether GlusterFS integrates into VDSM as PosixFS or as a
special storage domain, that won't address the need to generate a custom
qemu command line when a file/image is served by GlusterFS. What's the way
to address this issue in VDSM?
I am assuming here that a special storage domain (aka repo engine) only
manages the image repository and image-related operations, and won't help
in modifying the qemu command line being generated.
[Ccing vdsm-devel also]
thanx,
deepak
spicec + vncviewer query
by Anil Vettathu
Hi,
I was able to get the display details for both spice and vnc using
vdsClient. Now how can I connect to the console using spicec or virt-viewer?
spicec is failing with the following log.
1338808977 INFO [32318:32318] Application::main: command line: spicec
--host 192.165.210.136 --port 5900 --secure-port 5901 --ca-file ca-cert.pem
1338808977 INFO [32318:32318] init_key_map: using evdev mapping
1338808979 INFO [32318:32318] MultyMonScreen::MultyMonScreen: platform_win:
77594625
1338808979 INFO [32318:32318] GUI::GUI:
1338808979 INFO [32318:32318] ForeignMenu::ForeignMenu: Creating a foreign
menu connection /tmp/SpiceForeignMenu-32318.uds
1338808979 INFO [32318:32319] RedPeer::connect_unsecure: Connected to
192.165.210.136 5900
1338808979 INFO [32318:32319] RedPeer::connect_secure: Connected to
192.165.210.136 5901
1338808979 WARN [32318:32319] RedChannel::run: connect failed 7
virt-viewer is failing on authentication even though I use a password
set by vmticket.
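For reference, the sequence I am trying is roughly the following (vmId and
password are placeholders; the host/ports are the ones from the log above,
and I am not even sure whether passing the ticket via spicec's --password
option is the right approach):

    vdsClient -s 0 setVmTicket <vmId> <password> 120
    spicec --host 192.165.210.136 --port 5900 --secure-port 5901 \
           --ca-file ca-cert.pem --password <password>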
Please note that the VMs are managed by oVirt.
Is it mandatory to use oVirt to connect to VM consoles?
Can someone guide me?
Thanks,
Anil
Agenda for today's call
by Dan Kenigsberg
Hi All,
I have only a few topics for today; please suggest others, or else the
call will be short and to the point!
- reviewers/verifiers are still missing for pep8 patches.
A branch was created, but not much action has taken place on it
http://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic:...
- Upcoming oVirt-3.1 release: version bump to 4.9.7? to 4.10?
- Vdsm/MOM integration: could we move MOM to gerrit.ovirt.org?
Regards,
Dan.
Sanlock readonly device support issue
by wudxw@linux.vnet.ibm.com
Hi Federico,
I found that vdsm fails to create a VM with a CDROM device after
configuring libvirt to use sanlock.
It's caused by the fact that sanlock doesn't support readonly/shared disks yet.
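For illustration, the failing case is a readonly CDROM disk in the domain
XML, something like the snippet below (paths are placeholders); sanlock
cannot acquire a lease on a readonly/shared disk, so VM creation fails:

    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/path/to/image.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>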
Libvirt has already fixed the problem. I filed a vdsm bug for this
problem; for more detailed information please see:
https://bugzilla.redhat.com/show_bug.cgi?id=828633
I would like to know whether we should require a libvirt version which
includes the fix. It has not been included in the latest F17 libvirt
packages. So is it necessary to file a bug against the libvirt package on
Fedora to backport it?
Thanks.
Mark
Re: [vdsm] Fwd: RFC: Writeup on VDSM-libstoragemgmt integration
by deepakcs@linux.vnet.ibm.com
(For some reason I never received Adam's note, though I am subscribed to all
three lists Cc'ed here; strange!
I am replying from the mail forwarded to me by my colleague, so please see
my responses inline below. Thanks.)
>
>
> ---------- Forwarded message ----------
> From: Adam Litke <agl(a)us.ibm.com>
> Date: Thu, May 31, 2012 at 7:31 PM
> Subject: Re: [vdsm] RFC: Writeup on VDSM-libstoragemgmt integration
> To: Deepak C Shetty <deepakcs(a)linux.vnet.ibm.com>
> Cc: libstoragemgmt-devel(a)lists.sourceforge.net, engine-devel(a)ovirt.org,
> VDSM Project Development <vdsm-devel(a)lists.fedorahosted.org>
>
>
> On Wed, May 30, 2012 at 03:08:46PM +0530, Deepak C Shetty wrote:
> > Hello All,
> >
> > I have a draft write-up on the VDSM-libstoragemgmt integration.
> > I wanted to run this thru' the mailing list(s) to help tune and
> > crystallize it, before putting it on the ovirt wiki.
> > I have run this once thru Ayal and Tony, so have some of their
> > comments incorporated.
> >
> > I still have few doubts/questions, which I have posted below with
> > lines ending with '?'
> >
> > Comments / Suggestions are welcome & appreciated.
> >
> > thanx,
> > deepak
> >
> > [Ccing engine-devel and libstoragemgmt lists as this stuff is
> > relevant to them too]
> >
> >
> --------------------------------------------------------------------------------------------------------------
> >
> > 1) Background:
> >
> > VDSM provides high level API for node virtualization management. It
> > acts in response to the requests sent by oVirt Engine, which uses
> > VDSM to do all node virtualization related tasks, including but not
> > limited to storage management.
> >
> > libstoragemgmt aims to provide vendor agnostic API for managing
> > external storage array. It should help system administrators
> > utilizing open source solutions have a way to programmatically
> > manage their storage hardware in a vendor neutral way. It also aims
> > to facilitate management automation, ease of use and take advantage
> > of storage vendor supported features which improve storage
> > performance and space utilization.
> >
> > Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/
> >
> > libstoragemgmt (LSM) today supports C and python plugins for talking
> > to external storage array using SMI-S as well as native interfaces
> > (eg: netapp plugin )
> > Plan is to grow the SMI-S interface as needed over time and add more
> > vendor specific plugins for exploiting features not possible via
> > SMI-S or have better alternatives than using SMI-S.
> > For eg: Many of the copy offload features require to use vendor
> > specific commands, which justifies the need for a vendor specific
> > plugin.
> >
> >
> > 2) Goals:
> >
> > 2a) Ability to plugin external storage array into oVirt/VDSM
> > virtualization stack, in a vendor neutral way.
> >
> > 2b) Ability to list features/capabilities and other statistical
> > info of the array
> >
> > 2c) Ability to utilize the storage array offload capabilities
> > from oVirt/VDSM.
> >
> >
> > 3) Details:
> >
> > LSM will sit as a new repository engine in VDSM.
> > VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192
> >
> > Current plan is to have LSM co-exist with VDSM on the virtualization
> nodes.
> >
> > *Note : 'storage' used below is generic. It can be a file/nfs-export
> > for NAS targets and LUN/logical-drive for SAN targets.
> >
> > VDSM can use LSM and do the following...
> > - Provision storage
> > - Consume storage
> >
> > 3.1) Provisioning Storage using LSM
> >
> > Typically this will be done by a Storage administrator.
> >
> > oVirt/VDSM should provide storage admin the
> > - ability to list the different storage arrays along with their
> > types (NAS/SAN), capabilities, free/used space.
> > - ability to provision storage using any of the array
> > capabilities (eg: thin provisioned lun or new NFS export )
> > - ability to manage the provisioned storage (eg: resize/delete
> storage)
>
> I guess vdsm will need to model a new type of object (perhaps
> StorageTarget) to
> be used for performing the above provisioning operations. Then, to
> consume the
> provisioned storage, we could create a StorageConnectionRef by passing
> in a
> StorageTarget object and some additional parameters. Sound about right?
Sounds right to me, but I am not an expert in the VDSM object model;
Saggi/Ayal/Dan can provide more input here. The (proposed) storage array
entity in oVirt Engine can use this vdsm object to communicate and work
with the storage array when doing the provisioning work.
Going ahead with the change to the new Image Repository, I was envisioning
that LSM, when integrated as a new repo engine, will exhibit "Storage
Provisioning" as an implicit feature/capability; only then will it be
picked up by the StorageTarget, otherwise not.
>
> > Once the storage is provisioned by the storage admin, VDSM will have
> > to refresh the host(s) for them to be able to see the newly
> > provisioned storage.
>
> How would this refresh affect currently connected storage and running VMs?
I am not too sure on this... looking for more info from the experts here.
Per Ayal, getDeviceInfo should help refresh, but by 'affect' are you
referring to what happens if, post refresh, the device IDs and/or names of
the existing storage on the host change? What exactly is the concern here?
>
> > 3.1.1) Potential flows:
> >
> > Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is
> > needed to make LUN available to list of hosts passed by mgmt
> > Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices)
> > Repeat above for all relevant hosts (depending on list passed
> > earlier, mostly relevant when extending an existing VG)
> > Mgmt -> use LUN in normal flows.
> >
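(As a rough illustration of what the first step could look like through the
libstoragemgmt Python binding; the API names here, lsm.Client, volume_create
and volume_mask, are taken from the current binding and may not match the
version we end up using, and the URI, pool choice and size are placeholders:)

    import lsm

    client = lsm.Client('smispy://admin@array.example.com')
    pool = client.pools()[0]
    # ignoring the asynchronous-job case for brevity
    job, vol = client.volume_create(pool, 'vm-disk-001',
                                    20 * 2 ** 30, lsm.Volume.PROVISION_THIN)
    # make the new LUN visible to the relevant hosts
    ag = client.access_groups()[0]
    client.volume_mask(ag, vol)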
> >
> > 3.1.2) How oVirt Engine will know which LSM to use ?
> >
> > Normally the way this works today is that user can choose the host
> > to use (default today is SPM), however there are a few flows where
> > mgmt will know which host to use:
> > 1. extend storage domain (add LUN to existing VG) - Use SPM and make
> > sure *all* hosts that need access to this SD can see the new LUN
> > 2. attach new LUN to a VM which is pinned to a specific host - use
> this host
> > 3. attach new LUN to a VM which is not pinned - use a host from the
> > cluster the VM belongs to and make sure all nodes in cluster can see
> > the new LUN
>
> You are still going to need to worry about locking the shared storage
> resource.
> Will libstoragemgmt have storage clustering support baked in or will
> we continue
> to rely on SPM? If the latter is true, most/all of these operations
> would still
> need to be done by SPM if I understand correctly.
The above scenarios were noted by me on behalf of Ayal.
I don't think LSM will worry about storage clustering. We are just using
LSM to 'talk' with the storage array. I am not sure if we need locking for
the above scenarios. We are just ensuring that the newly provisioned LUN is
visible to the relevant hosts, so I am not sure why we would need locking.
>
> > Flows for which there is no clear candidate (Maybe we can use the
> > SPM host itself which is the default ?)
> > 1. create a new disk without attaching it to any VM
> > 2. create a LUN for a new storage domain
>
> Yes, SPM would seem correct to me.
>
> > 3.2) Consuming storage using LSM
> >
> > Typically this will be done by a virtualization administrator
> >
> > oVirt/VDSM should allow virtualization admin to
> > - Create a new storage domain using the storage on the array.
> > - Be able to specify whether VDSM should use the storage offload
> > capability (default) or override it to use its own internal logic.
>
> If vdsm can make the right decisions, I would prefer that vdsm decides
> when to use
> hardware offload and when to use software algorithms without administrator
> intervention. It's another case where oVirt can provide value-add by
> simplifying the configuration and providing optimal performance.
Per Ayal, the thought was that in scenarios where we know the storage
array implementation is not optimal, we can override and tell VDSM to use
its internal logic rather than offload.
>
> > 4) VDSM potential changes:
> >
> > 4.1) How to represent a VM disk, 1 LUN = 1 VMdisk or 1 LV = 1 VMdisk
> > ? which bring another question...1 array == 1 storage domain OR 1
> > LUN/nfs-export on the array == 1 storage domain ?
>
> Saggi has mentioned some ideas on this topic so I will encourage him
> to explain
> his thoughts here.
Looking forward to Saggi's thoughts :)
>
> >
> > Pros & Cons of each...
> >
> > 1 array == 1 storage domain
> > - Each new vmdisk (aka volume) will be a new lun/file on the array.
> > - Easier to exploit offload capabilities, as they are available
> > at the LUN/File granularity
> > - Will there be any issues where there will be too many
> > LUNs/Files ... any maxluns limit on linux hosts that we might hit ?
> > -- VDSM has been tested with 1K LUNs and it worked fine - ayal
> > - Storage array limitations on the number of LUNs can be a
> > downside here.
> > - Would it be ok to share the array for hosting another storage
> > domain if need be ?
> > -- Provided the existing domain is not utilising all of the
> > free space
> > -- We can create new LUNs and hand it over to anyone needed ?
> > -- Changes needed in VDSM to work with raw LUNs, today it only
> > has support for consuming LUNs via VG/LV.
> >
> > 1 LUN/nfs-export on the array == 1 storage domain
> > - How to represent a new vmdisk (aka vdsm volume) if its a LUN
> > provisioned using SAN target ?
> > -- Will it be VG/LV as is done today for block domains ?
> > -- If yes, then it will be difficult to exploit offload
> > capabilities, as they are at LUN level, not at LV level.
> > - Each new vmdisk will be a new file on the nfs-export, assuming
> > offload capability is available at the file level, so this should
> > work for NAS targets ?
> > - Can use the storage array for hosting multiple storage domains.
> > -- Provision one more LUN and use it for another storage
> > domain if need be.
> > - VDSM already supports this today, as part of block storage
> > domains for LUNs case.
> >
> > Note that we will allow user to do either one of the two options
> > above, depending on need.
> >
> > 4.2) Storage domain metadata will also include the
> > features/capabilities of the storage array as reported by LSM.
> > - Capabilities (taken via LSM) will be stored in the domain
> > metadata during storage domain create flow.
> > - Need changes in oVirt engine as well ( see 'oVirt Engine
> > potential changes' section below )
>
> Do we want to store the exact hw capabilities or some set of vdsm
> chosen feature
> bits that are set at create time based on the discovered hw
> capabilities? The
> difference would be that vdsm could choose which features to enable at
> create
> time and update those features later if needed.
IIUC, you are saying VDSM will only look for those capabilities which are
of interest to it, and store them? That should be done by way of LSM
returning its capabilities as part of it being an Image Repo.
I am referring to how localFSRepo (def capabilities) is shown in the PoC
Saggie posted @ http://gerrit.ovirt.org/#change,192
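Purely as an illustration of what I mean (the class and capability names
below are made up, not taken from the actual PoC), an LSM-backed repo
engine could advertise something like:

    # hypothetical sketch only; names are illustrative
    class LsmImageRepo(object):
        def capabilities(self):
            # advertise what the underlying array can offload, as queried via LSM
            return ['create', 'delete', 'resize',
                    'snapshot_offload', 'copy_offload', 'storage_provisioning']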
>
> > 4.3) VDSM to poll LSM for array capabilities on a regular basis ?
> > Per ayal:
> > - If we have a 'storage array' entity in oVirt Engine (see
> > 'oVirt Engine potential changes' section below ) then we can have a
> > 'refresh capabilities' button/verb.
> > - We can periodically query the storage array.
> > - Query LSM before running operations (sounds redundant to me,
> > but if it's cheap enough it could be simplest).
> >
> > Probably need a combination of 1+2 (query at very low frequency
> > - 1/hour or 1/day + refresh button)
>
> This problem can be alleviated by the abstraction I suggested above.
> Then, LSM can be queried only when we may want to adjust the policy
> connected with a particular storage target.
This is not clear to me; can you explain more?
LSM might need to be contacted to update the capabilities, because storage
admins can add/remove capabilities over a period of time. Many storage
arrays provide the ability to enable/disable array features on demand.
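A periodic refresh could be as simple as the sketch below (again, the names
lsm.Client, capabilities() and Capabilities.VOLUME_REPLICATE are from the
libstoragemgmt Python binding and may differ in the version we use; the URI
is a placeholder):

    import lsm

    client = lsm.Client('smispy://admin@array.example.com')
    for system in client.systems():
        caps = client.capabilities(system)
        supported = caps.supported(lsm.Capabilities.VOLUME_REPLICATE)
        # persist 'supported' and friends, e.g. in storage domain metadata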
>
> > 5) oVirt Engine potential changes - as described by ayal :
> >
> > - We will either need a new 'storage array' entity in engine to
> > keep credentials, or, in case of storage array as storage domain,
> > just keep this info as part of the domain at engine level.
> > - Have a 'storage array' entity in oVirt Engine to support
> > 'refresh capabilities' as a button/verb.
> > - When user during storage provisioning, selects a LUN exported
> > from a storage array (via LSM), the oVirt Engine would know from
> > then onwards that this LUN is being served via LSM.
> > It would then be able to query the capabilities of the LUN
> > and show it to the virt admin during storage consumption flow.
> >
> > 6) Potential flows:
> > - Create snapshot flow
> > -- VDSM will check the snapshot offload capability in the
> > domain metadata
> > -- If available, and override is not configured, it will use
> > LSM to offload LUN/File snapshot
> > -- If override is configured or capability is not available,
> > it will use its internal logic to create
> > snapshot (qcow2).
> >
> > - Copy/Clone vmdisk flow
> > -- VDSM will check the copy offload capability in the domain
> > metadata
> > -- If available, and override is not configured, it will use
> > LSM to offload LUN/File copy
> > -- If override is configured or capability is not available,
> > it will use its internal logic to create
> > the copy (eg: dd cmd in case of LUN).
> >
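(In pseudo-Python, the decision in both flows boils down to something like
the following; all names here are illustrative, none of these helpers exist
in vdsm today:)

    # illustrative pseudo-code only
    def copy_volume(domain, src, dst):
        if 'copy_offload' in domain.capabilities() and not domain.offload_overridden():
            lsm_copy(src, dst)        # offload the copy to the array via LSM
        else:
            internal_copy(src, dst)   # e.g. dd for a LUN, qemu-img for a file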
> > 7) LSM potential changes:
> >
> > - list features/capabilities of the array. Eg: copy offload,
> > thin prov. etc.
> > - list containers (aka pools) (present in LSM today)
> > - Ability to list different types of arrays being managed, their
> > capabilities and used/free space
> > - Ability to create/list/delete/resize volumes ( LUN or exports,
> > available in LSM as of today)
> > - Get monitoring info with object (LUN/snapshot/volume) as
> > optional parameter for specific info. eg: container/pool free/used
> > space, raid type etc.
> >
> > Need to make sure above info is listed in a coherent way across
> > arrays (number of LUNs, raid type used? free/total per
> > container/pool, per LUN?. Also need I/O statistics wherever
> > possible.
I forgot to add this in the original mail.. adding it now.
8) Concerns/Issues
- Per Tony of libstoragemgmt
-- Some additional things to consider.
-- Some of the array vendors may not allow multiple points of control at
the same time. e.g. you may not be able to have 2 or more nodes running
libStorageMgmt at the same time talking to the same array. NetApp
limits what things can be done concurrently.
-- LibStorageMgmt currently just provides the bits to control external
storage arrays. The plug-in daemon and the plug-ins themselves execute
unprivileged.
- How will the change from SPM to SDM affect the above discussions?
VDSM API/clientIF instance design issue
by wudxw@linux.vnet.ibm.com
Hi Guys,
Recently, I have been working on integrating MOM into VDSM. MOM needs to
use the VDSM API to interact with it. But currently, using the vdsm API
requires an instance of clientIF. Passing clientIF to MOM is not a good
choice since it's a vdsm-internal object. So I am trying to remove the
parameter 'cif' from the interface definition and instead access the
globally unique clientIF instance in API.py.
To get the instance of clientIF, I added a decorator to clientIF to
turn it into a singleton. Actually, clientIF has been working as a
global single instance already; we just don't have an interface to get
it, and so pass it as a parameter instead. I think using a singleton to
get the instance of clientIF is cleaner.
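For clarity, the kind of decorator I mean is roughly the following (a
simplified sketch, not the actual patch in gerrit):

    # simplified sketch of the singleton decorator
    def singleton(cls):
        instances = {}

        def getInstance(*args, **kwargs):
            # create clientIF on the first call, then always return the same object
            if cls not in instances:
                instances[cls] = cls(*args, **kwargs)
            return instances[cls]
        return getInstance

    @singleton
    class clientIF(object):
        pass   # existing clientIF body unchanged

API.py would then call clientIF() to obtain the already-created instance
instead of receiving 'cif' as a parameter (the real patch would of course
need to handle the case where the instance has not been created yet).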
Dan and Saggi already gave some comments in
http://gerrit.ovirt.org/#change,4839. Thanks for reviewing! But I
think we need more discussion on it, so I am posting it here because gerrit
is not the appropriate place to discuss a design issue.
Thanks !
Mark.