Hello All,
I have a draft write-up on the VDSM-libstoragemgmt integration. I wanted to run this through the mailing list(s) to help tune and crystallize it before putting it on the oVirt wiki. I have already run it once by Ayal and Tony, so some of their comments are incorporated.
I still have a few doubts/questions, which I have posted below as lines ending with '?'.
Comments / Suggestions are welcome & appreciated.
thanx, deepak
[Ccing engine-devel and libstoragemgmt lists as this stuff is relevant to them too]
--------------------------------------------------------------------------------------------------------------
1) Background:
VDSM provides a high-level API for node virtualization management. It acts in response to requests sent by oVirt Engine, which uses VDSM to perform all node virtualization tasks, including but not limited to storage management.
libstoragemgmt aims to provide a vendor-agnostic API for managing external storage arrays. It should give system administrators using open source solutions a way to programmatically manage their storage hardware in a vendor-neutral way. It also aims to facilitate management automation and ease of use, and to take advantage of storage-vendor-supported features that improve storage performance and space utilization.
Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/
libstoragemgmt (LSM) today supports C and Python plugins for talking to external storage arrays using SMI-S as well as native interfaces (e.g. the NetApp plugin). The plan is to grow the SMI-S interface as needed over time and to add more vendor-specific plugins to exploit features that are not possible via SMI-S, or for which there are better alternatives than SMI-S. For example, many of the copy-offload features require vendor-specific commands, which justifies the need for a vendor-specific plugin.
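To make this a bit more concrete, below is a minimal sketch of querying an array through the LSM Python binding. The exact method names and constants (Client, systems(), pools(), capabilities(), Capabilities.VOLUME_*) follow the lsm Python API and should be treated as assumptions here, since the API is still evolving.

import lsm

# The URI scheme selects the plugin; 'sim://' is the built-in simulator,
# vendor and SMI-S arrays use their own schemes.
client = lsm.Client('sim://')

for system in client.systems():
    caps = client.capabilities(system)
    # Capabilities behaves like a feature bitmap; supported() checks one feature.
    print("%s: volume_create=%s replicate=%s" %
          (system.name,
           caps.supported(lsm.Capabilities.VOLUME_CREATE),
           caps.supported(lsm.Capabilities.VOLUME_REPLICATE)))

for pool in client.pools():
    print("%s: total=%d free=%d" % (pool.name, pool.total_space, pool.free_space))

client.close()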
2) Goals:
2a) Ability to plug an external storage array into the oVirt/VDSM virtualization stack in a vendor-neutral way.
2b) Ability to list features/capabilities and other statistical info of the array.
2c) Ability to utilize the storage array offload capabilities from oVirt/VDSM.
3) Details:
LSM will sit as a new repository engine in VDSM. VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192
Current plan is to have LSM co-exist with VDSM on the virtualization nodes.
*Note: 'storage' as used below is generic. It can be a file/NFS export for NAS targets or a LUN/logical drive for SAN targets.
VDSM can use LSM to do the following:
- Provision storage
- Consume storage
3.1) Provisioning Storage using LSM
Typically this will be done by a Storage administrator.
oVirt/VDSM should provide the storage admin the:
- ability to list the different storage arrays along with their types (NAS/SAN), capabilities, and free/used space
- ability to provision storage using any of the array capabilities (e.g. a thin-provisioned LUN or a new NFS export)
- ability to manage the provisioned storage (e.g. resize/delete storage)
Once the storage is provisioned by the storage admin, VDSM will have to refresh the host(s) so that they can see the newly provisioned storage.
3.1.1) Potential flows:
Mgmt -> vdsm -> lsm: create LUN + LUN mapping / zoning / whatever is needed to make the LUN available to the list of hosts passed by mgmt
Mgmt -> vdsm: getDeviceList (refreshes the host and gets the list of devices)
Repeat the above for all relevant hosts (depending on the list passed earlier; mostly relevant when extending an existing VG)
Mgmt -> use the LUN in normal flows.
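To make the vdsm -> lsm leg of the flow above more concrete, here is a hedged sketch. The lsm calls (volume_create, volume_mask, Volume.PROVISION_DEFAULT) follow the lsm Python API; the surrounding function, its name and the access-group handling are hypothetical placeholders for whatever the repository-engine work ends up defining.

import lsm

def provision_lun(client, pool, name, size_bytes, access_groups):
    # Mgmt -> vdsm -> lsm: create the LUN on the array. lsm calls that may run
    # asynchronously return (job_id, object); job polling is omitted in this sketch.
    job_id, vol = client.volume_create(pool, name, size_bytes,
                                       lsm.Volume.PROVISION_DEFAULT)
    # Map/mask the LUN so the hosts passed by mgmt can see it.
    for ag in access_groups:
        client.volume_mask(ag, vol)
    return vol

# Mgmt -> vdsm: getDeviceList is then run on each relevant host to rescan and
# confirm the new LUN is visible, before the LUN is used in normal flows.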
3.1.2) How will oVirt Engine know which LSM to use?
Normally the way this works today is that the user can choose the host to use (the default today is the SPM); however, there are a few flows where mgmt will know which host to use:
1. extend storage domain (add LUN to existing VG) - use the SPM and make sure *all* hosts that need access to this SD can see the new LUN
2. attach new LUN to a VM which is pinned to a specific host - use this host
3. attach new LUN to a VM which is not pinned - use a host from the cluster the VM belongs to and make sure all nodes in the cluster can see the new LUN
Flows for which there is no clear candidate (maybe we can use the SPM host itself, which is the default?):
1. create a new disk without attaching it to any VM
2. create a LUN for a new storage domain
3.2) Consuming storage using LSM
Typically this will be done by a virtualization administrator.
oVirt/VDSM should allow the virtualization admin to:
- Create a new storage domain using the storage on the array.
- Specify whether VDSM should use the storage offload capability (default) or override it to use its own internal logic.
4) VDSM potential changes:
4.1) How to represent a VM disk: 1 LUN == 1 VM disk, or 1 LV == 1 VM disk? Which brings up another question: 1 array == 1 storage domain, or 1 LUN/nfs-export on the array == 1 storage domain?
Pros & Cons of each...
1 array == 1 storage domain
- Each new vmdisk (aka volume) will be a new LUN/file on the array.
- Easier to exploit offload capabilities, as they are available at the LUN/file granularity.
- Will there be any issues where there are too many LUNs/files... any maxluns limit on Linux hosts that we might hit?
-- VDSM has been tested with 1K LUNs and it worked fine - ayal
- Storage array limitations on the number of LUNs can be a downside here.
- Would it be ok to share the array for hosting another storage domain if need be?
-- Provided the existing domain is not utilising all of the free space
-- We can create new LUNs and hand them over to anyone who needs them?
-- Changes needed in VDSM to work with raw LUNs; today it only has support for consuming LUNs via VG/LV.
1 LUN/nfs-export on the array == 1 storage domain
- How to represent a new vmdisk (aka vdsm volume) if it is a LUN provisioned using a SAN target?
-- Will it be VG/LV as is done today for block domains?
-- If yes, then it will be difficult to exploit offload capabilities, as they are at the LUN level, not at the LV level.
- Each new vmdisk will be a new file on the nfs-export, assuming offload capability is available at the file level, so this should work for NAS targets?
- Can use the storage array for hosting multiple storage domains.
-- Provision one more LUN and use it for another storage domain if need be.
- VDSM already supports this today, as part of block storage domains for the LUNs case.
Note that we will allow the user to choose either of the two options above, depending on need.
4.2) Storage domain metadata will also include the features/capabilities of the storage array as reported by LSM.
- Capabilities (taken via LSM) will be stored in the domain metadata during the storage domain create flow.
- Needs changes in oVirt Engine as well (see the 'oVirt Engine potential changes' section below).
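Purely as an illustration of 4.2, the domain metadata could grow a handful of capability keys along the following lines. The key names are invented for this example and are not an existing VDSM metadata format.

# Hypothetical capability keys added to the storage domain metadata at
# createStorageDomain time, populated from what LSM reports for the array.
CAPABILITY_MD_EXAMPLE = {
    'ARRAY_URI': 'ontap://admin@filer.example.com',  # which LSM plugin/array backs the domain
    'OFFLOAD_SNAPSHOT': '1',    # array can snapshot a LUN/file
    'OFFLOAD_COPY': '1',        # array can clone/copy a LUN/file
    'THIN_PROVISIONING': '0',
}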
4.3) Should VDSM poll LSM for array capabilities on a regular basis? Per ayal:
1. If we have a 'storage array' entity in oVirt Engine (see the 'oVirt Engine potential changes' section below) then we can have a 'refresh capabilities' button/verb.
2. We can periodically query the storage array.
3. Query LSM before running operations (sounds redundant to me, but if it's cheap enough it could be simplest).
Probably need a combination of 1 + 2 (query at a very low frequency - once per hour or per day - plus a refresh button).
5) oVirt Engine potential changes (as described by ayal):
- We will either need a new 'storage array' entity in engine to keep credentials, or, in the case of storage array as storage domain, just keep this info as part of the domain at the engine level.
- Have a 'storage array' entity in oVirt Engine to support 'refresh capabilities' as a button/verb.
- When the user, during storage provisioning, selects a LUN exported from a storage array (via LSM), oVirt Engine will know from then onwards that this LUN is served via LSM. It will then be able to query the capabilities of the LUN and show them to the virt admin during the storage consumption flow.
6) Potential flows:
- Create snapshot flow
-- VDSM will check the snapshot offload capability in the domain metadata
-- If available, and override is not configured, it will use LSM to offload the LUN/file snapshot
-- If override is configured or the capability is not available, it will use its internal logic to create the snapshot (qcow2).
- Copy/clone vmdisk flow (a rough sketch of the shared offload-vs-fallback decision follows below)
-- VDSM will check the copy offload capability in the domain metadata
-- If available, and override is not configured, it will use LSM to offload the LUN/file copy
-- If override is configured or the capability is not available, it will use its internal logic to do the copy (e.g. a dd cmd in the LUN case).
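Both flows share the same decision shape; here is a rough sketch of it. domain_md and the offload_*/fallback_* helpers are placeholder names, not existing VDSM code.

def create_snapshot(domain_md, volume, override_offload=False):
    # Offload to the array via LSM when the capability was recorded in the
    # domain metadata and the admin did not override it.
    if domain_md.get('OFFLOAD_SNAPSHOT') == '1' and not override_offload:
        return offload_snapshot_via_lsm(volume)
    # Otherwise fall back to VDSM's internal logic (qcow2 here; the copy/clone
    # flow would fall back to a dd-style copy for LUNs).
    return fallback_qcow2_snapshot(volume)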
7) LSM potential changes:
- List features/capabilities of the array, e.g. copy offload, thin provisioning, etc.
- List containers (aka pools) (present in LSM today)
- Ability to list the different types of arrays being managed, their capabilities and used/free space
- Ability to create/list/delete/resize volumes (LUNs or exports; available in LSM as of today)
- Get monitoring info with an object (LUN/snapshot/volume) as an optional parameter for specific info, e.g. container/pool free/used space, RAID type, etc.
Need to make sure the above info is listed in a coherent way across arrays (number of LUNs, RAID type used, free/total space - per container/pool? per LUN?). Also need I/O statistics wherever possible.
On Wed, May 30, 2012 at 03:08:46PM +0530, Deepak C Shetty wrote:
Hello All,
I have a draft write-up on the VDSM-libstoragemgmt integration.
I wanted to run this thru' the mailing list(s) to help tune and crystallize it, before putting it on the ovirt wiki. I have run this once thru Ayal and Tony, so have some of their comments incorporated.
I still have few doubts/questions, which I have posted below with lines ending with '?'
Comments / Suggestions are welcome & appreciated.
thanx, deepak
[Ccing engine-devel and libstoragemgmt lists as this stuff is relevant to them too]
- Background:
VDSM provides high level API for node virtualization management. It acts in response to the requests sent by oVirt Engine, which uses VDSM to do all node virtualization related tasks, including but not limited to storage management.
libstoragemgmt aims to provide vendor agnostic API for managing external storage array. It should help system administrators utilizing open source solutions have a way to programmatically manage their storage hardware in a vendor neutral way. It also aims to facilitate management automation, ease of use and take advantage of storage vendor supported features which improve storage performance and space utilization.
Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/
libstoragemgmt (LSM) today supports C and python plugins for talking to external storage array using SMI-S as well as native interfaces (eg: netapp plugin ) Plan is to grow the SMI-S interface as needed over time and add more vendor specific plugins for exploiting features not possible via SMI-S or have better alternatives than using SMI-S. For eg: Many of the copy offload features require to use vendor specific commands, which justifies the need for a vendor specific plugin.
Goals:
2a) Ability to plugin external storage array into oVirt/VDSM
virtualization stack, in a vendor neutral way.
2b) Ability to list features/capabilities and other statistical
info of the array
2c) Ability to utilize the storage array offload capabilities
from oVirt/VDSM.
- Details:
LSM will sit as a new repository engine in VDSM. VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192
Current plan is to have LSM co-exist with VDSM on the virtualization nodes.
*Note : 'storage' used below is generic. It can be a file/nfs-export for NAS targets and LUN/logical-drive for SAN targets.
VDSM can use LSM and do the following... - Provision storage - Consume storage
3.1) Provisioning Storage using LSM
Typically this will be done by a Storage administrator.
oVirt/VDSM should provide storage admin the - ability to list the different storage arrays along with their types (NAS/SAN), capabilities, free/used space. - ability to provision storage using any of the array capabilities (eg: thin provisioned lun or new NFS export ) - ability to manage the provisioned storage (eg: resize/delete storage)
I guess vdsm will need to model a new type of object (perhaps StorageTarget) to be used for performing the above provisioning operations. Then, to consume the provisioned storage, we could create a StorageConnectionRef by passing in a StorageTarget object and some additional parameters. Sound about right?
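If it helps, here is a very rough sketch of that split; both class names are hypothetical and only illustrate the provisioning vs. consumption separation.

class StorageTarget(object):
    """An array (or a pool/export on it) reachable through libstoragemgmt;
    used for provisioning operations such as listing capabilities and
    creating/resizing/deleting LUNs or exports."""
    def __init__(self, lsm_uri, credentials):
        self.lsm_uri = lsm_uri
        self.credentials = credentials

class StorageConnectionRef(object):
    """A consumable piece of storage built from a StorageTarget plus the
    parameters needed to connect to it (e.g. portal/IQN/LUN for iSCSI,
    server:/path for NFS)."""
    def __init__(self, target, connection_params):
        self.target = target
        self.connection_params = connection_params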
Once the storage is provisioned by the storage admin, VDSM will have to refresh the host(s) for them to be able to see the newly provisioned storage.
How would this refresh affect currently connected storage and running VMs?
3.1.1) Potential flows:
Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is needed to make LUN available to list of hosts passed by mgmt Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices) Repeat above for all relevant hosts (depending on list passed earlier, mostly relevant when extending an existing VG) Mgmt -> use LUN in normal flows.
3.1.2) How oVirt Engine will know which LSM to use ?
Normally the way this works today is that user can choose the host to use (default today is SPM), however there are a few flows where mgmt will know which host to use:
- extend storage domain (add LUN to existing VG) - Use SPM and make
sure *all* hosts that need access to this SD can see the new LUN 2. attach new LUN to a VM which is pinned to a specific host - use this host 3. attach new LUN to a VM which is not pinned - use a host from the cluster the VM belongs to and make sure all nodes in cluster can see the new LUN
You are still going to need to worry about locking the shared storage resource. Will libstoragemgmt have storage clustering support baked in or will we continue to rely on SPM? If the latter is true, most/all of these operations would still need to be done by SPM if I understand correctly.
Flows for which there is no clear candidate (Maybe we can use the SPM host itself which is the default ?)
- create a new disk without attaching it to any VM
- create a LUN for a new storage domain
Yes, SPM would seem correct to me.
3.2) Consuming storage using LSM
Typically this will be done by a virtualization administrator
oVirt/VDSM should allow virtualization admin to - Create a new storage domain using the storage on the array. - Be able to specify whether VDSM should use the storage offload capability (default) or override it to use its own internal logic.
If vdsm can make the right decisions, I would prefer that vdsm decides when to use hardware offload and when to use software algorithms without administrator intervention. It's another case where oVirt can provide value-add by simplifying the configuration and providing optimal performance.
- VDSM potential changes:
4.1) How to represent a VM disk, 1 LUN = 1 VMdisk or 1 LV = 1 VMdisk ? which bring another question...1 array == 1 storage domain OR 1 LUN/nfs-export on the array == 1 storage domain ?
Saggi has mentioned some ideas on this topic so I will encourage him to explain his thoughts here.
Pros & Cons of each...
1 array == 1 storage domain - Each new vmdisk (aka volume) will be a new lun/file on the array. - Easier to exploit offload capabilities, as they are available at the LUN/File granularity - Will there be any issues where there will be too many LUNs/Files ... any maxluns limit on linux hosts that we might hit ? -- VDSM has been tested with 1K LUNs and it worked fine - ayal - Storage array limitations on the number of LUNs can be a downside here. - Would it be ok to share the array for hosting another storage domain if need be ? -- Provided the existing domain is not utilising all of the free space -- We can create new LUNs and hand it over to anyone needed ? -- Changes needed in VDSM to work with raw LUNs, today it only has support for consuming LUNs via VG/LV.
1 LUN/nfs-export on the array == 1 storage domain - How to represent a new vmdisk (aka vdsm volume) if its a LUN provisioned using SAN target ? -- Will it be VG/LV as is done today for block domains ? -- If yes, then it will be difficult to exploit offload capabilities, as they are at LUN level, not at LV level. - Each new vmdisk will be a new file on the nfs-export, assuming offload capability is available at the file level, so this should work for NAS targets ? - Can use the storage array for hosting multiple storage domains. -- Provision one more LUN and use it for another storage domain if need be. - VDSM already supports this today, as part of block storage domains for LUNs case.
Note that we will allow user to do either one of the two options above, depending on need.
4.2) Storage domain metadata will also include the features/capabilities of the storage array as reported by LSM. - Capabilities (taken via LSM) will be stored in the domain metadata during storage domain create flow. - Need changes in oVirt engine as well ( see 'oVirt Engine potential changes' section below )
Do we want to store the exact hw capabilities or some set of vdsm chosen feature bits that are set at create time based on the discovered hw capabilities? The difference would be that vdsm could choose which features to enable at create time and update those features later if needed.
4.3) VDSM to poll LSM for array capabilities on a regular basis ? Per ayal: - If we have a 'storage array' entity in oVirt Engine (see 'oVirt Engine potential changes' section below ) then we can have a 'refresh capabilities' button/verb. - We can periodically query the storage array. - Query LSM before running operations (sounds redundant to me, but if it's cheap enough it could be simplest).
Probably need a combination of 1+2 (query at very low frequency
- 1/hour or 1/day + refresh button)
This problem can be alleviated by the abstraction I suggested above. Then, LSM can be queried only when we may want to adjust the policy connected with a particular storage target.
oVirt Engine potential changes - as described by ayal :
- We will either need a new 'storage array' entity in engine to
keep credentials, or, in case of storage array as storage domain, just keep this info as part of the domain at engine level. - Have a 'storage array' entity in oVirt Engine to support 'refresh capabilities' as a button/verb. - When user during storage provisioning, selects a LUN exported from a storage array (via LSM), the oVirt Engine would know from then onwards that this LUN is being served via LSM. It would then be able to query the capabilities of the LUN and show it to the virt admin during storage consumption flow.
- Potential flows:
- Create snapshot flow -- VDSM will check the snapshot offload capability in the
domain metadata -- If available, and override is not configured, it will use LSM to offload LUN/File snapshot -- If override is configured or capability is not available, it will use its internal logic to create snapshot (qcow2).
- Copy/Clone vmdisk flow -- VDSM will check the copy offload capability in the domain
metadata -- If available, and override is not configured, it will use LSM to offload LUN/File copy -- If override is configured or capability is not available, it will use its internal logic to create snapshot (eg: dd cmd in case of LUN).
LSM potential changes:
- list features/capabilities of the array. Eg: copy offload,
thin prov. etc. - list containers (aka pools) (present in LSM today) - Ability to list different types of arrays being managed, their capabilities and used/free space - Ability to create/list/delete/resize volumes ( LUN or exports, available in LSM as of today) - Get monitoring info with object (LUN/snapshot/volume) as optional parameter for specific info. eg: container/pool free/used space, raid type etc.
Need to make sure above info is listed in a coherent way across arrays (number of LUNs, raid type used? free/total per container/pool, per LUN?. Also need I/O statistics wherever possible.
On 2012-5-30 17:38, Deepak C Shetty wrote:
Hello All,
I have a draft write-up on the VDSM-libstoragemgmt integration.
I wanted to run this thru' the mailing list(s) to help tune and crystallize it, before putting it on the ovirt wiki. I have run this once thru Ayal and Tony, so have some of their comments incorporated.
I still have few doubts/questions, which I have posted below with lines ending with '?'
Comments / Suggestions are welcome & appreciated.
thanx, deepak
[Ccing engine-devel and libstoragemgmt lists as this stuff is relevant to them too]
- Background:
VDSM provides high level API for node virtualization management. It acts in response to the requests sent by oVirt Engine, which uses VDSM to do all node virtualization related tasks, including but not limited to storage management.
libstoragemgmt aims to provide vendor agnostic API for managing external storage array. It should help system administrators utilizing open source solutions have a way to programmatically manage their storage hardware in a vendor neutral way. It also aims to facilitate management automation, ease of use and take advantage of storage vendor supported features which improve storage performance and space utilization.
Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/
libstoragemgmt (LSM) today supports C and python plugins for talking to external storage array using SMI-S as well as native interfaces (eg: netapp plugin ) Plan is to grow the SMI-S interface as needed over time and add more vendor specific plugins for exploiting features not possible via SMI-S or have better alternatives than using SMI-S. For eg: Many of the copy offload features require to use vendor specific commands, which justifies the need for a vendor specific plugin.
Goals:
2a) Ability to plugin external storage array into oVirt/VDSM
virtualization stack, in a vendor neutral way.
2b) Ability to list features/capabilities and other statistical
info of the array
2c) Ability to utilize the storage array offload capabilities from
oVirt/VDSM.
- Details:
LSM will sit as a new repository engine in VDSM. VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192
Current plan is to have LSM co-exist with VDSM on the virtualization nodes.
Does that mean LSM will be a daemon process separate from VDSM? Also, what about the vendor's plugin - another process on the nodes?
*Note : 'storage' used below is generic. It can be a file/nfs-export for NAS targets and LUN/logical-drive for SAN targets.
VDSM can use LSM and do the following... - Provision storage - Consume storage
3.1) Provisioning Storage using LSM
Typically this will be done by a Storage administrator.
oVirt/VDSM should provide storage admin the - ability to list the different storage arrays along with their types (NAS/SAN), capabilities, free/used space. - ability to provision storage using any of the array capabilities (eg: thin provisioned lun or new NFS export ) - ability to manage the provisioned storage (eg: resize/delete storage)
Once the storage is provisioned by the storage admin, VDSM will have to refresh the host(s) for them to be able to see the newly provisioned storage.
3.1.1) Potential flows:
Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is needed to make LUN available to list of hosts passed by mgmt Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices) Repeat above for all relevant hosts (depending on list passed earlier, mostly relevant when extending an existing VG) Mgmt -> use LUN in normal flows.
3.1.2) How oVirt Engine will know which LSM to use ?
Normally the way this works today is that user can choose the host to use (default today is SPM), however there are a few flows where mgmt will know which host to use:
- extend storage domain (add LUN to existing VG) - Use SPM and make
sure *all* hosts that need access to this SD can see the new LUN 2. attach new LUN to a VM which is pinned to a specific host - use this host 3. attach new LUN to a VM which is not pinned - use a host from the cluster the VM belongs to and make sure all nodes in cluster can see the new LUN
So does this model depend on the work of removing the storage pool?
Flows for which there is no clear candidate (Maybe we can use the SPM host itself which is the default ?)
- create a new disk without attaching it to any VM
So the new floating disk should be exported to all nodes and all VMs?
- create a LUN for a new storage domain
3.2) Consuming storage using LSM
Typically this will be done by a virtualization administrator
oVirt/VDSM should allow virtualization admin to - Create a new storage domain using the storage on the array. - Be able to specify whether VDSM should use the storage offload capability (default) or override it to use its own internal logic.
- VDSM potential changes:
4.1) How to represent a VM disk, 1 LUN = 1 VMdisk or 1 LV = 1 VMdisk ? which bring another question...1 array == 1 storage domain OR 1 LUN/nfs-export on the array == 1 storage domain ?
Pros & Cons of each...
1 array == 1 storage domain - Each new vmdisk (aka volume) will be a new lun/file on the array. - Easier to exploit offload capabilities, as they are available at the LUN/File granularity - Will there be any issues where there will be too many LUNs/Files ... any maxluns limit on linux hosts that we might hit ? -- VDSM has been tested with 1K LUNs and it worked fine - ayal - Storage array limitations on the number of LUNs can be a downside here. - Would it be ok to share the array for hosting another storage domain if need be ? -- Provided the existing domain is not utilising all of the free space -- We can create new LUNs and hand it over to anyone needed ? -- Changes needed in VDSM to work with raw LUNs, today it only has support for consuming LUNs via VG/LV.
1 LUN/nfs-export on the array == 1 storage domain - How to represent a new vmdisk (aka vdsm volume) if its a LUN provisioned using SAN target ? -- Will it be VG/LV as is done today for block domains ? -- If yes, then it will be difficult to exploit offload capabilities, as they are at LUN level, not at LV level. - Each new vmdisk will be a new file on the nfs-export, assuming offload capability is available at the file level, so this should work for NAS targets ? - Can use the storage array for hosting multiple storage domains. -- Provision one more LUN and use it for another storage domain if need be. - VDSM already supports this today, as part of block storage domains for LUNs case.
Note that we will allow user to do either one of the two options above, depending on need.
4.2) Storage domain metadata will also include the features/capabilities of the storage array as reported by LSM. - Capabilities (taken via LSM) will be stored in the domain metadata during storage domain create flow. - Need changes in oVirt engine as well ( see 'oVirt Engine potential changes' section below )
4.3) VDSM to poll LSM for array capabilities on a regular basis ? Per ayal: - If we have a 'storage array' entity in oVirt Engine (see 'oVirt Engine potential changes' section below ) then we can have a 'refresh capabilities' button/verb. - We can periodically query the storage array. - Query LSM before running operations (sounds redundant to me, but if it's cheap enough it could be simplest).
Probably need a combination of 1+2 (query at very low frequency -
1/hour or 1/day + refresh button)
oVirt Engine potential changes - as described by ayal :
- We will either need a new 'storage array' entity in engine to
keep credentials, or, in case of storage array as storage domain, just keep this info as part of the domain at engine level. - Have a 'storage array' entity in oVirt Engine to support 'refresh capabilities' as a button/verb. - When user during storage provisioning, selects a LUN exported from a storage array (via LSM), the oVirt Engine would know from then onwards that this LUN is being served via LSM. It would then be able to query the capabilities of the LUN and show it to the virt admin during storage consumption flow.
- Potential flows:
- Create snapshot flow -- VDSM will check the snapshot offload capability in the
domain metadata -- If available, and override is not configured, it will use LSM to offload LUN/File snapshot
If LSM tries to snapshot a running volume, does that mean all the I/O activity to the volume will be blocked while the snapshot is in progress?
-- If override is configured or capability is not available,
it will use its internal logic to create snapshot (qcow2).
- Copy/Clone vmdisk flow -- VDSM will check the copy offload capability in the domain
metadata -- If available, and override is not configured, it will use LSM to offload LUN/File copy -- If override is configured or capability is not available, it will use its internal logic to create snapshot (eg: dd cmd in case of LUN).
LSM potential changes:
- list features/capabilities of the array. Eg: copy offload, thin
prov. etc. - list containers (aka pools) (present in LSM today) - Ability to list different types of arrays being managed, their capabilities and used/free space - Ability to create/list/delete/resize volumes ( LUN or exports, available in LSM as of today) - Get monitoring info with object (LUN/snapshot/volume) as optional parameter for specific info. eg: container/pool free/used space, raid type etc.
Need to make sure above info is listed in a coherent way across arrays (number of LUNs, raid type used? free/total per container/pool, per LUN?. Also need I/O statistics wherever possible.
On 06/18/2012 09:26 PM, Shu Ming wrote:
On 2012-5-30 17:38, Deepak C Shetty wrote:
Hello All,
I have a draft write-up on the VDSM-libstoragemgmt integration.
I wanted to run this thru' the mailing list(s) to help tune and crystallize it, before putting it on the ovirt wiki. I have run this once thru Ayal and Tony, so have some of their comments incorporated.
I still have few doubts/questions, which I have posted below with lines ending with '?'
Comments / Suggestions are welcome & appreciated.
thanx, deepak
[Ccing engine-devel and libstoragemgmt lists as this stuff is relevant to them too]
- Background:
VDSM provides high level API for node virtualization management. It acts in response to the requests sent by oVirt Engine, which uses VDSM to do all node virtualization related tasks, including but not limited to storage management.
libstoragemgmt aims to provide vendor agnostic API for managing external storage array. It should help system administrators utilizing open source solutions have a way to programmatically manage their storage hardware in a vendor neutral way. It also aims to facilitate management automation, ease of use and take advantage of storage vendor supported features which improve storage performance and space utilization.
Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/
libstoragemgmt (LSM) today supports C and python plugins for talking to external storage array using SMI-S as well as native interfaces (eg: netapp plugin ) Plan is to grow the SMI-S interface as needed over time and add more vendor specific plugins for exploiting features not possible via SMI-S or have better alternatives than using SMI-S. For eg: Many of the copy offload features require to use vendor specific commands, which justifies the need for a vendor specific plugin.
Goals:
2a) Ability to plugin external storage array into oVirt/VDSM
virtualization stack, in a vendor neutral way.
2b) Ability to list features/capabilities and other statistical
info of the array
2c) Ability to utilize the storage array offload capabilities
from oVirt/VDSM.
- Details:
LSM will sit as a new repository engine in VDSM. VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192
Current plan is to have LSM co-exist with VDSM on the virtualization nodes.
Does that mean LSM will be a different daemon process than VDSM? Also, how about the vendor's plugin, another process in the nodes?
Please see the LSM homepage on sourceforge.net for how LSM works. It already has lsmd (a daemon) which invokes the appropriate plugin based on the URI prefix. Vendor plugins are supported in LSM as .py modules, invoked based on the vendor-specific URI prefix. See the NetApp vendor plugin (.py) in the LSM source.
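For illustration, connecting through different plugins only differs in the URI passed to the client; the URIs below are examples, and credentials are passed separately rather than embedded in the URI.

import lsm

# The URI scheme selects which plugin lsmd drives:
sim_client    = lsm.Client('sim://')                                      # built-in simulator
netapp_client = lsm.Client('ontap://root@filer.example.com', 'secret')    # NetApp native plugin
smis_client   = lsm.Client('smispy://admin@array.example.com', 'secret')  # generic SMI-S plugin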
*Note : 'storage' used below is generic. It can be a file/nfs-export for NAS targets and LUN/logical-drive for SAN targets.
VDSM can use LSM and do the following... - Provision storage - Consume storage
3.1) Provisioning Storage using LSM
Typically this will be done by a Storage administrator.
oVirt/VDSM should provide storage admin the - ability to list the different storage arrays along with their types (NAS/SAN), capabilities, free/used space. - ability to provision storage using any of the array capabilities (eg: thin provisioned lun or new NFS export ) - ability to manage the provisioned storage (eg: resize/delete storage)
Once the storage is provisioned by the storage admin, VDSM will have to refresh the host(s) for them to be able to see the newly provisioned storage.
3.1.1) Potential flows:
Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is needed to make LUN available to list of hosts passed by mgmt Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices) Repeat above for all relevant hosts (depending on list passed earlier, mostly relevant when extending an existing VG) Mgmt -> use LUN in normal flows.
3.1.2) How oVirt Engine will know which LSM to use ?
Normally the way this works today is that user can choose the host to use (default today is SPM), however there are a few flows where mgmt will know which host to use:
- extend storage domain (add LUN to existing VG) - Use SPM and make
sure *all* hosts that need access to this SD can see the new LUN 2. attach new LUN to a VM which is pinned to a specific host - use this host 3. attach new LUN to a VM which is not pinned - use a host from the cluster the VM belongs to and make sure all nodes in cluster can see the new LUN
So this model depend on the work of removing storage pool?
I am not sure and would like the experts to comment here. I am not yet clear on how things will work once the SPM is gone; here it is assumed the SPM is present.
Flows for which there is no clear candidate (Maybe we can use the SPM host itself which is the default ?)
- create a new disk without attaching it to any VM
So the new floating disk should be exported to all nodes and all VMs?
- create a LUN for a new storage domain
3.2) Consuming storage using LSM
Typically this will be done by a virtualization administrator
oVirt/VDSM should allow virtualization admin to - Create a new storage domain using the storage on the array. - Be able to specify whether VDSM should use the storage offload capability (default) or override it to use its own internal logic.
- VDSM potential changes:
4.1) How to represent a VM disk, 1 LUN = 1 VMdisk or 1 LV = 1 VMdisk ? which bring another question...1 array == 1 storage domain OR 1 LUN/nfs-export on the array == 1 storage domain ?
Pros & Cons of each...
1 array == 1 storage domain - Each new vmdisk (aka volume) will be a new lun/file on the array. - Easier to exploit offload capabilities, as they are available at the LUN/File granularity - Will there be any issues where there will be too many LUNs/Files ... any maxluns limit on linux hosts that we might hit ? -- VDSM has been tested with 1K LUNs and it worked fine - ayal - Storage array limitations on the number of LUNs can be a downside here. - Would it be ok to share the array for hosting another storage domain if need be ? -- Provided the existing domain is not utilising all of the free space -- We can create new LUNs and hand it over to anyone needed ? -- Changes needed in VDSM to work with raw LUNs, today it only has support for consuming LUNs via VG/LV.
1 LUN/nfs-export on the array == 1 storage domain - How to represent a new vmdisk (aka vdsm volume) if its a LUN provisioned using SAN target ? -- Will it be VG/LV as is done today for block domains ? -- If yes, then it will be difficult to exploit offload capabilities, as they are at LUN level, not at LV level. - Each new vmdisk will be a new file on the nfs-export, assuming offload capability is available at the file level, so this should work for NAS targets ? - Can use the storage array for hosting multiple storage domains. -- Provision one more LUN and use it for another storage domain if need be. - VDSM already supports this today, as part of block storage domains for LUNs case.
Note that we will allow user to do either one of the two options above, depending on need.
4.2) Storage domain metadata will also include the features/capabilities of the storage array as reported by LSM. - Capabilities (taken via LSM) will be stored in the domain metadata during storage domain create flow. - Need changes in oVirt engine as well ( see 'oVirt Engine potential changes' section below )
4.3) VDSM to poll LSM for array capabilities on a regular basis ? Per ayal: - If we have a 'storage array' entity in oVirt Engine (see 'oVirt Engine potential changes' section below ) then we can have a 'refresh capabilities' button/verb. - We can periodically query the storage array. - Query LSM before running operations (sounds redundant to me, but if it's cheap enough it could be simplest).
Probably need a combination of 1+2 (query at very low frequency -
1/hour or 1/day + refresh button)
oVirt Engine potential changes - as described by ayal :
- We will either need a new 'storage array' entity in engine to
keep credentials, or, in case of storage array as storage domain, just keep this info as part of the domain at engine level. - Have a 'storage array' entity in oVirt Engine to support 'refresh capabilities' as a button/verb. - When user during storage provisioning, selects a LUN exported from a storage array (via LSM), the oVirt Engine would know from then onwards that this LUN is being served via LSM. It would then be able to query the capabilities of the LUN and show it to the virt admin during storage consumption flow.
- Potential flows:
- Create snapshot flow -- VDSM will check the snapshot offload capability in the
domain metadata -- If available, and override is not configured, it will use LSM to offload LUN/File snapshot
If a LSM try to snapshot a running volume, does that mean all the IO activity to the volume will be blocked when the snapshot is undergoing?
If VDSM offloads the snapshot to the array (via LSM), the array will take care of the snapshotting. Typically, I believe, it will quiesce the I/O temporarily (for a few ms?), take a point-in-time copy of the LUN/file, and resume the I/O. I think this happens transparently to vdsm/the host.
-- If override is configured or capability is not available,
it will use its internal logic to create snapshot (qcow2).
- Copy/Clone vmdisk flow -- VDSM will check the copy offload capability in the domain
metadata -- If available, and override is not configured, it will use LSM to offload LUN/File copy -- If override is configured or capability is not available, it will use its internal logic to create snapshot (eg: dd cmd in case of LUN).
LSM potential changes:
- list features/capabilities of the array. Eg: copy offload, thin
prov. etc. - list containers (aka pools) (present in LSM today) - Ability to list different types of arrays being managed, their capabilities and used/free space - Ability to create/list/delete/resize volumes ( LUN or exports, available in LSM as of today) - Get monitoring info with object (LUN/snapshot/volume) as optional parameter for specific info. eg: container/pool free/used space, raid type etc.
Need to make sure above info is listed in a coherent way across arrays (number of LUNs, raid type used? free/total per container/pool, per LUN?. Also need I/O statistics wherever possible.
First of all I'd like to suggest not using the LSM acronym as it can also mean live-storage-migration and maybe other things.
Secondly I would like to avoid talking about what needs to be changed in VDSM before we figure out what exactly we want to accomplish.
Also, there is no mention of credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that, and how? How does the user set this up?
In the array-as-domain case: how are the LUNs being mapped to initiators? What about setting discovery credentials? In the array setup case: how will the hosts be represented with regard to credentials? How will the different schemes and capabilities with regard to authentication methods be expressed?
Rest of the comments inline
----- Original Message -----
From: "Deepak C Shetty" deepakcs@linux.vnet.ibm.com
To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org
Cc: libstoragemgmt-devel@lists.sourceforge.net, engine-devel@ovirt.org
Sent: Wednesday, May 30, 2012 5:38:46 AM
Subject: [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration
Hello All,
I have a draft write-up on the VDSM-libstoragemgmt integration.
I wanted to run this thru' the mailing list(s) to help tune and crystallize it, before putting it on the ovirt wiki. I have run this once thru Ayal and Tony, so have some of their comments incorporated.
I still have few doubts/questions, which I have posted below with lines ending with '?'
Comments / Suggestions are welcome & appreciated.
thanx, deepak
[Ccing engine-devel and libstoragemgmt lists as this stuff is relevant to them too]
- Background:
VDSM provides high level API for node virtualization management. It acts in response to the requests sent by oVirt Engine, which uses VDSM to do all node virtualization related tasks, including but not limited to storage management.
libstoragemgmt aims to provide vendor agnostic API for managing external storage array. It should help system administrators utilizing open source solutions have a way to programmatically manage their storage hardware in a vendor neutral way. It also aims to facilitate management automation, ease of use and take advantage of storage vendor supported features which improve storage performance and space utilization.
Home Page: http://sourceforge.net/apps/trac/libstoragemgmt/
libstoragemgmt (LSM) today supports C and python plugins for talking to external storage array using SMI-S as well as native interfaces (eg: netapp plugin ) Plan is to grow the SMI-S interface as needed over time and add more vendor specific plugins for exploiting features not possible via SMI-S or have better alternatives than using SMI-S. For eg: Many of the copy offload features require to use vendor specific commands, which justifies the need for a vendor specific plugin.
Goals:
2a) Ability to plugin external storage array into oVirt/VDSM
virtualization stack, in a vendor neutral way.
2b) Ability to list features/capabilities and other statistical
info of the array
2c) Ability to utilize the storage array offload capabilities from
oVirt/VDSM.
- Details:
LSM will sit as a new repository engine in VDSM. VDSM Repository Engine WIP @ http://gerrit.ovirt.org/#change,192
Current plan is to have LSM co-exist with VDSM on the virtualization nodes.
*Note : 'storage' used below is generic. It can be a file/nfs-export for NAS targets and LUN/logical-drive for SAN targets.
VDSM can use LSM and do the following... - Provision storage - Consume storage
3.1) Provisioning Storage using LSM
Typically this will be done by a Storage administrator.
oVirt/VDSM should provide storage admin the - ability to list the different storage arrays along with their types (NAS/SAN), capabilities, free/used space. - ability to provision storage using any of the array capabilities (eg: thin provisioned lun or new NFS export ) - ability to manage the provisioned storage (eg: resize/delete storage)
Once the storage is provisioned by the storage admin, VDSM will have to refresh the host(s) for them to be able to see the newly provisioned storage.
[SM] What about the clustered case? The management or the mailbox will have to be involved. Pros/cons? Is there a capability for the storage to announce a change in topology? Can libstoragemgmt consume it? Does it even make sense?
3.1.1) Potential flows:
Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is needed to make LUN available to list of hosts passed by mgmt Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices) Repeat above for all relevant hosts (depending on list passed earlier, mostly relevant when extending an existing VG) Mgmt -> use LUN in normal flows.
[SM] This is all a bit vague in my opinion; concrete cases might prove more beneficial.
3.1.2) How oVirt Engine will know which LSM to use ?
Normally the way this works today is that user can choose the host to use (default today is SPM), however there are a few flows where mgmt will know which host to use:
- extend storage domain (add LUN to existing VG) - Use SPM and make
sure *all* hosts that need access to this SD can see the new LUN 2. attach new LUN to a VM which is pinned to a specific host - use this host 3. attach new LUN to a VM which is not pinned - use a host from the cluster the VM belongs to and make sure all nodes in cluster can see the new LUN
Flows for which there is no clear candidate (Maybe we can use the SPM host itself which is the default ?)
- create a new disk without attaching it to any VM
- create a LUN for a new storage domain
[SM] Maybe the engine should do the work? What about permissions? Will all hosts have the credentials to mess with the storage? Will they be passed on a per-call basis to prevent other users from having access to the storage?
3.2) Consuming storage using LSM
Typically this will be done by a virtualization administrator
oVirt/VDSM should allow virtualization admin to - Create a new storage domain using the storage on the array. - Be able to specify whether VDSM should use the storage offload capability (default) or override it to use its own internal logic.
- VDSM potential changes:
4.1) How to represent a VM disk, 1 LUN = 1 VMdisk or 1 LV = 1 VMdisk
I find this hard to understand. Maybe a different notation? In any case, there is an abstracted case, i.e. a storage domain, and a direct case, i.e. the user provisions LUNs to be used by VDSM and others as well. They will both have different ways of representing the same underlying objects. Also, I think that credentials might be tricky to represent, as different arrays use different schemes for allocating users/hosts to LUNs/targets.
? which bring another question...1 array == 1 storage domain OR 1 LUN/nfs-export on the array == 1 storage domain ?
Pros & Cons of each...
1 array == 1 storage domain - Each new vmdisk (aka volume) will be a new lun/file on the array. - Easier to exploit offload capabilities, as they are available at the LUN/File granularity - Will there be any issues where there will be too many LUNs/Files ... any maxluns limit on linux hosts that we might hit ? -- VDSM has been tested with 1K LUNs and it worked fine - ayal - Storage array limitations on the number of LUNs can be a downside here. - Would it be ok to share the array for hosting another storage domain if need be ? -- Provided the existing domain is not utilising all of the free space -- We can create new LUNs and hand it over to anyone needed ? -- Changes needed in VDSM to work with raw LUNs, today it only has support for consuming LUNs via VG/LV.
1 LUN/nfs-export on the array == 1 storage domain - How to represent a new vmdisk (aka vdsm volume) if its a LUN provisioned using SAN target ? -- Will it be VG/LV as is done today for block domains ? -- If yes, then it will be difficult to exploit offload capabilities, as they are at LUN level, not at LV level. - Each new vmdisk will be a new file on the nfs-export, assuming offload capability is available at the file level, so this should work for NAS targets ? - Can use the storage array for hosting multiple storage domains. -- Provision one more LUN and use it for another storage domain if need be. - VDSM already supports this today, as part of block storage domains for LUNs case.
Note that we will allow user to do either one of the two options above, depending on need.
4.2) Storage domain metadata will also include the features/capabilities of the storage array as reported by LSM. - Capabilities (taken via LSM) will be stored in the domain metadata during storage domain create flow. - Need changes in oVirt engine as well ( see 'oVirt Engine potential changes' section below )
4.3) VDSM to poll LSM for array capabilities on a regular basis ? Per ayal: - If we have a 'storage array' entity in oVirt Engine (see 'oVirt Engine potential changes' section below ) then we can have a 'refresh capabilities' button/verb. - We can periodically query the storage array. - Query LSM before running operations (sounds redundant to me, but if it's cheap enough it could be simplest).
Probably need a combination of 1+2 (query at very low frequency -
1/hour or 1/day + refresh button)
oVirt Engine potential changes - as described by ayal :
- We will either need a new 'storage array' entity in engine to
keep credentials, or, in case of storage array as storage domain, just keep this info as part of the domain at engine level. - Have a 'storage array' entity in oVirt Engine to support 'refresh capabilities' as a button/verb. - When user during storage provisioning, selects a LUN exported from a storage array (via LSM), the oVirt Engine would know from then onwards that this LUN is being served via LSM. It would then be able to query the capabilities of the LUN and show it to the virt admin during storage consumption flow.
- Potential flows:
- Create snapshot flow -- VDSM will check the snapshot offload capability in the
domain metadata -- If available, and override is not configured, it will use LSM to offload LUN/File snapshot -- If override is configured or capability is not available, it will use its internal logic to create snapshot (qcow2).
- Copy/Clone vmdisk flow -- VDSM will check the copy offload capability in the domain
metadata -- If available, and override is not configured, it will use LSM to offload LUN/File copy -- If override is configured or capability is not available, it will use its internal logic to create snapshot (eg: dd cmd in case of LUN).
LSM potential changes:
- list features/capabilities of the array. Eg: copy offload,
thin
prov. etc. - list containers (aka pools) (present in LSM today) - Ability to list different types of arrays being managed, their capabilities and used/free space - Ability to create/list/delete/resize volumes ( LUN or exports, available in LSM as of today) - Get monitoring info with object (LUN/snapshot/volume) as optional parameter for specific info. eg: container/pool free/used space, raid type etc.
Need to make sure above info is listed in a coherent way across arrays (number of LUNs, raid type used? free/total per container/pool, per LUN?. Also need I/O statistics wherever possible.
On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:
Also, there is no mention on credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that and how? How does the user set this up?
It seems to me more natural to have the oVirt-engine use libstoragemgmt directly to allocate and export a volume on the storage array, and then pass this info to the vdsm on the node creating the vm. This answers Saggi's question about creds -- vdsm never needs array modification creds, it only gets handed the params needed to connect and use the new block device (ip, iqn, chap, lun).
Is this usage model made difficult or impossible by the current software architecture?
Thanks -- Regards -- Andy
On 06/23/2012 02:31 AM, Andy Grover wrote:
On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:
Also, there is no mention on credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that and how? How does the user set this up?
It seems to me more natural to have the oVirt-engine use libstoragemgmt directly to allocate and export a volume on the storage array, and then pass this info to the vdsm on the node creating the vm. This answers Saggi's question about creds -- vdsm never needs array modification creds, it only gets handed the params needed to connect and use the new block device (ip, iqn, chap, lun).
Is this usage model made difficult or impossible by the current software architecture?
what about live snapshots?
On 06/22/2012 04:46 PM, Itamar Heim wrote:
On 06/23/2012 02:31 AM, Andy Grover wrote:
On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:
Also, there is no mention on credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that and how? How does the user set this up?
It seems to me more natural to have the oVirt-engine use libstoragemgmt directly to allocate and export a volume on the storage array, and then pass this info to the vdsm on the node creating the vm. This answers Saggi's question about creds -- vdsm never needs array modification creds, it only gets handed the params needed to connect and use the new block device (ip, iqn, chap, lun).
Is this usage model made difficult or impossible by the current software architecture?
what about live snapshots?
I'm not a virt guy, so extreme handwaving:
vm X uses luns 1 & 2
engine -> vdsm "pause vm X"
engine -> libstoragemgmt "snapshot luns 1, 2 to luns 3, 4"
engine -> vdsm "snapshot running state of X to Y"
engine -> vdsm "unpause vm X"
engine -> vdsm "change Y to use luns 3, 4"
?
-- Andy
On 06/23/2012 03:09 AM, Andy Grover wrote:
On 06/22/2012 04:46 PM, Itamar Heim wrote:
On 06/23/2012 02:31 AM, Andy Grover wrote:
On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:
Also, there is no mention on credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that and how? How does the user set this up?
It seems to me more natural to have the oVirt-engine use libstoragemgmt directly to allocate and export a volume on the storage array, and then pass this info to the vdsm on the node creating the vm. This answers Saggi's question about creds -- vdsm never needs array modification creds, it only gets handed the params needed to connect and use the new block device (ip, iqn, chap, lun).
Is this usage model made difficult or impossible by the current software architecture?
what about live snapshots?
I'm not a virt guy, so extreme handwaving:
vm X uses luns 1 & 2
engine -> vdsm "pause vm X"
that's pausing the VM. live snapshot isn't supposed to do so.
engine -> libstoragemgmt "snapshot luns 1, 2 to luns 3, 4"
engine -> vdsm "snapshot running state of X to Y"
engine -> vdsm "unpause vm X"
If the engine had any failure before this step, the VM would remain paused; i.e., we compromised the VM to take a live snapshot.
engine -> vdsm "change Y to use luns 3, 4"
?
-- Andy
On 2012-6-23 20:40, Itamar Heim wrote:
On 06/23/2012 03:09 AM, Andy Grover wrote:
On 06/22/2012 04:46 PM, Itamar Heim wrote:
On 06/23/2012 02:31 AM, Andy Grover wrote:
On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:
Also, there is no mention on credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that and how? How does the user set this up?
It seems to me more natural to have the oVirt-engine use libstoragemgmt directly to allocate and export a volume on the storage array, and then pass this info to the vdsm on the node creating the vm. This answers Saggi's question about creds -- vdsm never needs array modification creds, it only gets handed the params needed to connect and use the new block device (ip, iqn, chap, lun).
Is this usage model made difficult or impossible by the current software architecture?
what about live snapshots?
I'm not a virt guy, so extreme handwaving:
vm X uses luns 1 & 2
engine -> vdsm "pause vm X"
that's pausing the VM. live snapshot isn't supposed to do so.
Though we don't expect to perform a pausing operation on the VM while a live snapshot is in progress, the VM should be blocked from accessing the specific LUNs for a while. The blocking time should be very short, to avoid a storage I/O timeout in the VM.
engine -> libstoragemgmt "snapshot luns 1, 2 to luns 3, 4"
engine -> vdsm "snapshot running state of X to Y"
engine -> vdsm "unpause vm X"
if engine had any failure before this step, the VM will remain paused. i.e., we compromised the VM to take a live snapshot.
engine -> vdsm "change Y to use luns 3, 4"
?
-- Andy
Engine-devel mailing list Engine-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
On 06/24/2012 07:28 AM, Shu Ming wrote:
Though we don't expect to do a pausing operation on the VM while a live snapshot is in progress, the VM should be blocked from accessing the specific LUNs for a while. The blocking time should be very short, to avoid storage I/O timeouts in the VM.
OK my mistake, we don't pause the VM during live snapshot, we block on access to the luns while snapshotting. Does this keep live snapshots working and mean ovirt-engine can use libsm to config the storage array instead of vdsm?
Because that was really my main question, should we be talking about engine-libstoragemgmt integration rather than vdsm-libstoragemgmt integration.
Thanks -- Regards -- Andy
----- Original Message -----
From: "Andy Grover" agrover@redhat.com To: "Shu Ming" shuming@linux.vnet.ibm.com Cc: libstoragemgmt-devel@lists.sourceforge.net, engine-devel@ovirt.org, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Sunday, June 24, 2012 10:05:45 PM Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration
Because that was really my main question, should we be talking about engine-libstoragemgmt integration rather than vdsm-libstoragemgmt integration.
for snapshotting wouldn't we want VDSM to handle the coordination of the various atomic functions?
On 2012-6-25 10:10, Andrew Cathrow wrote:
for snapshotting wouldn't we want VDSM to handle the coordination of the various atomic functions?
I think VDSM-libstoragemgmt will let the storage array itself make the snapshot and handle the coordination of the various atomic functions. VDSM should be blocked on subsequent access to the specific LUNs which are under snapshotting.
On 06/25/2012 07:47 AM, Shu Ming wrote:
I think VDSM-libstoragemgmt will let the storage array itself make the snapshot and handle the coordination of the various atomic functions. VDSM should be blocked on subsequent access to the specific LUNs which are under snapshotting.
I kind of agree. If the snapshot is being done at the array level, then the array takes care of quiescing the I/O, taking the snapshot and allowing the I/O again, so why does VDSM have to worry about anything here? It should all happen transparently for VDSM, shouldn't it?
On 2012-6-25 22:14, Deepak C Shetty wrote:
I kind of agree. If the snapshot is being done at the array level, then the array takes care of quiescing the I/O, taking the snapshot and allowing the I/O again, so why does VDSM have to worry about anything here? It should all happen transparently for VDSM, shouldn't it?
The only issue is that the quiescing may time out the VDSM I/O functions if it takes a non-trivial amount of time. Not sure if VDSM can handle all of the timeouts gracefully.
The array can take a snapshot in flight, but the data may be in an inconsistent state. Only the end application/user of the storage knows when a point in time is consistent. Typically the application(s) are quiesced, the OS buffers are flushed (outstanding tagged I/O is allowed to complete) and then the storage is told to make a point-in-time copy. This is the only way to be sure that what you have on disk is coherent.
Transactional databases (two-phase commit) and logging file systems (metadata) are specifically written to handle these inconsistencies, but many applications are not.
Regards, Tony
Thanks for clarifying, Tony. So that means we need to do whatever is required from VDSM to quiesce the I/O, and then VDSM should instruct the array to take the snapshot.
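A minimal sketch of that sequence, assuming hypothetical quiesce/resume verbs on the VDSM side and an array-side snapshot call reached via libstoragemgmt (none of these names are existing APIs):

    # Sketch only: quiesce first, let the array take the point-in-time copy, then resume.
    def array_level_snapshot(vdsm, lsm, vm_id, volume_id, snap_name):
        vdsm.quiesce_guest_io(vm_id)          # ultimately needs guest cooperation (guest agent)
        try:
            snap = lsm.volume_snapshot(volume_id, snap_name)   # array does the copy
        finally:
            vdsm.resume_guest_io(vm_id)       # always unblock I/O, even if the snapshot failed
        return snap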
On 06/25/2012 10:14 AM, Deepak C Shetty wrote:
I kind of agree. If the snapshot is being done at the array level, then the array takes care of quiescing the I/O, taking the snapshot and allowing the I/O again, so why does VDSM have to worry about anything here? It should all happen transparently for VDSM, shouldn't it?
I may be missing something, but AFAIU you need to ask the guest to perform the quiesce, and I'm sure the storage array can't do that.
On 06/25/2012 11:13 PM, Itamar Heim wrote:
I may be missing something, but AFAIU you need to ask the guest to perform the quiesce, and I'm sure the storage array can't do that.
No, you are not, I missed it. After Tony's and Shu Ming's replies, I realised that the guest has to quiesce the I/O before VDSM can ask the storage array to take the snapshot.
* Andrew Cathrow acathrow@redhat.com [2012-06-24 21:11]:
for snapshotting wouldn't we want VDSM to handle the coordination of the various atomic functions?
Absolutely. Requiring every management application (engine, etc) to integrate with libstoragemanagement is a win here. We want to simplify working with KVM, storage, etc not require every mgmt application to know deep details about how to create a live VM snapshot.
On 06/25/2012 08:28 PM, Ryan Harper wrote:
Absolutely. Requiring every management application (engine, etc) to integrate with libstoragemanagement is a win here. We want to simplify working with KVM, storage, etc not require every mgmt application to know deep details about how to create a live VM snapshot.
Sorry, but it's not clear to me. Are you saying engine-libstoragemgmt integration is a win here? VDSM is the common factor here... so integrating libstoragemgmt with VDSM helps anybody talking with VDSM in the future, AFAIU.
* Deepak C Shetty deepakcs@linux.vnet.ibm.com [2012-06-25 10:14]:
Sorry, but it's not clear to me. Are you saying engine-libstoragemgmt integration is a win here?
Sorry if I wasn't clear. To answer your question: No.
The mgmt app should *NOT* have to learn all of the ins and outs of the end-point storage and the management of it.
VDSM is the common factor here... so integrating libstoragemgmt with VDSM helps anybody talking with VDSM in the future, AFAIU.
Yes. 100% agree.
Thanks, this has helped me understand vdsm's role much better.
-- Andy
On 06/19/2012 01:45 AM, Saggi Mizrahi wrote:
First of all I'd like to suggest not using the LSM acronym as it can also mean live-storage-migration and maybe other things.
Sure, what do you suggest? libSM?
Secondly I would like to avoid talking about what needs to be changed in VDSM before we figure out what exactly we want to accomplish.
Also, there is no mention on credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that and how? How does the user set this up?
Per my original discussion on this with Ayal, this is what he had suggested... "In addition, I'm assuming we will either need a new 'storage array' entity in engine to keep credentials, or, in case of storage array as storage domain, just keep this info as part of the domain at engine level."
Either we can have the libstoragemgmt creds stored in the engine as part of engine-setup, or have the user input them as part of storage provisioning and click a "remember creds" button, so that the engine saves them and passes them to VDSM as needed? Either way, the creds should come from the user/admin; there is no other way, correct?
In the array-as-domain case, how are the LUNs being mapped to initiators? What about setting discovery credentials? In the array setup case, how will the hosts be represented with regard to credentials? How will the different schemes and capabilities with regard to authentication methods be expressed?
I'm not clear on what the concern is here. Can you please provide more clarity on the problem? Maybe providing some examples will help.
Rest of the comments inline
----- Original Message -----
From: "Deepak C Shetty"deepakcs@linux.vnet.ibm.com To: "VDSM Project Development"vdsm-devel@lists.fedorahosted.org Cc: libstoragemgmt-devel@lists.sourceforge.net, engine-devel@ovirt.org Sent: Wednesday, May 30, 2012 5:38:46 AM Subject: [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration
Once the storage is provisioned by the storage admin, VDSM will have to refresh the host(s) for them to be able to see the newly provisioned storage.
[SM] What about the clustered case? The management or the mailbox will have to be involved. Pros/cons? Is there a capability for the storage to announce a change in topology? Can libstoragemgmt consume it? Does it even make sense?
A change in storage topology can only happen via the storage admin provisioning new LUNs, so why not have a 'refresh' verb on the 'storage array' entity, which causes VDSM to refresh the hosts via getDeviceList as described in 3.1.1 below? The refresh can be invoked manually by the admin, or set up to happen periodically.
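As a rough illustration of that idea (the refresh verb and the host iteration are made up; getDeviceList is the existing VDSM verb mentioned above):

    # Hypothetical 'refresh' verb: re-scan the relevant hosts after the storage
    # admin provisions new LUNs, either on demand or from a periodic timer.
    def refresh_storage_array(hosts):
        devices = {}
        for host in hosts:
            devices[host.name] = host.getDeviceList()   # rescans the host, returns visible devices
        return devices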
3.1.1) Potential flows:
Mgmt -> vdsm -> lsm: create LUN + LUN Mapping / Zoning / whatever is needed to make LUN available to list of hosts passed by mgmt Mgmt -> vdsm: getDeviceList (refreshes host and gets list of devices) Repeat above for all relevant hosts (depending on list passed earlier, mostly relevant when extending an existing VG) Mgmt -> use LUN in normal flows.
[SM] This is all a bit vague in my opinion, concrete cases might prove more beneficial.
Can you provide your point of view here ?
3.1.2) How oVirt Engine will know which LSM to use ?
Normally the way this works today is that user can choose the host to use (default today is SPM), however there are a few flows where mgmt will know which host to use:
1. extend storage domain (add LUN to existing VG) - Use SPM and make sure *all* hosts that need access to this SD can see the new LUN
2. attach new LUN to a VM which is pinned to a specific host - use this host
3. attach new LUN to a VM which is not pinned - use a host from the cluster the VM belongs to and make sure all nodes in cluster can see the new LUN
Flows for which there is no clear candidate (Maybe we can use the SPM host itself which is the default ?)
- create a new disk without attaching it to any VM
- create a LUN for a new storage domain
[SM] Maybe the engine should do the work? What about permission? Will all hosts have the credentials to mess with the storage? Will they be passed on a per call basis to prevent other users from having access to the storage?
VDSM will get the creds via the engine or from the domain metadata, whichever is chosen as the right way. Once it has the creds, what permission issues do you see? VDSM just uses the creds to talk with libstoragemgmt.
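To illustrate the 'engine holds the creds, VDSM only uses them per call' idea, a hypothetical sketch; the verb is made up, and the uri/password pair is only meant to mirror libstoragemgmt's connection model (plugin URI plus password):

    # Hypothetical per-call credential passing: engine -> VDSM -> libstoragemgmt.
    def create_lun_on_array(lsm_connect, array_creds, name, size_bytes):
        # array_creds (e.g. {'uri': 'ontap://admin@filer', 'password': '...'}) are
        # supplied by the engine for this call only; VDSM does not persist them.
        conn = lsm_connect(array_creds['uri'], array_creds['password'])
        try:
            return conn.volume_create(name, size_bytes)   # made-up helper name
        finally:
            conn.close()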
3.2) Consuming storage using LSM
Typically this will be done by a virtualization administrator
oVirt/VDSM should allow the virtualization admin to
- Create a new storage domain using the storage on the array.
- Be able to specify whether VDSM should use the storage offload capability (default) or override it to use its own internal logic.
4) VDSM potential changes:
4.1) How to represent a VM disk, 1 LUN = 1 VMdisk or 1 LV = 1 VMdisk
I find this hard to understand. Maybe a different notation?
Like what ?
In any case there is an abstracted case, i.e. a storage domain, and there is a direct case, i.e. the user provisions LUNs to be used by VDSM and others as well. They will both have different ways of representing the same underlying objects.
Do you see the storage repository engine fitting the bill here? Does implementing a storage repo engine for each of the above cases make sense here?
Also, I think that credentials might be tricky to represent, as different arrays use different schemes to allocate users/hosts to LUNs/targets.
Maybe Tony (from libstoragemgmt) can provide more insight here on how the creds can be passed to cover different scenarios. I am not very aware of the different types of schemes possible.
Which brings up another question... 1 array == 1 storage domain OR 1 LUN/nfs-export on the array == 1 storage domain?
Pros& Cons of each...
1 array == 1 storage domain
- Each new vmdisk (aka volume) will be a new LUN/file on the array.
- Easier to exploit offload capabilities, as they are available at the LUN/file granularity.
- Will there be any issues where there will be too many LUNs/files... any maxluns limit on Linux hosts that we might hit?
-- VDSM has been tested with 1K LUNs and it worked fine - ayal
- Storage array limitations on the number of LUNs can be a downside here.
- Would it be OK to share the array for hosting another storage domain if need be?
-- Provided the existing domain is not utilising all of the free space
-- We can create new LUNs and hand them over to whoever needs them?
-- Changes needed in VDSM to work with raw LUNs; today it only has support for consuming LUNs via VG/LV.
1 LUN/nfs-export on the array == 1 storage domain
- How to represent a new vmdisk (aka vdsm volume) if it's a LUN provisioned using a SAN target?
-- Will it be VG/LV as is done today for block domains?
-- If yes, then it will be difficult to exploit offload capabilities, as they are at the LUN level, not at the LV level.
- Each new vmdisk will be a new file on the nfs-export, assuming offload capability is available at the file level, so this should work for NAS targets?
- Can use the storage array for hosting multiple storage domains.
-- Provision one more LUN and use it for another storage domain if need be.
- VDSM already supports this today, as part of block storage domains for the LUNs case.
Note that we will allow user to do either one of the two options above, depending on need.
4.2) Storage domain metadata will also include the features/capabilities of the storage array as reported by LSM.
- Capabilities (taken via LSM) will be stored in the domain metadata during the storage domain create flow.
- Need changes in oVirt engine as well (see 'oVirt Engine potential changes' section below).
4.3) VDSM to poll LSM for array capabilities on a regular basis? Per ayal:
1. If we have a 'storage array' entity in oVirt Engine (see 'oVirt Engine potential changes' section below) then we can have a 'refresh capabilities' button/verb.
2. We can periodically query the storage array.
3. Query LSM before running operations (sounds redundant to me, but if it's cheap enough it could be simplest).

Probably need a combination of 1+2 (query at very low frequency - 1/hour or 1/day + refresh button).
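A small sketch of the 1+2 combination (low-frequency poll plus an on-demand refresh); the cache layout and names are illustrative only:

    import time

    class CapabilityCache:
        """Caches array capabilities; refreshed at a low frequency or via an explicit refresh verb."""
        def __init__(self, query_lsm_capabilities, max_age=3600):
            self._query = query_lsm_capabilities   # callable that asks LSM for the capabilities
            self._max_age = max_age                # seconds; 3600 matches the 1/hour suggestion
            self._caps = None
            self._stamp = 0

        def get(self):
            if self._caps is None or time.time() - self._stamp > self._max_age:
                self.refresh()
            return self._caps

        def refresh(self):                         # also wired to the 'refresh capabilities' button
            self._caps = self._query()
            self._stamp = time.time()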
oVirt Engine potential changes - as described by ayal :
- We will either need a new 'storage array' entity in engine to keep credentials, or, in case of storage array as storage domain, just keep this info as part of the domain at engine level.
- Have a 'storage array' entity in oVirt Engine to support 'refresh capabilities' as a button/verb.
- When the user, during storage provisioning, selects a LUN exported from a storage array (via LSM), the oVirt Engine would know from then onwards that this LUN is being served via LSM. It would then be able to query the capabilities of the LUN and show them to the virt admin during the storage consumption flow.
- Potential flows:
- Create snapshot flow
-- VDSM will check the snapshot offload capability in the domain metadata
-- If available, and override is not configured, it will use LSM to offload the LUN/file snapshot
-- If override is configured or the capability is not available, it will use its internal logic to create the snapshot (qcow2).

- Copy/Clone vmdisk flow
-- VDSM will check the copy offload capability in the domain metadata
-- If available, and override is not configured, it will use LSM to offload the LUN/file copy
-- If override is configured or the capability is not available, it will use its internal logic to create the copy (eg: dd cmd in case of LUN).
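Both flows share the same pattern: look up the capability in the domain metadata and either offload to LSM or fall back to VDSM's internal logic. A minimal sketch, with all names hypothetical:

    # Hypothetical dispatch: offload to the array via LSM when the domain metadata
    # advertises the capability and no override is configured, else fall back.
    def snapshot_vmdisk(domain, volume, lsm, override=False):
        if domain.metadata.get('snapshot_offload') and not override:
            return lsm.volume_snapshot(volume)          # array-side snapshot offload
        return domain.create_qcow2_snapshot(volume)     # internal qcow2 logic

    def copy_vmdisk(domain, src, dst, lsm, override=False):
        if domain.metadata.get('copy_offload') and not override:
            return lsm.volume_copy(src, dst)            # array-side copy offload
        return domain.dd_copy(src, dst)                 # internal dd-based copy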
LSM potential changes:
- List features/capabilities of the array. Eg: copy offload, thin provisioning etc.
- List containers (aka pools) (present in LSM today)
- Ability to list different types of arrays being managed, their capabilities and used/free space
- Ability to create/list/delete/resize volumes (LUN or exports, available in LSM as of today)
- Get monitoring info with object (LUN/snapshot/volume) as optional parameter for specific info. Eg: container/pool free/used space, raid type etc.

Need to make sure the above info is listed in a coherent way across arrays (number of LUNs, raid type used, free/total per container/pool, per LUN?). Also need I/O statistics wherever possible.
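For reference, a rough sketch of what the listing side could look like with the libstoragemgmt Python client; the method names here (Client, systems, pools, volumes, capabilities) follow later libstoragemgmt releases and have shifted across versions, so treat this as illustrative rather than a fixed API:

    from lsm import Client

    def describe_array(uri, password=None):
        # uri is the plugin URI, e.g. 'sim://' for the simulator plugin
        client = Client(uri, password)
        try:
            for system in client.systems():          # one entry per managed array
                print(system.name, client.capabilities(system))
            for pool in client.pools():              # containers, with total/free space
                print(pool.name, pool.total_space, pool.free_space)
            for vol in client.volumes():             # LUNs/volumes already provisioned
                print(vol.name, vol.size_bytes)
        finally:
            client.close()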
Saggi Mizrahi wrote on Mon, 18 Jun 2012 at 16:15 -0400:
First of all I'd like to suggest not using the LSM acronym as it can also mean live-storage-migration and maybe other things.
Linux Security Modules (base of SELinux and AppArmor) comes to my mind every time I see the acronym.
David