Will 'openlmi-storage' follow the SNIA SMI-S standard?

Jan Safranek jsafrane at redhat.com
Thu Oct 31 08:04:08 UTC 2013


On 10/28/2013 03:51 AM, Gris Ge wrote:
> On Fri, Oct 25, 2013 at 01:30:33PM +0200, Jan Safranek wrote:
>> That looks like a bug, association to CIM_ComputerSystem is supported.
>> What version of openlmi-storage do you use?
> openlmi-storage-0.5.1-2.fc19.noarch.rpm
> 
> With openlmi-storage-0.6.0-2.elx, I got a Python traceback. I will file a
> bug for it.
>>
>>>  * CIM_StorageVolume is not supported.
>>
>> Well, CIM defines both StorageVolume and LogicalDisk, but there is no way
>> to distinguish them on Linux. Any block device can be exported using iSCSI
>> (= StorageVolume), formatted with a filesystem (= LogicalDisk), or both (!).
>> Thus we've chosen to use StorageExtents for everything. Later, if we add
>> iSCSI target configuration, we might expose StorageVolumes.
>>
> Noted.
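
For illustration, a minimal pywbem sketch of what that looks like from the
client side; the host name and credentials below are placeholders:

    import pywbem

    # Connect to the CIMOM on the managed machine (placeholder credentials).
    conn = pywbem.WBEMConnection('https://server.example.com',
                                 ('user', 'password'))

    # Every block device shows up as (a subclass of) LMI_StorageExtent,
    # no matter whether it is exported, formatted, or neither.
    for extent in conn.EnumerateInstances('LMI_StorageExtent'):
        print(extent['DeviceID'])
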
> 
> One libstoragemgmt user is seeking a way to control local Linux storage
> using OpenLMI-storage via the libstoragemgmt API.
> 
> Is there any diagram of LVM management (like the ones SNIA uses in the
> SMI-S spec PDFs) I could use to evaluate the possibility?

There is extensive documentation, including diagrams, at
http://drupal-openlmi.rhcloud.com/sites/default/files/doc/admin/openlmi-storage/latest/index.html

Simplified diagram (without any *Setting classes) of LVM on top of MD RAID:
http://drupal-openlmi.rhcloud.com/sites/default/files/doc/admin/openlmi-storage/latest/concept-devices.html

Full LVM:
http://drupal-openlmi.rhcloud.com/sites/default/files/doc/admin/openlmi-storage/latest/usage-lvm.html

Full MD:
http://drupal-openlmi.rhcloud.com/sites/default/files/doc/admin/openlmi-storage/latest/usage-raid.html

(you can find partitioning and other features documented there too)

> 
> These are just my quick thoughts on mapping Linux terms to SNIA terms:
> 
> MD:
> 
>     Treat MD as a StoragePool, using a CompositeExtent to represent the RAID
>     layout. Since a filesystem can be created on /dev/mdX, we can represent
>     /dev/mdX as a StorageVolume and create it once the StoragePool is created.
>     The InstanceID could use the filename from the '/dev/disk/by-id/' folder.
> 
>     A partition of /dev/mdX could be a GenericDiskPartition.
> 
>     For CreateOrModifyStoragePool, 'InExtents' holds the StorageExtents,
>     which could come from a StorageVolume, DiskDrive, or even a StoragePool.
>     'Goal' is a StorageSetting which defines the RAID level.
> 
>     Both ReturnToStoragePool and DeleteStoragePool remove the whole MD.

We use a variant of the Extent Composition Profile with
StorageConfigurationService.CreateOrModifyElementFromElements(). I don't
think an MD array is a 'pool'; it's just a CompositeStorageExtent. There can
be MD containers, which have pool characteristics and from which various MD
arrays can be allocated, but we don't implement them for now, as I don't
think they are widely used.

We also provide the more Linux-friendly
StorageConfigurationService.CreateOrModifyMDRAID().
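
A minimal sketch of calling it through pywbem; the parameter names and the
RAID level encoding are assumptions to be checked against the provider
documentation, and the host, credentials, and device names are placeholders:

    import pywbem

    conn = pywbem.WBEMConnection('https://server.example.com',
                                 ('user', 'password'))

    # The service that implements CreateOrModifyMDRAID().
    service = conn.EnumerateInstanceNames('LMI_StorageConfigurationService')[0]

    # Member devices for the array; matching on DeviceID is an assumption,
    # the actual key values depend on how the provider names devices.
    members = [p for p in conn.EnumerateInstanceNames('LMI_StorageExtent')
               if p['DeviceID'] in ('/dev/sda', '/dev/sdb')]

    # Level=1 is assumed to mean RAID1 here.
    (ret, outparams) = conn.InvokeMethod('CreateOrModifyMDRAID', service,
                                         ElementName='testraid',
                                         InExtents=members,
                                         Level=pywbem.Uint16(1))
    print(ret)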

> 
> SCSI Disks:
> 
>     /dev/sdX as DiskDrive. Each DiskDrive has an associated primordial
>     StorageExtent representing its storage space.
>     We could follow the 'Disk Drive Lite' subprofile.

We don't implement the Disk Drive Lite subprofile yet (it's on the TODO
list), but disks are exposed as StorageExtents.
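
So a client can, for example, walk from a disk's extent to the system that
owns it using plain associations (a sketch, placeholder credentials again):

    import pywbem

    conn = pywbem.WBEMConnection('https://server.example.com',
                                 ('user', 'password'))

    for disk in conn.EnumerateInstanceNames('LMI_StorageExtent'):
        # This is the association to CIM_ComputerSystem mentioned at the
        # top of this thread.
        for system in conn.Associators(disk,
                                       ResultClass='CIM_ComputerSystem'):
            print('%s -> %s' % (disk['DeviceID'], system['Name']))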

> 
> LVM:
> 
>     VG as StoragePool.
>     LV as StorageVolume.
> 
>     For CreateOrModifyStoragePool, 'InExtents' holds the StorageExtents,
>     which could come from a StorageVolume, DiskDrive, or even a StoragePool.
>     'Goal' is used for mirror or thin-provisioning settings.
>     PVs will be created automatically and presented as StorageExtents.

Yes, that's what we do, apart from the InPool argument - what kind of pool
would you expect there? Building a VG on top of another VG?
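
A sketch of such a call with pywbem (the 'Pool' output parameter is the
standard SMI-S one; host, credentials, and device names are placeholders):

    import pywbem

    conn = pywbem.WBEMConnection('https://server.example.com',
                                 ('user', 'password'))

    service = conn.EnumerateInstanceNames('LMI_StorageConfigurationService')[0]

    # Extents that will become PVs; the DeviceID values are placeholders.
    pvs = [p for p in conn.EnumerateInstanceNames('LMI_StorageExtent')
           if p['DeviceID'] in ('/dev/sdb1', '/dev/sdc1')]

    # Create the VG; PVs are created on the extents automatically.
    (ret, outs) = conn.InvokeMethod('CreateOrModifyStoragePool', service,
                                    ElementName='testvg',
                                    InExtents=pvs)
    vg = outs['Pool']  # instance path of the new pool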

> 
>     For CreateOrModifyElementFromStoragePool, as you already did: we create
>     an LV, represented as a StorageVolume.
> 
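
For completeness, allocating an LV from the pool might then look like this
(a sketch; LMI_VGStoragePool is assumed to be the VG pool class name, and
whether the provider needs an ElementType argument, and which value, should
be checked against the documentation):

    import pywbem

    conn = pywbem.WBEMConnection('https://server.example.com',
                                 ('user', 'password'))

    service = conn.EnumerateInstanceNames('LMI_StorageConfigurationService')[0]
    vg = conn.EnumerateInstanceNames('LMI_VGStoragePool')[0]

    # Allocate a 1 GiB logical volume from the volume group.
    (ret, outs) = conn.InvokeMethod('CreateOrModifyElementFromStoragePool',
                                    service,
                                    ElementName='testlv',
                                    InPool=vg,
                                    Size=pywbem.Uint64(1024 ** 3))
    lv = outs['TheElement']  # instance path of the new LV
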
>> Going back to your question, eventually we might have an API to export a
>> StorageVolume, and it might resemble SMI-S, but we're focusing on local
>> storage for now. Actually, we're thinking about using libstoragemgmt to
>> import remote LUNs (i.e. to configure the local iSCSI initiator).
>> Configuration of an iSCSI target on the managed system is much further
>> down the TODO list.
> With an iSCSI target, a Linux server would be a storage array,
> and OpenLMI-storage would be the SMI-S provider of that Linux array.
> That's quite an exciting feature.
> I have spent a lot of time fighting with EMC/IBM/etc. SMI-S providers.
> Let me know if I could be helpful.

Sure, that's an excellent idea. However, I'd like to get local storage
right first and then expose it.

On a related note, is there any library/service/command-line tool which can
configure an iSCSI target, so that we could expose local devices?

>>
>> Jan
>>
>> DISCLAIMER: I admit I am the sole author of the OpenLMI Storage CIM API.
>> I've read most of SMI-S; still, it's possible that I got it completely
>> wrong. Please correct me if so - maybe there is a better way to represent
>> Linux storage using SMI-S.
>>
> Likewise. Please do correct me if I misunderstand the SNIA/DMTF spec files.
> 
> Thanks for the detailed reply and for maintaining this great project.
> 


