Will 'openlmi-storage' follow the SNIA SMI-S standard?

Jan Safranek jsafrane at redhat.com
Fri Oct 25 11:30:33 UTC 2013


On 10/25/2013 12:31 PM, Gris Ge wrote:
> Hi Team,
> 
> I am coding SMI-S client plugin for libstoragemgmt.
> I tried openlmi-storage against it, and noticed openlmi-storage is not
> following SNIA SMI-S standard at all:
>  * CIM_StorageConfigurationService is not associated with any
>    CIM_ComputerSystem.

That looks like a bug; the association to CIM_ComputerSystem is
supported. Which version of openlmi-storage are you using?
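
For reference, this is how I'd check the association from the client
side. A minimal pywbem sketch, assuming the LMI_StorageConfigurationService
class name, the root/cimv2 namespace and the usual CIM_HostedService
association; the URL and credentials are placeholders:

    import pywbem

    # Connect to the CIMOM running openlmi-storage.
    conn = pywbem.WBEMConnection('https://managed.host:5989',
                                 ('pegasus', 'password'),
                                 default_namespace='root/cimv2')

    # Traverse the CIM_HostedService association from the storage
    # configuration service to the owning CIM_ComputerSystem.
    for svc in conn.EnumerateInstanceNames('LMI_StorageConfigurationService'):
        for cs in conn.Associators(svc,
                                   AssocClass='CIM_HostedService',
                                   ResultClass='CIM_ComputerSystem'):
            print(svc.classname, '->', cs['Name'])

If that traversal comes back empty on a recent version, it's worth a
bug report.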

>  * CIM_StorageVolume is not supported.

Well, while CIM defines both StorageVolume and LogicalDisk, there is no
way to distinguish them on Linux. Any block device can be exported
using iSCSI (= is a StorageVolume), or formatted with a filesystem
(= is a LogicalDisk), or both (!). Thus we've chosen to use
StorageExtents for everything. Later, if we add iSCSI target
configuration, we might expose StorageVolumes.
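
In practice that means a client sees every local block device (disk,
partition, LV, MD RAID, ...) as a subclass of CIM_StorageExtent. A
minimal pywbem sketch, with placeholder connection details:

    import pywbem

    conn = pywbem.WBEMConnection('https://managed.host:5989',
                                 ('pegasus', 'password'),
                                 default_namespace='root/cimv2')

    # Enumerating the base class also returns instances of all its
    # subclasses, i.e. the complete set of local block devices.
    for ext in conn.EnumerateInstances('CIM_StorageExtent'):
        print(ext.classname, ext['DeviceID'])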


> 
> I am wondering:
>     will openlmi-storage follow the SNIA SMI-S standard?
> If so, these two could be a good start:
>  * DMTF DSP1033 Profile Registration

This one should be implemented in OpenLMI-0.6.0; file a bug if there is
something wrong.
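
You can check what a host actually advertises: DSP1033 registers the
implemented profiles as CIM_RegisteredProfile instances in the interop
namespace. A pywbem sketch, assuming the namespace is root/interop
(it may differ per CIMOM):

    import pywbem

    conn = pywbem.WBEMConnection('https://managed.host:5989',
                                 ('pegasus', 'password'))

    # DSP1033: every implemented profile is advertised here.
    for prof in conn.EnumerateInstances('CIM_RegisteredProfile',
                                        namespace='root/interop'):
        print(prof['RegisteredOrganization'],
              prof['RegisteredName'],
              prof['RegisteredVersion'])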

>  * SNIA 1.6rev4 Block Book, Array Profile

OpenLMI storage _currently_ focuses on configuration of local storage,
while SMI-S aims at remote SAN/NAS management and has only the thin
Book 6: Host Elements covering configuration of the actual hosts.

We reuse a lot of SMI-S concepts to configure various Linux stuff on
hosts (LVM, MD RAID, ...), but in my opinion SMI-S still does not fit
well here. (Almost) all SMI-S profiles refer to the Block Services
package, which is kind of the 'heart' of SMI-S and, as I understand it,
is _not_ applicable to Linux. The Block Services package treats disks
just as big chunks of blocks added to a primordial pool, from which you
can allocate other pools and volumes/logical disks.
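
For contrast, this is roughly what that allocation model looks like
from a client: you ask CreateOrModifyElementFromStoragePool for a given
size out of a pool and the array decides which disks back it. A rough
pywbem sketch against a generic SMI-S array; the method, parameters and
the ElementType value come from the standard, the array details are
placeholders:

    import pywbem

    conn = pywbem.WBEMConnection('https://some.array:5989',
                                 ('user', 'password'),
                                 default_namespace='root/cimv2')

    scs = conn.EnumerateInstanceNames('CIM_StorageConfigurationService')[0]
    # In a real client you'd pick a concrete (non-primordial) pool here.
    pool = conn.EnumerateInstanceNames('CIM_StoragePool')[0]

    # Ask for 10 GiB out of the pool; the array chooses the backing disks.
    ret, out = conn.InvokeMethod(
        'CreateOrModifyElementFromStoragePool', scs,
        ElementName='lun0',
        ElementType=pywbem.Uint16(2),        # 2 = StorageVolume
        InPool=pool,
        Size=pywbem.Uint64(10 * 1024**3))
    print('return code:', ret, 'element:', out.get('TheElement'))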

On Linux, we treat disks differently - in the vast majority of cases we
partition them. So there is no single pool of all disks from which you
can allocate other concrete pools; each disk is treated _individually_
using the OpenLMI variant of the Disk Partition Subprofile.

I wanted to make this disk - partitions - MD RAID/VG/filesystem/whatever
hierarchy _explicit_; I don't want to hide everything behind a
primordial pool and create partitions "automatically" as stuff gets
allocated from the pool.
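
To make the contrast concrete, with openlmi-storage the client names
every layer itself: pick a disk, create a partition on it, then build
the VG/RAID/filesystem on top of that partition. A rough sketch; the
LMI_CreateOrModifyPartition method and its parameter names are my
approximation of the openlmi-storage API, and /dev/sdb is a placeholder:

    import pywbem

    conn = pywbem.WBEMConnection('https://managed.host:5989',
                                 ('pegasus', 'password'),
                                 default_namespace='root/cimv2')

    # The client explicitly picks the disk it wants to partition.
    disk = next(e.path for e in conn.EnumerateInstances('LMI_StorageExtent')
                if e['DeviceID'] == '/dev/sdb')
    partsvc = conn.EnumerateInstanceNames(
        'LMI_DiskPartitionConfigurationService')[0]

    # Create a 10 GiB partition on that particular disk.
    ret, out = conn.InvokeMethod('LMI_CreateOrModifyPartition', partsvc,
                                 extent=disk,
                                 Size=pywbem.Uint64(10 * 1024**3))
    partition = out['Partition']
    # The next layer (VG, MD RAID, filesystem) is then created on this
    # explicit partition, so the whole stack stays visible.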

In other words, we do not want to create SAN management software; we
want to create Linux management SW, with all Linux block devices clearly
visible and explicitly created by an application.


Going back to your question: eventually we might have an API to export
a StorageVolume and it might resemble SMI-S, but we're focusing on
local storage for now. Actually, we're thinking about using
libstoragemgmt to import remote LUNs (i.e. to configure the local iSCSI
initiator). Configuration of an iSCSI target on the managed system is
much further down the TODO list.

Jan

DISCLAIMER: I admit I am the only author of the OpenLMI Storage CIM API.
I've read most of SMI-S, but it's still possible that I got it completely
wrong. Please correct me if so; maybe there is a better way to represent
Linux storage using SMI-S.




