Resending a second time; the mailing list didn't like that I hadn't signed up for
it yet, and the links on the lsm wiki are wrong...
Option B would be really really rough. I have patches here:
https://github.com/HP-Scale-out-Storage/libstoragemgmt/tree/wip-all
They implement a bunch of features for physical disks that are really useful (to me
and the solutions I work with, at least), such as:
-support for exposure of the SCSI device node (some code already existed for this, but I now
expose the node via a method call)
-support for exposure of the SCSI generic node
-support for discovery of the SEP associated with a disk, and caching of the SEP's SCSI
generic node (leveraged for the features below)
-support for the disk's SAS address
-support for the disk's port/bay/box information (some code already existed for this, but
I'm querying via a mechanism that doesn't involve hpssacli)
-support to enable/disable physical disk IDENT LEDs via sg_ses (see the sketch after these lists)
-support to enable/disable physical disk FAULT LEDs via sg_ses
There are a few RAID mode features in there too:
-support for exposure of the volume's SCSI device node
-support for exposure of the volume's SCSI generic node
-support to enable/disable the volume's IDENT LEDs via hpssacli
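For the LED features, the mechanism is essentially a thin wrapper around sg_ses.
A minimal sketch (illustrative function, not the actual patch code), assuming the
enclosure's SCSI generic node and the disk's element index are already known:

    import subprocess

    def set_disk_led(enclosure_sg_node, element_index, led="ident", enable=True):
        # sg_ses addresses an element by its index within the enclosure;
        # --set raises the IDENT/FAULT bit, --clear lowers it.
        action = ("--set=" if enable else "--clear=") + led
        subprocess.check_call(
            ["sg_ses", "--index=%d" % element_index, action, enclosure_sg_node])

    # e.g. light the locate LED on element 4 of the enclosure at /dev/sg3:
    set_disk_led("/dev/sg3", 4, led="ident", enable=True)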
Just wanted to get that out there to make sure we don't duplicate work. If you
haven't started much implementation work yet, I'd prefer that you take a look at
my patches as they're submitted and we can rework the implementation if necessary. The
code I have here is already in use by a teammate of mine for a Lustre solution, and I will
be using this code within Ceph in the near future (that work hasn't started yet, but
is my next task after upstreaming everything you see in wip-all).
In general, I don't think you should do much at all to restrict or wrap around
physical disks. Frankly, knowing what I know about roadmaps and LSI/Avago feature sets
today, all of the code we might implement should just:
-check whether physical disks are exposed to the host (there are lots of ways to do this,
as you can see in my patches; one is sketched below)
-if they are, collect extra data
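One of those ways, sketched under the assumption that anything the sd driver has
bound shows up in sysfs (real code would also filter by the controller in question):

    import glob, os

    def exposed_scsi_disks():
        # Each disk bound by the sd driver appears under
        # /sys/class/scsi_disk/<h:c:t:l>/device, with its block node
        # name one level down under block/.
        return ["/dev/" + os.path.basename(p)
                for p in glob.glob("/sys/class/scsi_disk/*/device/block/*")]

    # A non-empty list means physical disks are exposed; go collect extra data.
    print(exposed_scsi_disks())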
As you say, a user can very easily integrate a check at discovery time to figure out what
mode they're in. If they're in HBA mode, just look for physical devices. If in
RAID mode, just look for logical devices (unless looking to configure). If in some sort of
mixed mode (there is support in the wild today from at least LSI to expose physical disks
and logical volumes simultaneously), check both.
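In libstoragemgmt terms, that discovery-time logic could look something like this
('hpsa://' is an illustrative plugin URI; disks() and volumes() are existing
lsm.Client calls, though whether disks() reports only OS-exposed disks is exactly
the semantic question in the options below):

    from lsm import Client

    client = Client("hpsa://")       # illustrative plugin URI
    volumes = client.volumes()       # logical devices (RAID/mixed mode)
    disks = client.disks()           # physical devices (HBA/mixed mode)

    # Checking both degenerates to the right thing in pure HBA mode
    # (no volumes) or pure RAID mode (no exposed physical disks).
    for v in volumes:
        print("logical device:", v.name)
    for d in disks:
        print("physical device:", d.name)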
Thanks for reaching out about this!
Joe
-----Original Message-----
From: Gris Ge [mailto:fge@redhat.com]
Sent: Thursday, October 22, 2015 9:11 AM
To: libstoragemgmt-devel@lists.fedorahosted.org
Cc: Handzik, Joe
Subject: Supporting JBOD/HBA disks in libstoragemgmt.
Hi Team,
I would like to share some possible approaches for supporting JBOD/HBA
disks:
Background:
HP SmartArray:
Once configured to HBA mode, all RAIDed volumes are purged and
all physical disks are exposed directly to the OS without
any configuration. The user has to convert back to RAID mode
before making any config changes to the card.
LSI MegaRAID:
JBOD mode only exposes 'unconfigured good' disks to the OS;
existing RAIDed volumes are still functional. In JBOD mode, the
user can still convert JBOD disks back to hidden RAIDed disks.
Only in JBOD mode can the user expose disks directly to the OS.
LibstorageMgmt:
* It is explicitly documented that a Volume can only come
from a Pool.
* It is assumed (not documented) that only volumes are
OS accessible.
* I intend to introduce a new plugin -- SES -- to support SCSI
enclosures, along with new methods like 'disk_locate_led_on()',
'lsm.Client.fans()', 'lsm.Client.sensors()', etc., for the
JBOD disk systems which Ceph uses.
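As a rough illustration only (the 'ses://' URI and every method below
are proposals; nothing here exists yet), the new plugin might be
exercised like this:

    from lsm import Client

    # All proposed API -- none of this exists in libstoragemgmt today.
    client = Client("ses://")              # proposed SES plugin URI
    for disk in client.disks():
        client.disk_locate_led_on(disk)    # proposed: turn on locate LED
    print(client.fans())                   # proposed: enclosure fan status
    print(client.sensors())                # proposed: enclosure sensor data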
Options:
A):
`lsm.System.mode == lsm.System.MODE_HBA` indicates that, besides
checking lsm.Volume, lsm.Disk might also be OS accessible.
Add `lsm.Disk.STATUS_HBA` to indicate disks that are exposed to the
OS directly.
Pros:
Minimum code required for HBA/JBOD disks.
Cons:
The API user needs to do an extra check on lsm.System.mode
and also call lsm.Client.disks() to find OS-accessible
volumes/disks.
B):
Create a pseudo lsm.Pool and lsm.Volume for each HBA/JBOD disk.
Pros:
No workflow change required on the API user side.
Cons:
Extra code needed to create the pseudo pools/volumes.
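To make the trade-off concrete, the extra check in option A)'s cons
might look like this on the API user side (System.mode, System.MODE_HBA
and Disk.STATUS_HBA are the proposed additions, not existing API):

    from lsm import Client, Disk, System

    client = Client("sim://")              # illustrative URI
    os_accessible = client.volumes()
    for system in client.systems():
        # Proposed: System.mode, System.MODE_HBA, Disk.STATUS_HBA.
        if system.mode == System.MODE_HBA:
            os_accessible += [d for d in client.disks()
                              if d.status & Disk.STATUS_HBA]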
I personally prefer option A).
Please kindly let me know your ideas.
Thank you very much.
Best regards.
--
Gris Ge