On 10/22/2015 09:10 AM, Gris Ge wrote:
> Hi Team,
Hi Gris!
> I would like to share the possible approaches on supporting JBOD/HBA
> disks:
>
> Background:
>
> HP SmartArray:
>     Once configured to HBA mode, all RAIDed volumes will be purged and
>     all physical disks are exposed to the OS directly without any
>     configuration. The user has to convert back to RAID mode before
>     making any config changes to the card.
>
> LSI MegaRAID:
>     JBOD mode only exposes 'unconfigured good' disks to the OS; existing
>     RAIDed volumes are still functional. In JBOD mode, the user can
>     still convert JBOD disks back to hidden RAIDed disks.
In a mixed configuration what do things look like? There is a pool for
the volume(s) that are participating in the RAID, but not a pool for the
un-configured disks? What would the lsm.System.mode be in this
case? lsm.System.MODE_ITS_COMPLICATED :-)
> Only in JBOD mode can the user expose disks directly to the OS.
>
> LibstorageMgmt:
> * Explicitly documented that a Volume can only come from a Pool.
> * Assumed (not documented) that only volumes are OS accessible.
Unless a volume is masked & mapped it's not accessible on external
storage arrays, so a volume doesn't automatically imply OS accessible.
It's really just the abstraction of a disk device which could be virtual
or physical.
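As an aside, that distinction is already visible in the Python API today;
a minimal sketch against the simulator plugin (assuming at least one volume
and access group already exist there):

    import lsm

    client = lsm.Client('sim://')   # simulator URI; a real array has its own
    vol = client.volumes()[0]       # an abstract block device, not OS visible
    ag = client.access_groups()[0]  # the initiators allowed to see volumes
    client.volume_mask(ag, vol)     # only now can the host(s) in ag reach vol
    client.close()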
> * I intend to introduce a new plugin -- SES -- to support SCSI
>   enclosures, and new methods like 'disk_locate_led_on()',
>   'lsm.Client.fans()', 'lsm.Client.sensors()', etc. for the
>   JBOD disk systems which Ceph is using.
Does this need to be a new plugin or can we add this functionality to
existing plugins? Users would like to be able to blink a disk, check
fans etc. on NetApp, EMC and other external storage arrays just like a
user with local storage via an HBA, right? We may not be able to
support them at this time, but in the future we might.
If a user has an external storage array with no configured 'pool(s)' and
no volumes, we would just have the disks. Today we don't have the ability
to take 1 or more disks and add them to a pool; we expect the user to do
that using the vendor tools. But today a user could query the disks and
get information about each of them or call method(s) which take a disk
as an argument, like blink, correct?
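E.g. this much works today with no pool or volume in sight (sketch against
the simulator):

    import lsm

    client = lsm.Client('sim://')
    for disk in client.disks():     # existing call, independent of pools
        print(disk.id, disk.name, disk.disk_type,
              disk.block_size * disk.num_of_blocks)  # capacity in bytes
    client.close()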
As for fans/sensors: as a single HBA could have multiple JBODs attached
to it, and each enclosure can have one or more fans & sensors etc., how do
we distinguish the fans/sensors in each JBOD vs. the system abstraction
which is the HBA?
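One possible shape would be to key everything by enclosure. Every name below
is hypothetical (there is no SES plugin or enclosures()/fans() call today);
it's only meant to show the disambiguation problem:

    import lsm

    client = lsm.Client('ses://')               # hypothetical SES plugin URI
    for system in client.systems():             # the HBA abstraction
        for enc in client.enclosures(system):   # hypothetical: JBODs behind it
            for fan in client.fans(enc):        # proposed lsm.Client.fans(),
                print(system.id, enc.id, fan.name)  # scoped by enclosure id
    client.close()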
> Options:
>
> A):
>     `lsm.System.mode == lsm.System.MODE_HBA` indicates that besides
>     checking lsm.Volume, lsm.Disk might also be OS accessible.
>     Add `lsm.Disk.STATUS_HBA` to indicate disks are exposed to the OS
>     directly.
>     Pros:
>         Minimum code required for HBA/JBOD disks.
>     Cons:
>         API users need to do an extra check on lsm.System.mode
>         and also call lsm.Client.disks() to find the OS
>         accessible volumes/disks.
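To make that con concrete, every client would grow something like the
following. MODE_HBA and STATUS_HBA are the proposed constants, stubbed here
with placeholder values since neither exists in the API yet:

    import lsm

    MODE_HBA = 3           # placeholder for the proposed lsm.System.MODE_HBA
    STATUS_HBA = 1 << 20   # placeholder for the proposed lsm.Disk.STATUS_HBA

    client = lsm.Client('sim://')
    accessible = list(client.volumes())    # the only check clients do today
    for system in client.systems():
        if getattr(system, 'mode', None) == MODE_HBA:  # proposed attribute
            accessible += [d for d in client.disks()
                           if d.system_id == system.id
                           and d.status & STATUS_HBA]  # proposed status bit
    client.close()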
> B):
>     Create a pseudo lsm.Pool and lsm.Volume for each HBA/JBOD disk.
Just to clarify, we would need to create 1 pseudo pool and N number of
pseudo volumes (1 for each disk), correct?
>     Pros:
>         No workflow change required on the API user side.
>     Cons:
>         Need extra code to create the pseudo pool/volumes.
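For reference, inside a plugin that extra code might look roughly like this
(a sketch using the lsm 1.x Pool/Volume constructors; the ids, names and
helper function are made up):

    import lsm

    def pseudo_objects(system, hba_disks):
        # option B: one pseudo pool per system, one pseudo volume per disk
        pool = lsm.Pool('POOL_PSEUDO_' + system.id, 'HBA exposed disks',
                        lsm.Pool.ELEMENT_TYPE_VOLUME, 0,
                        sum(d.block_size * d.num_of_blocks for d in hba_disks),
                        0, lsm.Pool.STATUS_OK, '', system.id)
        vols = [lsm.Volume('VOL_PSEUDO_' + d.id, d.name, '', d.block_size,
                           d.num_of_blocks, lsm.Volume.ADMIN_STATE_ENABLED,
                           system.id, pool.id)
                for d in hba_disks]
        return pool, vols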
> I personally prefer option A).
>
> Please kindly let me know your ideas.
Knowing nothing else but the pros/cons my preference would be to go with
B. We either complicate our implementation of the plugin or we push the
complexity to every client that uses the library. I would rather keep
the client API simple and consistent so that clients can use the same code
whether they are walking through a SAN or a local HBA. We shouldn't
trade plugin complexity for client complexity.
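With B, the exact same loop keeps working whether the URI points at an
external array or a local HBA plugin (URIs illustrative):

    import lsm

    for uri in ('ontap://root@filer', 'megaraid://', 'sim://'):
        client = lsm.Client(uri)
        for vol in client.volumes():   # under B, HBA disks show up here too
            print(uri, vol.name, vol.block_size * vol.num_of_blocks)
        client.close()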
However, do we even need to worry about option A or B in the case of an
HBA with some disks? We already give them the ability to list the
disks; could we add a method which takes a disk as an argument to blink?
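Something as simple as this, reusing your proposed name (hypothetical; no
such call ships today):

    import lsm

    client = lsm.Client('megaraid://')   # local HBA plugin, illustrative
    disk = client.disks()[0]
    client.disk_locate_led_on(disk)      # proposed in this thread only
    client.close()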
-Tony