Change in vdsm[ovirt-3.5-gluster]: gluster: add createBrick verb

dnarayan at redhat.com
Thu Apr 16 10:14:10 UTC 2015


Hello Piotr Kliczewski, Timothy Asir, Bala.FA, Federico Simoncelli,

I'd like you to do a code review.  Please visit

    https://gerrit.ovirt.org/39926

to review the following change.

Change subject: gluster: add createBrick verb
......................................................................

gluster: add createBrick verb

This patch adds a new verb, createBrick, which creates
PVs from the given list of devices and creates a thin LV
out of them to back a gluster brick.
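
A hypothetical invocation via vdsClient (brick, device and
mount point names are examples only):

    vdsClient -s 0 glusterCreateBrick brickName=brick1 \
        mountPoint=/bricks/brick1 devices=vdb,vdc fsType=xfs \
        raidType=6 stripeSize=256 pdCount=12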

Change-Id: Ic47c4c56834deb457ae9d038f77bcf69c7b39ba5
Signed-off-by: Timothy Asir <tjeyasin at redhat.com>
Signed-off-by: Timothy Asir Jeyasingh <tjeyasin at redhat.com>
Reviewed-on: https://gerrit.ovirt.org/35498
Reviewed-by: Bala.FA <barumuga at redhat.com>
Reviewed-by: Piotr Kliczewski <piotr.kliczewski at gmail.com>
Reviewed-by: Federico Simoncelli <fsimonce at redhat.com>
Signed-off-by: Darshan N <dnarayan at redhat.com>
---
M client/vdsClientGluster.py
M vdsm.spec.in
M vdsm/gluster/Makefile.am
M vdsm/gluster/api.py
M vdsm/gluster/apiwrapper.py
M vdsm/gluster/exception.py
A vdsm/gluster/fstab.py
M vdsm/gluster/storagedev.py
M vdsm/rpc/vdsmapi-gluster-schema.json
9 files changed, 468 insertions(+), 1 deletion(-)


  git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/26/39926/1

diff --git a/client/vdsClientGluster.py b/client/vdsClientGluster.py
index b2596cf..6594c85 100644
--- a/client/vdsClientGluster.py
+++ b/client/vdsClientGluster.py
@@ -656,6 +656,24 @@
         pp.pprint(status)
         return status['status']['code'], status['status']['message']
 
+    def do_glusterCreateBrick(self, args):
+        params = self._eqSplit(args)
+        devList = params.get('devices', '').split(',')
+        brickName = params.get('brickName', '')
+        mountPoint = params.get('mountPoint', '')
+        fsType = params.get('fsType', '')
+        raidType = params.get('raidType', '')
+        raidParams = {}
+        if raidType:
+            raidParams['type'] = raidType.upper()
+            raidParams['stripeSize'] = int(params.get('stripeSize', 0))
+            raidParams['pdCount'] = int(params.get('pdCount', 0))
+
+        status = self.s.glusterCreateBrick(brickName, mountPoint,
+                                           devList, fsType, raidParams)
+        pp.pprint(status)
+        return status['status']['code'], status['status']['message']
+
 
 def getGlusterCmdDict(serv):
     return \
@@ -1112,5 +1130,23 @@
              serv.do_glusterVolumeSnapshotList,
              ('[volumeName=<volume_name>]',
               'snapshot list for given volume'
-              ))
+              )),
+         'glusterCreateBrick': (
+             serv.do_glusterCreateBrick,
+             ('brickName=<brick_name> mountPoint=<mountPoint> '
+              'devices=<device[,device, ...]> '
+              '[raidType=<raid_type>] [stripeSize=<stripe_size>] '
+              '[fsType=<fs_type>] [pdCount=<pd_count>] \n\n'
+              '<brick_name> is the name of the brick\n'
+              '<mountPoint> device mount point\n'
+              '<device[,device, ...]> is the list of device name(s)\n'
+              '<fs_type> is the file system type of the brick\n'
+              '<raid_type> is the RAID type (0, 6 or 10)\n'
+              '<stripe_size> is the stripe unit size in KiB\n'
+              '<pd_count> is the total number of physical '
+              'disks used in the RAID\n'
+              '<raid_type>, <stripe_size> and <pd_count> '
+              'are optional parameters\n',
+              'This will create a brick using the given devices'
+              )),
          }
diff --git a/vdsm.spec.in b/vdsm.spec.in
index 24099be..8783adb 100644
--- a/vdsm.spec.in
+++ b/vdsm.spec.in
@@ -114,6 +114,9 @@
 BuildRequires: python-ordereddict
 BuildRequires: python-simplejson >= 2.0.9
 %endif
+%if 0%{?rhel} > 6 || 0%{?fedora}
+BuildRequires: python-blivet >= 0.61.14
+%endif
 
 # Autotools BuildRequires
 %if 0%{?enable_autotools}
@@ -652,6 +655,9 @@
 Requires: %{name} = %{version}-%{release}
 Requires: glusterfs-server
 Requires: python-magic
+%if 0%{?rhel} > 6 || 0%{?fedora}
+Requires: python-blivet >= 0.61.14
+%endif
 
 %description gluster
 Gluster plugin enables VDSM to serve Gluster functionalities.
@@ -1569,6 +1575,7 @@
 %doc COPYING
 %{_datadir}/%{vdsm_name}/gluster/api.py*
 %{_datadir}/%{vdsm_name}/gluster/apiwrapper.py*
+%{_datadir}/%{vdsm_name}/gluster/fstab.py*
 %{_datadir}/%{vdsm_name}/rpc/vdsmapi-gluster-schema.json
 %{_datadir}/%{vdsm_name}/gluster/gfapi.py*
 %{_datadir}/%{vdsm_name}/gluster/hooks.py*
diff --git a/vdsm/gluster/Makefile.am b/vdsm/gluster/Makefile.am
index 96b8054..712bba5 100644
--- a/vdsm/gluster/Makefile.am
+++ b/vdsm/gluster/Makefile.am
@@ -32,6 +32,7 @@
 	apiwrapper.py \
 	cli.py \
 	exception.py \
+	fstab.py \
 	gfapi.py \
 	hooks.py \
 	services.py \
diff --git a/vdsm/gluster/api.py b/vdsm/gluster/api.py
index 8ba262f..505cad6 100644
--- a/vdsm/gluster/api.py
+++ b/vdsm/gluster/api.py
@@ -489,6 +489,16 @@
         status = self.svdsmProxy.glusterSnapshotInfo(volumeName)
         return {'snapshotList': status}
 
+    @exportAsVerb
+    def createBrick(self, name, mountPoint, devList, fsType=None,
+                    raidParams={}, options=None):
+        status = self.svdsmProxy.glusterCreateBrick(name,
+                                                    mountPoint,
+                                                    devList,
+                                                    fsType,
+                                                    raidParams)
+        return {'device': status}
+
 
 def getGlusterMethods(gluster):
     l = []
diff --git a/vdsm/gluster/apiwrapper.py b/vdsm/gluster/apiwrapper.py
index 4b274db..1a05d89 100644
--- a/vdsm/gluster/apiwrapper.py
+++ b/vdsm/gluster/apiwrapper.py
@@ -81,6 +81,11 @@
     def storageDevicesList(self, options=None):
         return self._gluster.storageDevicesList()
 
+    def createBrick(self, name, mountPoint, devList, fsType=None,
+                    raidParams={}):
+        return self._gluster.createBrick(name, mountPoint,
+                                         devList, fsType, raidParams)
+
 
 class GlusterService(GlusterApiBase):
     def __init__(self):
diff --git a/vdsm/gluster/exception.py b/vdsm/gluster/exception.py
index 50b6993..3c7ad8b 100644
--- a/vdsm/gluster/exception.py
+++ b/vdsm/gluster/exception.py
@@ -399,6 +399,74 @@
     message = "Host UUID not found"
 
 
+class GlusterHostStorageDeviceNotFoundException(GlusterHostException):
+    code = 4409
+
+    def __init__(self, deviceList):
+        self.message = "Device(s) %s not found" % deviceList
+
+
+class GlusterHostStorageDeviceInUseException(GlusterHostException):
+    code = 4410
+
+    def __init__(self, deviceList):
+        self.message = "Device(s) %s already in use" % deviceList
+
+
+class GlusterHostStorageDeviceMountFailedException(GlusterHostException):
+    code = 4411
+
+    def __init__(self, device, mountPoint, fsType, mountOpts):
+        self.message = "Failed to mount device %s on mount point %s using " \
+                       "fs-type %s with mount options %s" % (
+                           device, mountPoint, fsType, mountOpts)
+
+
+class GlusterHostStorageDeviceFsTabFoundException(GlusterHostException):
+    code = 4412
+
+    def __init__(self, device):
+        self.message = "fstab entry for device %s already exists" % device
+
+
+class GlusterHostStorageDevicePVCreateFailedException(GlusterHostException):
+    code = 4413
+
+    def __init__(self, device, alignment, rc=0, out=(), err=()):
+        self.rc = rc
+        self.out = out
+        self.err = err
+        self.message = "Failed to create LVM PV for device %s with " \
+                       "data alignment %s" % (device, alignment)
+
+
+class GlusterHostStorageDeviceLVConvertFailedException(GlusterHostException):
+    code = 4414
+
+    def __init__(self, device, alignment, rc=0, out=(), err=()):
+        self.rc = rc
+        self.out = out
+        self.err = err
+        self.message = "Failed to run lvconvert for device %s with " \
+                       "data alignment %s" % (device, alignment)
+
+
+class GlusterHostStorageDeviceLVChangeFailedException(GlusterHostException):
+    code = 4415
+
+    def __init__(self, poolName, rc=0, out=(), err=()):
+        self.rc = rc
+        self.out = out
+        self.err = err
+        self.message = "Failed to run lvchange for the thin pool: %s" % (
+            poolName)
+
+
+class GlusterHostStorageDeviceMkfsFailedException(GlusterHostException):
+    code = 4416
+
+    def __init__(self, device, alignment=0, stripeSize=0, fsType=None):
+        self.message = "Failed to create %s file system on device %s " \
+                       "with alignment %s and stripe size %s" % (
+                           fsType, device, alignment, stripeSize)
+
+
+class GlusterHostStorageDeviceMakeDirsFailedException(GlusterHostException):
+    code = 4516
+    message = "Make directories failed"
+
+
 # Hook
 class GlusterHookException(GlusterException):
     code = 4500
diff --git a/vdsm/gluster/fstab.py b/vdsm/gluster/fstab.py
new file mode 100644
index 0000000..5a2b4c8
--- /dev/null
+++ b/vdsm/gluster/fstab.py
@@ -0,0 +1,77 @@
+#
+# Copyright 2015 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
+#
+# Refer to the README and COPYING files for full details of the license
+#
+
+import logging
+import os
+from collections import namedtuple
+
+import exception as ge
+from . import safeWrite
+
+
+log = logging.getLogger("Gluster")
+FstabRecord = namedtuple("FstabRecord", "device, mountPoint, fsType, "
+                         "mntOpts, fsDump, fsPass")
+
+
+class FsTab(object):
+    def __init__(self, fileName="/etc/fstab"):
+        self.fileName = fileName
+
+    def _list(self):
+        devList = []
+        with open(self.fileName, "r") as f:
+            for line in f:
+                line = line.strip()
+                if not (line == '' or line.startswith("#")):
+                    tokens = line.split()
+                    devList.append(FstabRecord(tokens[0], tokens[1],
+                                               tokens[2],
+                                               tokens[3].split(","),
+                                               int(tokens[4]),
+                                               int(tokens[5])))
+        return devList
+
+    def _getFsUuid(self, device):
+        for uuid in os.listdir("/dev/disk/by-uuid"):
+            if device == os.path.realpath("/dev/disk/by-uuid/%s" % uuid):
+                return uuid
+        return None
+
+    def _exists(self, device):
+        uuid = "UUID=%s" % (self._getFsUuid(device))
+        for dev in self._list():
+            if device == dev.device or uuid == dev.device:
+                return True
+        return False
+
+    def add(self, device, mountPoint, fsType,
+            mntOpts=['defaults'], fsDump=0, fsPass=0):
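+        # Appends a tab-separated fstab entry, preferring the filesystem
+        # UUID over the raw device path, e.g. (hypothetical values):
+        #   UUID=<fs-uuid>  /bricks/brick1  xfs  defaults  0  0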
+        if self._exists(device):
+            raise ge.GlusterHostStorageDeviceFsTabFoundException(device)
+        uuid = self._getFsUuid(device)
+        if not uuid:
+            log.warn("UUID not found for device %s" % device)
+        with open(self.fileName) as f:
+            content = f.read()
+        content += "%s%s\t%s\t%s\t%s\t%s\t%s\n" % (
+            '' if content.endswith('\n') else '\n',
+            "UUID=%s" % uuid if uuid else device,
+            mountPoint, fsType, ",".join(mntOpts), fsDump, fsPass)
+        safeWrite(self.fileName, content)
diff --git a/vdsm/gluster/storagedev.py b/vdsm/gluster/storagedev.py
index 1a90c20..a860179 100644
--- a/vdsm/gluster/storagedev.py
+++ b/vdsm/gluster/storagedev.py
@@ -18,8 +18,42 @@
 # Refer to the README and COPYING files for full details of the license
 #
 
+import errno
+import logging
+import os
+
 import blivet
+import blivet.formats
+import blivet.formats.fs
+import blivet.size
+from blivet.devices import LVMVolumeGroupDevice
+from blivet.devices import LVMThinPoolDevice
+from blivet.devices import LVMLogicalVolumeDevice
+from blivet.devices import LVMThinLogicalVolumeDevice
+
+import storage.lvm as lvm
+from vdsm import utils
+
+import fstab
+import exception as ge
 from . import makePublic
+
+
+log = logging.getLogger("Gluster")
+_lvconvertCommandPath = utils.CommandPath("lvconvert",
+                                          "/sbin/lvconvert",
+                                          "/usr/sbin/lvconvert",)
+_lvchangeCommandPath = utils.CommandPath("lvchange",
+                                         "/sbin/lvchange",
+                                         "/usr/sbin/lvchange",)
+
+# All sizes are in KiB unless otherwise specified (MIN_VG_SIZE is in MiB)
+DEFAULT_CHUNK_SIZE_KB = 256
+DEFAULT_METADATA_SIZE_KB = 16777216
+MIN_VG_SIZE = 1048576
+MIN_METADATA_PERCENT = 0.005
+DEFAULT_FS_TYPE = "xfs"
+DEFAULT_MOUNT_OPTIONS = "inode64,noatime"
 
 
 def _getDeviceDict(device, createBrick=False):
@@ -70,3 +104,191 @@
     blivetEnv = blivet.Blivet()
     blivetEnv.reset()
     return _parseDevices(blivetEnv.devices)
+
+
+@makePublic
+def createBrick(brickName, mountPoint, devNameList, fsType=DEFAULT_FS_TYPE,
+                raidParams={}):
+    def _getDeviceList(devNameList):
+        return [blivetEnv.devicetree.getDeviceByName(devName.split("/")[-1])
+                for devName in devNameList]
+
+    def _makePartition(deviceList):
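+        # Whole disks and multipath devices are (re)initialized with a
+        # single grown "lvmpv" partition; any other device type is used
+        # as a PV directly.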
+        pvDeviceList = []
+        doPartitioning = False
+        for dev in deviceList:
+            if dev.type not in ['disk', 'dm-multipath']:
+                pvDeviceList.append(dev)
+            else:
+                blivetEnv.initializeDisk(dev)
+                part = blivetEnv.newPartition(fmt_type="lvmpv", grow=True,
+                                              parents=[dev])
+                blivetEnv.createDevice(part)
+                pvDeviceList.append(part)
+                doPartitioning = True
+
+        if doPartitioning:
+            blivet.partitioning.doPartitioning(blivetEnv)
+        return pvDeviceList
+
+    def _createPV(deviceList, alignment=0):
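+        # When an alignment is given (RAID), pvcreate is invoked through
+        # lvm with --dataalignment so that PV data starts on a full stripe
+        # boundary; otherwise blivet formats the devices as plain lvmpv.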
+        def _createAlignedPV(deviceList, alignment):
+            for dev in deviceList:
+                rc, out, err = lvm._createpv(
+                    [dev.path], metadataSize=0,
+                    options=('--dataalignment', '%sK' % alignment))
+                if rc:
+                    raise ge.GlusterHostStorageDevicePVCreateFailedException(
+                        dev.path, alignment, rc, out, err)
+
+            blivetEnv.reset()
+            return _getDeviceList([dev.name for dev in deviceList])
+
+        if alignment:
+            blivetEnv.doIt()
+            return _createAlignedPV(deviceList, alignment)
+
+        for dev in deviceList:
+            lvmpv = blivet.formats.getFormat("lvmpv", device=dev.path)
+            blivetEnv.formatDevice(dev, lvmpv)
+
+        blivet.partitioning.doPartitioning(blivetEnv)
+        return deviceList
+
+    def _createVG(vgName, deviceList, stripeSize=0):
+        if stripeSize:
+            vg = LVMVolumeGroupDevice(
+                vgName, peSize=blivet.size.Size('%s KiB' % stripeSize),
+                parents=deviceList)
+        else:
+            vg = LVMVolumeGroupDevice(vgName, parents=deviceList)
+
+        blivetEnv.createDevice(vg)
+        return vg
+
+    def _createThinPool(poolName, vg, alignment=0,
+                        poolMetaDataSize=0, poolDataSize=0):
+        if not alignment:
+            # bz#1180228: blivet doesn't handle percentage-based sizes properly
+            # Workaround: till the bz gets fixed, use only 99% of the vg size
+            pool = LVMThinPoolDevice(poolName, parents=[vg],
+                                     size=(vg.size * 99 / 100),
+                                     grow=True)
+            blivetEnv.createDevice(pool)
+            return pool
+        else:
+            metaName = "meta-%s" % poolName
+            vgPoolName = "%s/%s" % (vg.name, poolName)
+            metaLv = LVMLogicalVolumeDevice(
+                metaName, parents=[vg],
+                size=blivet.size.Size('%d KiB' % poolMetaDataSize))
+            poolLv = LVMLogicalVolumeDevice(
+                poolName, parents=[vg],
+                size=blivet.size.Size('%d KiB' % poolDataSize))
+            blivetEnv.createDevice(metaLv)
+            blivetEnv.createDevice(poolLv)
+            blivetEnv.doIt()
+
+            # bz#1100514: LVM2 currently only supports physical extent
+            # sizes that are a power of 2. Till that support is available
+            # we need to use lvconvert to achieve that.
+            # bz#1179826: blivet doesn't support lvconvert functionality.
+            # Workaround: till the bz gets fixed, the lvconvert command is
+            # used directly.
+            rc, out, err = utils.execCmd([_lvconvertCommandPath.cmd,
+                                          '--chunksize', '%sK' % alignment,
+                                          '--thinpool', vgPoolName,
+                                          '--poolmetadata',
+                                          "%s/%s" % (vg.name, metaName),
+                                          '--poolmetadataspare', 'n', '-y'])
+
+            if rc:
+                raise ge.GlusterHostStorageDeviceLVConvertFailedException(
+                    vg.path, alignment, rc, out, err)
+
+            rc, out, err = utils.execCmd([_lvchangeCommandPath.cmd,
+                                          '--zero', 'n', vgPoolName])
+            if rc:
+                raise ge.GlusterHostStorageDeviceLVChangeFailedException(
+                    vgPoolName, rc, out, err)
+
+            blivetEnv.reset()
+            return blivetEnv.devicetree.getDeviceByName(poolLv.name)
+
+    vgName = "vg-" + brickName
+    poolName = "pool-" + brickName
+    alignment = 0
+    chunkSize = 0
+    poolDataSize = 0
+    count = 0
+    metaDataSize = DEFAULT_METADATA_SIZE_KB
+    if raidParams.get('type') == '6':
+        count = raidParams['pdCount'] - 2
+        alignment = raidParams['stripeSize'] * count
+        chunkSize = alignment
+    elif raidParams.get('type') == '10':
+        count = raidParams['pdCount'] / 2
+        alignment = raidParams['stripeSize'] * count
+        chunkSize = DEFAULT_CHUNK_SIZE_KB
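+    # Worked example (hypothetical numbers): RAID 6 over 12 physical disks
+    # with a 256 KiB stripe unit leaves count = 10 data disks, so the
+    # alignment (full stripe width) is 256 * 10 = 2560 KiB, which is also
+    # used as the thin pool chunk size.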
+
+    blivetEnv = blivet.Blivet()
+    blivetEnv.reset()
+
+    deviceList = _getDeviceList(devNameList)
+
+    notFoundList = set(devNameList).difference(
+        set([dev.name for dev in deviceList]))
+    if notFoundList:
+        raise ge.GlusterHostStorageDeviceNotFoundException(notFoundList)
+
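+    # _canCreateBrick(dev) is True for a free device, so the list
+    # comprehension yields dev.name for free devices and True for busy
+    # ones; subtracting it from devNameList leaves the in-use names.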
+    inUseList = set(devNameList).difference(set([not _canCreateBrick(
+        dev) or dev.name for dev in deviceList]))
+    if inUseList:
+        raise ge.GlusterHostStorageDeviceInUseException(inUseList)
+
+    pvDeviceList = _makePartition(deviceList)
+    pvDeviceList = _createPV(pvDeviceList, alignment)
+    vg = _createVG(vgName, pvDeviceList, raidParams.get('stripeSize', 0))
+
+    # The following calculation is based on the redhat storage performance doc
+    # http://docbuilder.usersys.redhat.com/22522
+    # /#chap-Configuring_Red_Hat_Storage_for_Enhancing_Performance
+
+    if alignment:
+        vgSizeKib = int(vg.size.convertTo(spec="KiB"))
+        if vg.size.convertTo(spec='MiB') < MIN_VG_SIZE:
+            metaDataSize = vgSizeKib * MIN_METADATA_PERCENT
+        poolDataSize = vgSizeKib - metaDataSize
+        metaDataSize = (metaDataSize - (metaDataSize % alignment))
+        poolDataSize = (poolDataSize - (poolDataSize % alignment))
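+    # Example (hypothetical numbers): with the RAID 6 case above the
+    # alignment is 2560 KiB; a VG of at least 1 TiB keeps the 16 GiB
+    # default metadata size, a smaller VG uses 0.5% of the VG size, and
+    # both sizes are rounded down to a multiple of the alignment.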
+
+    pool = _createThinPool(poolName, vg, chunkSize, metaDataSize, poolDataSize)
+    thinlv = LVMThinLogicalVolumeDevice(brickName, parents=[pool],
+                                        size=pool.size, grow=True)
+    blivetEnv.createDevice(thinlv)
+    blivetEnv.doIt()
+
+    if fsType != DEFAULT_FS_TYPE:
+        log.error("fstype %s is currently unsupported" % fsType)
+        raise ge.GlusterHostStorageDeviceMkfsFailedException(
+            thinlv.path, alignment, raidParams.get('stripeSize', 0), fsType)
+
+    format = blivet.formats.getFormat(DEFAULT_FS_TYPE, device=thinlv.path)
+    if alignment:
+        format._defaultFormatOptions = [
+            "-f", "-K", "-i", "size=512",
+            "-d", "sw=%s,su=%sk" % (count, raidParams.get('stripeSize')),
+            "-n", "size=8192"]
+    blivetEnv.formatDevice(thinlv, format)
+    blivetEnv.doIt()
+
+    try:
+        os.makedirs(mountPoint)
+    except OSError as e:
+        if errno.EEXIST != e.errno:
+            errMsg = "[Errno %s] %s: '%s'" % (e.errno, e.strerror, e.filename)
+            raise ge.GlusterHostStorageDeviceMakeDirsFailedException(
+                err=[errMsg])
+    thinlv.format.setup(mountpoint=mountPoint)
+    blivetEnv.doIt()
+    fstab.FsTab().add(thinlv.path, mountPoint, DEFAULT_FS_TYPE,
+                      mntOpts=[DEFAULT_MOUNT_OPTIONS])
+    return _getDeviceDict(thinlv)
diff --git a/vdsm/rpc/vdsmapi-gluster-schema.json b/vdsm/rpc/vdsmapi-gluster-schema.json
index 0d597ce..e909a00 100644
--- a/vdsm/rpc/vdsmapi-gluster-schema.json
+++ b/vdsm/rpc/vdsmapi-gluster-schema.json
@@ -1258,6 +1258,47 @@
 {'command': {'class': 'GlusterHost', 'name': 'storageDevicesList'},
  'returns': ['StorageDevicesList']}
 
+
+##
+# @RaidDevice:
+#
+# RAID parameters of a storage device.
+#
+# @type:        RAID type
+#
+# @stripeSize:  Stripe size
+#
+# @pdCount:     Physical disk count
+#
+# Since: 4.17.0
+##
+{'type': 'RaidDevice',
+ 'data': {'type': 'str', 'stripeSize': 'int', 'pdCount': 'int'}}
+
+##
+# @GlusterHost.createBrick:
+#
+# Create a brick for the gluster volume
+#
+# @name:       Gluster brick name
+#
+# @mountPoint: Device mount point
+#
+# @devList:    List of devices to be used
+#
+# @fsType:     #optional file system type (currently only xfs is supported)
+#
+# @raidParams: #optional dictionary containing RAID details of the device
+#
+# Returns:
+# Success or failure
+#
+# Since: 4.17.0
+##
+{'command': {'class': 'GlusterHost', 'name': 'createBrick'},
+ 'data': {'name': 'str', 'mountPoint': 'str', 'devList': ['str'],
+          '*fsType': 'str', '*raidParams': 'RaidDevice'},
+ 'returns': 'bool'}
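+
+# Example call (hypothetical values):
+#
+#   GlusterHost.createBrick(name='brick1', mountPoint='/bricks/brick1',
+#                           devList=['vdb', 'vdc'], fsType='xfs',
+#                           raidParams={'type': '6', 'stripeSize': 256,
+#                                       'pdCount': 12})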
+
 ##
 # @GlusterVolumeStatsInfo:
 #


-- 
To view, visit https://gerrit.ovirt.org/39926
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic47c4c56834deb457ae9d038f77bcf69c7b39ba5
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5-gluster
Gerrit-Owner: Darshan N <dnarayan at redhat.com>
Gerrit-Reviewer: Bala.FA <barumuga at redhat.com>
Gerrit-Reviewer: Federico Simoncelli <fsimonce at redhat.com>
Gerrit-Reviewer: Piotr Kliczewski <piotr.kliczewski at gmail.com>
Gerrit-Reviewer: Timothy Asir <tjeyasin at redhat.com>

