Change in vdsm[ovirt-3.5]: monitor: return whether the reported status is actual

laravot at redhat.com
Wed Mar 18 13:47:30 UTC 2015


Hello Adam Litke, Nir Soffer, Allon Mureinik,

I'd like you to do a code review.  Please visit

    https://gerrit.ovirt.org/38874

to review the following change.

Change subject: monitor: return whether the reported status is actual
......................................................................

monitor: return whether the reported status is actual

When the domain monitoring results are reported right after the domain
monitor was started, the first monitoring run may not have completed
yet, which causes the engine to interpret the returned status as OK
(leading to a domain status change).

Changing the initial returned status was attempted in the past, but it
broke the host activation flow in the engine (see change I8e0df), so
the initial results cannot be changed (backward compatibility).
Therefore this change adds a new field indicating whether the reported
status is actual or not.

Given this information, the engine can decide how to act upon the
monitoring result and ignore it if it is not yet relevant.
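
For illustration, here is a minimal sketch of how a consumer of these
results might use the new flag. The handle_domain_vitals and
update_domain_state names are hypothetical; only the 'actual' key is
introduced by this change:

    def handle_domain_vitals(sd_uuid, vitals):
        # vitals is the StorageDomainVitals dict reported for sd_uuid.
        # A result reported before the first monitoring run completes
        # carries the initial placeholder status, not a real check.
        if not vitals.get('actual', True):
            return  # ignore the initial (non-actual) result
        update_domain_state(sd_uuid, valid=vitals['valid'])  # hypothetical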

Change-Id: I1fea518991a76ea0f9ff1ff5258afe95bca2f00d
Bug-Url: https://bugzilla.redhat.com/1183977
Signed-off-by: Liron Aravot <laravot at redhat.com>
Reviewed-on: https://gerrit.ovirt.org/37952
Reviewed-by: Nir Soffer <nsoffer at redhat.com>
Reviewed-by: Allon Mureinik <amureini at redhat.com>
Reviewed-by: Adam Litke <alitke at redhat.com>
---
M vdsm/rpc/vdsmapi-schema.json
M vdsm/storage/domainMonitor.py
M vdsm/storage/hsm.py
A vdsm/storage/monitor.py
4 files changed, 381 insertions(+), 9 deletions(-)


  git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/74/38874/1

diff --git a/vdsm/rpc/vdsmapi-schema.json b/vdsm/rpc/vdsmapi-schema.json
index 7a469e4..d39e5e4 100644
--- a/vdsm/rpc/vdsmapi-schema.json
+++ b/vdsm/rpc/vdsmapi-schema.json
@@ -1709,12 +1709,18 @@
 #              acquired and therefore if it's possible to run (sanlock)
 #              protected VMs
 #
+# @actual:     Indicates whether the returned status is an actual monitoring
+#              result or the initial placeholder reported before the first
+#              monitoring run has completed
+#              (new in version 4.16.13)
+#
 # Since: 4.10.0
 # XXX: Add an enum for return codes and their meanings
 ##
 {'type': 'StorageDomainVitals',
  'data': {'code': 'int', 'delay': 'float', 'lastCheck': 'float',
-          'valid': 'bool', 'version': 'int', 'acquired': 'bool'}}
+          'valid': 'bool', 'version': 'int', 'acquired': 'bool',
+          'actual': 'bool'}}
 
 ##
 # @PathStats:
diff --git a/vdsm/storage/domainMonitor.py b/vdsm/storage/domainMonitor.py
index d9e515c..4d6b9d6 100644
--- a/vdsm/storage/domainMonitor.py
+++ b/vdsm/storage/domainMonitor.py
@@ -35,13 +35,14 @@
         "error", "checkTime", "valid", "readDelay", "masterMounted",
         "masterValid", "diskUtilization", "vgMdUtilization",
         "vgMdHasEnoughFreeSpace", "vgMdFreeBelowThreashold", "hasHostId",
-        "isoPrefix", "version",
+        "isoPrefix", "version", "actual",
     )
 
     def __init__(self):
-        self.clear()
+        self.clear(actual=False)
 
-    def clear(self):
+    def clear(self, actual):
+        self.actual = actual
         self.error = None
         self.checkTime = time()
         self.valid = True
@@ -152,7 +153,6 @@
         self.sdUUID = sdUUID
         self.hostId = hostId
         self.interval = interval
-        self.firstChange = True
         self.status = DomainMonitorStatus()
         self.nextStatus = DomainMonitorStatus()
         self.isIsoDomain = None
@@ -201,7 +201,7 @@
                                "%s", self.hostId, self.sdUUID, exc_info=True)
 
     def _monitorDomain(self):
-        self.nextStatus.clear()
+        self.nextStatus.clear(actual=True)
 
         if time() - self.lastRefresh > self.refreshTime:
             # Refreshing the domain object in order to pick up changes as,
@@ -270,8 +270,6 @@
                 self.log.warn("Could not emit domain state change event",
                               exc_info=True)
 
-        self.firstChange = False
-
         # An ISO domain can be shared by multiple pools
         if (not self.isIsoDomain and self.nextStatus.valid
                 and self.nextStatus.hasHostId is False):
@@ -285,4 +283,5 @@
         self.status.update(self.nextStatus)
 
     def _statusDidChange(self):
-        return self.firstChange or self.status.valid != self.nextStatus.valid
+        return (not self.status.actual or
+                self.status.valid != self.nextStatus.valid)
diff --git a/vdsm/storage/hsm.py b/vdsm/storage/hsm.py
index 2860882..579f5a6 100644
--- a/vdsm/storage/hsm.py
+++ b/vdsm/storage/hsm.py
@@ -3622,6 +3622,7 @@
                     'version': domStatus.version,
                     # domStatus.hasHostId can also be None
                     'acquired': domStatus.hasHostId is True,
+                    'actual': domStatus.actual
                 },
 
                 'disktotal': disktotal,
diff --git a/vdsm/storage/monitor.py b/vdsm/storage/monitor.py
new file mode 100644
index 0000000..196fc10
--- /dev/null
+++ b/vdsm/storage/monitor.py
@@ -0,0 +1,366 @@
+#
+# Copyright 2011 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+#
+# Refer to the README and COPYING files for full details of the license
+#
+
+import logging
+import threading
+import time
+import weakref
+
+from vdsm import utils
+from vdsm.config import config
+
+from . import clusterlock
+from . import misc
+from .sdc import sdCache
+
+
+class Status(object):
+    __slots__ = (
+        "error", "checkTime", "valid", "readDelay", "masterMounted",
+        "masterValid", "diskUtilization", "vgMdUtilization",
+        "vgMdHasEnoughFreeSpace", "vgMdFreeBelowThreashold", "hasHostId",
+        "isoPrefix", "version", "actual",
+    )
+
+    def __init__(self, actual=True):
+        self.actual = actual
+        self.error = None
+        self.checkTime = time.time()
+        self.valid = True
+        self.readDelay = 0
+        self.diskUtilization = (None, None)
+        self.masterMounted = False
+        self.masterValid = False
+        self.hasHostId = False
+        # FIXME : Exposing these breaks abstraction and is not
+        #         needed. Keep exposing for BC. Remove and use
+        #         warning mechanism.
+        self.vgMdUtilization = (0, 0)
+        self.vgMdHasEnoughFreeSpace = True
+        self.vgMdFreeBelowThreashold = True
+        # The iso prefix is computed asynchronously because in any
+        # synchronous operation (e.g.: connectStoragePool, getInfo)
+        # we cannot afford to stop and wait for the iso domain to
+        # report its prefix (it might be unreachable).
+        self.isoPrefix = None
+        self.version = -1
+
+
+class FrozenStatus(Status):
+
+    def __init__(self, other):
+        for name in other.__slots__:
+            value = getattr(other, name)
+            super(FrozenStatus, self).__setattr__(name, value)
+
+    def __setattr__(self, *args):
+        raise AssertionError('%s is readonly' % self)
+
+    __delattr__ = __setattr__
+
+
+class DomainMonitor(object):
+    log = logging.getLogger('Storage.Monitor')
+
+    def __init__(self, interval):
+        self._monitors = {}
+        self._interval = interval
+        self.onDomainStateChange = misc.Event(
+            "Storage.DomainMonitor.onDomainStateChange")
+
+    @property
+    def domains(self):
+        return self._monitors.keys()
+
+    @property
+    def poolDomains(self):
+        return [sdUUID for sdUUID, monitor in self._monitors.items()
+                if monitor.poolDomain]
+
+    def startMonitoring(self, sdUUID, hostId, poolDomain=True):
+        monitor = self._monitors.get(sdUUID)
+
+        if monitor is not None:
+            monitor.poolDomain |= poolDomain
+            return
+
+        self.log.info("Start monitoring %s", sdUUID)
+        monitor = MonitorThread(weakref.proxy(self), sdUUID, hostId,
+                                self._interval)
+        monitor.poolDomain = poolDomain
+        monitor.start()
+        # The domain should be added only after it successfully started
+        self._monitors[sdUUID] = monitor
+
+    def stopMonitoring(self, sdUUIDs):
+        sdUUIDs = frozenset(sdUUIDs)
+        monitors = [monitor for monitor in self._monitors.values()
+                    if monitor.sdUUID in sdUUIDs]
+        self._stopMonitors(monitors)
+
+    def getDomainsStatus(self):
+        for sdUUID, monitor in self._monitors.items():
+            yield sdUUID, monitor.getStatus()
+
+    def getHostStatus(self, domains):
+        status = {}
+        for sdUUID, hostId in domains.iteritems():
+            try:
+                monitor = self._monitors[sdUUID]
+            except KeyError:
+                status[sdUUID] = clusterlock.HOST_STATUS_UNAVAILABLE
+            else:
+                status[sdUUID] = monitor.getHostStatus(hostId)
+        return status
+
+    def close(self):
+        self.log.info("Stop monitoring all domains")
+        self._stopMonitors(self._monitors.values())
+
+    def _stopMonitors(self, monitors):
+        # The domain monitor issues events that might become racy if we
+        # don't wait until a monitor thread exits.
+        # E.g.: when a domain is detached, the domain monitor is stopped
+        # and the host id is released. If the monitor has not actually
+        # exited, it might re-acquire the host id.
+
+        # First stop the monitor threads - this takes no time, and makes
+        # the process about 7 times faster when stopping 30 monitors.
+        for monitor in monitors:
+            self.log.info("Stop monitoring %s", monitor.sdUUID)
+            monitor.stop()
+
+        # Now wait for threads to finish - this takes about 10 seconds with 30
+        # monitors, most of the time spent waiting for sanlock.
+        for monitor in monitors:
+            self.log.debug("Waiting for monitor %s", monitor.sdUUID)
+            monitor.join()
+            try:
+                del self._monitors[monitor.sdUUID]
+            except KeyError:
+                self.log.warning("Montior for %s removed while stopping",
+                                 monitor.sdUUID)
+
+
+class MonitorThread(object):
+    log = logging.getLogger('Storage.Monitor')
+
+    def __init__(self, domainMonitor, sdUUID, hostId, interval):
+        self.thread = threading.Thread(target=self._run)
+        self.thread.setDaemon(True)
+        self.domainMonitor = domainMonitor
+        self.stopEvent = threading.Event()
+        self.domain = None
+        self.sdUUID = sdUUID
+        self.hostId = hostId
+        self.interval = interval
+        self.nextStatus = Status(actual=False)
+        self.status = FrozenStatus(self.nextStatus)
+        self.isIsoDomain = None
+        self.isoPrefix = None
+        self.lastRefresh = time.time()
+        self.refreshTime = \
+            config.getint("irs", "repo_stats_cache_refresh_timeout")
+
+    def start(self):
+        self.thread.start()
+
+    def stop(self):
+        self.stopEvent.set()
+
+    def join(self):
+        self.thread.join()
+
+    def getStatus(self):
+        return self.status
+
+    def getHostStatus(self, hostId):
+        if not self.domain:
+            return clusterlock.HOST_STATUS_UNAVAILABLE
+        return self.domain.getHostStatus(hostId)
+
+    def __canceled__(self):
+        """ Accessed by methods decorated with @util.cancelpoint """
+        return self.stopEvent.is_set()
+
+    @utils.traceback(on=log.name)
+    def _run(self):
+        self.log.debug("Domain monitor for %s started", self.sdUUID)
+        try:
+            self._monitorLoop()
+        finally:
+            self.log.debug("Domain monitor for %s stopped", self.sdUUID)
+            if self._shouldReleaseHostId():
+                self._releaseHostId()
+
+    def _monitorLoop(self):
+        while not self.stopEvent.is_set():
+            try:
+                self._monitorDomain()
+            except utils.Canceled:
+                self.log.debug("Domain monitor for %s canceled", self.sdUUID)
+                return
+            except:
+                self.log.exception("Domain monitor for %s failed", self.sdUUID)
+            self.stopEvent.wait(self.interval)
+
+    def _monitorDomain(self):
+        self.nextStatus = Status()
+
+        # Pick up changes in the domain, for example, domain upgrade.
+        if self._shouldRefreshDomain():
+            self._refreshDomain()
+
+        try:
+            # We should produce the domain inside the monitoring loop because
+            # it might take some time and we don't want to slow down the
+            # thread start (or anything else that relies on it, for example
+            # updateMonitoringThreads). It also might fail, and we want to
+            # keep trying until we succeed or the domain is deactivated.
+            if self.domain is None:
+                self._produceDomain()
+
+            # The isIsoDomain assignment is delayed because the isoPrefix
+            # discovery might fail (if the domain suddenly disappears) and we
+            # would risk never trying to set it again.
+            if self.isIsoDomain is None:
+                self._setIsoDomainInfo()
+
+            self._performDomainSelftest()
+            self._checkReadDelay()
+            self._collectStatistics()
+        except Exception as e:
+            self.log.exception("Error monitoring domain %s", self.sdUUID)
+            self.nextStatus.error = e
+
+        self.nextStatus.checkTime = time.time()
+        self.nextStatus.valid = (self.nextStatus.error is None)
+
+        if self._statusDidChange():
+            self._notifyStatusChanges()
+
+        if self._shouldAcquireHostId():
+            self._acquireHostId()
+
+        self.status = FrozenStatus(self.nextStatus)
+
+    # Notifying status changes
+
+    def _statusDidChange(self):
+        return (not self.status.actual or
+                self.status.valid != self.nextStatus.valid)
+
+    @utils.cancelpoint
+    def _notifyStatusChanges(self):
+        self.log.info("Domain %s became %s", self.sdUUID,
+                      "VALID" if self.nextStatus.valid else "INVALID")
+        try:
+            self.domainMonitor.onDomainStateChange.emit(
+                self.sdUUID, self.nextStatus.valid)
+        except:
+            self.log.exception("Error notifying state change for domain %s",
+                               self.sdUUID)
+
+    # Refreshing domain
+
+    def _shouldRefreshDomain(self):
+        return time.time() - self.lastRefresh > self.refreshTime
+
+    @utils.cancelpoint
+    def _refreshDomain(self):
+        self.log.debug("Refreshing domain %s", self.sdUUID)
+        sdCache.manuallyRemoveDomain(self.sdUUID)
+        self.lastRefresh = time.time()
+
+    # Deferred initialization
+
+    @utils.cancelpoint
+    def _produceDomain(self):
+        self.log.debug("Producing domain %s", self.sdUUID)
+        self.domain = sdCache.produce(self.sdUUID)
+
+    @utils.cancelpoint
+    def _setIsoDomainInfo(self):
+        isIsoDomain = self.domain.isISO()
+        if isIsoDomain:
+            self.log.debug("Domain %s is an ISO domain", self.sdUUID)
+            self.isoPrefix = self.domain.getIsoDomainImagesDir()
+        self.isIsoDomain = isIsoDomain
+
+    # Collecting monitoring info
+
+    @utils.cancelpoint
+    def _performDomainSelftest(self):
+        # This may trigger a refresh of the lvm cache. We have seen this take
+        # up to 90 seconds on overloaded machines.
+        self.domain.selftest()
+
+    @utils.cancelpoint
+    def _checkReadDelay(self):
+        # This may block for a long time if the storage server is not
+        # accessible. On overloaded machines we have seen this take up to
+        # 15 seconds.
+        self.nextStatus.readDelay = self.domain.getReadDelay()
+
+    def _collectStatistics(self):
+        stats = self.domain.getStats()
+        self.nextStatus.diskUtilization = (stats["disktotal"],
+                                           stats["diskfree"])
+
+        self.nextStatus.vgMdUtilization = (stats["mdasize"],
+                                           stats["mdafree"])
+
+        self.nextStatus.vgMdHasEnoughFreeSpace = stats["mdavalid"]
+        self.nextStatus.vgMdFreeBelowThreashold = stats["mdathreshold"]
+
+        masterStats = self.domain.validateMaster()
+        self.nextStatus.masterValid = masterStats['valid']
+        self.nextStatus.masterMounted = masterStats['mount']
+
+        self.nextStatus.hasHostId = self.domain.hasHostId(self.hostId)
+        self.nextStatus.isoPrefix = self.isoPrefix
+        self.nextStatus.version = self.domain.getVersion()
+
+    # Managing host id
+
+    def _shouldAcquireHostId(self):
+        # An ISO domain can be shared by multiple pools
+        return (not self.isIsoDomain and
+                self.nextStatus.valid and
+                self.nextStatus.hasHostId is False)
+
+    def _shouldReleaseHostId(self):
+        # If this is an ISO domain, we did not acquire the host id, so
+        # releasing it is superfluous.
+        return self.domain and not self.isIsoDomain
+
+    @utils.cancelpoint
+    def _acquireHostId(self):
+        try:
+            self.domain.acquireHostId(self.hostId, async=True)
+        except:
+            self.log.exception("Error acquiring host id %s for domain %s",
+                               self.hostId, self.sdUUID)
+
+    def _releaseHostId(self):
+        try:
+            self.domain.releaseHostId(self.hostId, unused=True)
+        except:
+            self.log.exception("Error releasing host id %s for domain %s",
+                               self.hostId, self.sdUUID)
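
For reviewers, a minimal usage sketch of the new monitor module above.
The import path and the sdUUID/hostId variables are assumed, not part
of the change; DomainMonitor, startMonitoring and getDomainsStatus are
taken from the diff:

    from storage.monitor import DomainMonitor  # import path assumed

    mon = DomainMonitor(interval=10)
    mon.startMonitoring(sdUUID, hostId)  # sdUUID/hostId assumed defined
    # Later, once the monitor thread has had a chance to run:
    for sd_uuid, status in mon.getDomainsStatus():
        if not status.actual:
            # First monitoring run not completed yet; status.valid
            # still holds the initial placeholder value (True).
            continue
        print sd_uuid, "VALID" if status.valid else "INVALID"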


-- 
To view, visit https://gerrit.ovirt.org/38874
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I1fea518991a76ea0f9ff1ff5258afe95bca2f00d
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: ovirt-3.5
Gerrit-Owner: Liron Aravot <laravot at redhat.com>
Gerrit-Reviewer: Adam Litke <alitke at redhat.com>
Gerrit-Reviewer: Allon Mureinik <amureini at redhat.com>
Gerrit-Reviewer: Nir Soffer <nsoffer at redhat.com>

