Change in vdsm[master]: clientIF: add logs during the recovery

fromani at redhat.com
Fri Jul 17 10:29:35 UTC 2015


Francesco Romani has uploaded a new change for review.

Change subject: clientIF: add logs during the recovery
......................................................................

clientIF: add logs during the recovery

The recovery flow is supposed to run only sporadically, but it
is nevertheless very important.
The current flow is pretty opaque, and on big setups
it may take quite some time, leaving the admin to guess
what is going on.

This patch adds logs during the process, to make it
obvious what has happened, what is going on and what
is left to do.
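
With the patch applied, the recovery log reads roughly like
the following (ordering and counts are just an example;
<vmId> stands for the actual domain UUID):

  recovery: from libvirt got domain 1/2: <vmId>
  recovery: waiting for 2 domains to go up
  recovery: waiting for storage pool to go up
  recovery: preparing paths for vm 1/2: <vmId>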

Change-Id: I31dddf0a2bc760c5ad383ff6bfee9a72adc87c4f
Signed-off-by: Francesco Romani <fromani at redhat.com>
---
M vdsm/clientIF.py
1 file changed, 44 insertions(+), 20 deletions(-)


  git pull ssh://gerrit.ovirt.org:29418/vdsm refs/changes/70/43770/1

diff --git a/vdsm/clientIF.py b/vdsm/clientIF.py
index ef27874..0223cb6 100644
--- a/vdsm/clientIF.py
+++ b/vdsm/clientIF.py
@@ -463,36 +463,55 @@
                       caps.CpuTopology().cores())
             migration.SourceThread.setMaxOutgoingMigrations(mog)
 
-            # Recover
-            for v in getVDSMDomains():
+            # Recover stage 1: domains from libvirt
+            doms = getVDSMDomains()
+            for idx, v in enumerate(doms, 1):
                 vmId = v.UUIDString()
-                if not self._recoverVm(vmId):
+                if self._recoverVm(vmId):
+                    self.log.info(
+                        'recovery: from libvirt got domain %d/%d: %s',
+                        idx, len(doms), vmId)
+                else:
                     # RH qemu proc without recovery
-                    self.log.info('loose qemu process with id: '
-                                  '%s found, killing it.', vmId)
+                    self.log.info(
+                        'recovery: loose qemu process with id: '
+                        '%s found, killing it.', vmId)
                     try:
                         v.destroy()
                     except libvirt.libvirtError:
-                        self.log.error('failed to kill loose qemu '
-                                       'process with id: %s',
-                                       vmId, exc_info=True)
+                        self.log.exception(
+                            'recovery: failed to kill loose qemu '
+                            'process with id: %s', vmId)
 
+            # Recover stage 2: domains from recovery files
             # we do this to safely handle VMs which disappeared
             # from the host while VDSM was down/restarting
             recVms = self._getVDSMVmsFromRecovery()
             if recVms:
-                self.log.warning('Found %i VMs from recovery files not'
+                self.log.warning('recovery: found %i VMs from data files not'
                                  ' reported by libvirt.'
                                  ' This should not happen!'
                                  ' Will try to recover them.', len(recVms))
-            for vmId in recVms:
-                if not self._recoverVm(vmId):
-                    self.log.warning('VM %s failed to recover from recovery'
-                                     ' file, reported as Down', vmId)
+            for idx, vmId in enumerate(recVms, 1):
+                if self._recoverVm(vmId):
+                    self.log.info(
+                        'recovery: from data file domain %d/%d: %s',
+                        idx, len(recVms), vmId)
+                else:
+                    self.log.warning(
+                        'recovery: VM %s failed to recover from data'
+                        ' file, reported as Down', vmId)
 
-            while (self._enabled and
-                   vmstatus.WAIT_FOR_LAUNCH in [v.lastStatus for v in
-                                                self.vmContainer.values()]):
+            # Recover stage 3: waiting for domains to go up
+            while self._enabled:
+                launching = sum(v.lastStatus == vmstatus.WAIT_FOR_LAUNCH
+                                for v in self.vmContainer.values())
+                if not launching:
+                    break
+                else:
+                    self.log.info(
+                        'recovery: waiting for %d domains to go up',
+                        launching)
                 time.sleep(1)
             self._cleanOldFiles()
             self._recovery = False
@@ -503,20 +522,25 @@
             # volumes manipulations
             while self._enabled and self.vmContainer and \
                     not self.irs.getConnectedStoragePoolsList()['poollist']:
+                self.log.info('recovery: waiting for storage pool to go up')
                 time.sleep(5)
 
-            for vmId, vmObj in self.vmContainer.items():
+            vmObjects = self.vmContainer.values()
+            for idx, vmObj in enumerate(vmObjects, 1):
                 # Let's recover as much VMs as possible
                 try:
                     # Do not prepare volumes when system goes down
                     if self._enabled:
+                        self.log.info(
+                            'recovery: preparing paths for vm %d/%d: %s',
+                            idx, len(vmObjects), vmObj.id)
                         vmObj.preparePaths(
                             vmObj.devSpecMapFromConf()[hwclass.DISK])
                 except:
-                    self.log.error("Vm %s recovery failed",
-                                   vmId, exc_info=True)
+                    self.log.exception(
+                        "recovery: failed for vm %s", vmObj.id)
         except:
-            self.log.error("Vm's recovery failed", exc_info=True)
+            self.log.exception("recovery: failed")
             raise
 
     def _getVDSMVmsFromRecovery(self):

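For reference, below is a minimal standalone sketch of the two logging
patterns the patch introduces (per-domain progress and the launch wait
loop). recover_domains, wait_for_launch, recover_vm and vm_container
are hypothetical stand-ins for the clientIF internals, not part of
this change:

  import logging
  import time

  log = logging.getLogger('recovery-sketch')

  WAIT_FOR_LAUNCH = 'WaitForLaunch'  # stand-in for vmstatus.WAIT_FOR_LAUNCH


  def recover_domains(dom_ids, recover_vm):
      # enumerate(..., 1) keeps the "N/total" progress log 1-based.
      for idx, vm_id in enumerate(dom_ids, 1):
          if recover_vm(vm_id):
              log.info('recovery: got domain %d/%d: %s',
                       idx, len(dom_ids), vm_id)
          else:
              log.warning('recovery: domain %s failed to recover', vm_id)


  def wait_for_launch(vm_container, enabled=lambda: True):
      # Poll once per second; log how many domains are still launching,
      # so the admin sees progress instead of a silent wait.
      while enabled():
          launching = sum(vm.lastStatus == WAIT_FOR_LAUNCH
                          for vm in vm_container.values())
          if not launching:
              break
          log.info('recovery: waiting for %d domains to go up', launching)
          time.sleep(1)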

-- 
To view, visit https://gerrit.ovirt.org/43770
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I31dddf0a2bc760c5ad383ff6bfee9a72adc87c4f
Gerrit-PatchSet: 1
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Francesco Romani <fromani at redhat.com>

