Change in vdsm[master]: Change safelease APIs to match sanlock flow

Federico Simoncelli fsimonce at redhat.com
Wed Nov 9 10:09:11 UTC 2011


Federico Simoncelli has posted comments on this change.

Change subject: Change safelease APIs to match sanlock flow
......................................................................


Patch Set 5: (6 inline comments)

....................................................
File vdsm/storage/blockSD.py
Line 631:         self.log.info("META MAPPING: %s" % meta)
Line 632:         return meta
Line 633: 
Line 634:     def _getIdsFilePath(self):
Line 635:         lvm.activateLVs(self.sdUUID, [sd.IDS])
I assume that, as with _getLeasesFilePath (below) and following commit ae5dc0b3 and bz712829, we also need to make sure that the LV is active here; see the sketch after this hunk.
Line 636:         return lvm.lvPath(self.sdUUID, sd.IDS)
Line 637: 
Line 638:     def _getLeasesFilePath(self):
Line 639:         lvm.activateLVs(self.sdUUID, [sd.LEASES])

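For reference, this is the shape both helpers would take with the activation in place (a sketch based only on the calls quoted above; lvm and sd are the existing vdsm modules, and the body of _getLeasesFilePath is cut off in this hunk, so its return line is my assumption):

    def _getIdsFilePath(self):
        # Make sure the special "ids" LV is active before handing out
        # its path (same rationale as commit ae5dc0b3 / bz712829).
        lvm.activateLVs(self.sdUUID, [sd.IDS])
        return lvm.lvPath(self.sdUUID, sd.IDS)

    def _getLeasesFilePath(self):
        # Same treatment for the special "leases" LV.
        lvm.activateLVs(self.sdUUID, [sd.LEASES])
        return lvm.lvPath(self.sdUUID, sd.LEASES)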

....................................................
File vdsm/storage/sp.py
Line 612: 
Line 613:         try:
Line 614:             # Seeing as we are just creating the pool then the host doesn't
Line 615:             # have an assigned Id for this pool.  When locking the domain we must use an Id
Line 616:             self.id = 250
Done
Line 617:             # Master domain is unattached and all changes to unattached domains
Line 618:             # must be performed under storage lock
Line 619:             msd = sdCache.produce(msdUUID)
Line 620:             msd.changeLeaseParams(safeLease)

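Schematically, the creation flow pairs this temporary id with the release in the finally block quoted further down (a sketch stitched together from the fragments in this review, not the patch verbatim; the ordering, host id before cluster lock, is the point):

    self.id = 250                    # temporary host id for pool creation
    msd = sdCache.produce(msdUUID)
    msd.changeLeaseParams(safeLease)
    msd.acquireHostId(self.id)       # join the master domain's lockspace
    msd.acquireClusterLock(self.id)  # only then take the cluster lock
    try:
        ...                          # create the pool metadata
    finally:
        msd.releaseClusterLock()
        msd.releaseHostId(self.id)   # leave the lockspace last
        self.id = None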

Line 667:                 raise
Line 668:         finally:
Line 669:             # Releasing the host id on the attached domains
Line 670:             for sdUUID in attachedDomList:
Line 671:                 sdCache.produce(sdUUID).releaseHostId(self.id)
The host id must be acquired on each lockspace (=domain) before any volume lock can be acquired there; the acquire side is sketched after this hunk.
Line 672: 
Line 673:             msd.releaseClusterLock()
Line 674:             msd.releaseHostId(self.id)
Line 675:             self.id = None

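In other words, the finally block above is the mirror of an acquire loop along these lines (a sketch using the same per-domain API quoted in this patch):

    # Acquire side: register our host id in every attached domain's
    # lockspace before touching the volume leases there.
    for sdUUID in attachedDomList:
        sdCache.produce(sdUUID).acquireHostId(self.id)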

Line 716:         # Rebuild whole Pool
Line 717:         self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
Line 718:         self.__createMailboxMonitor()
Line 719: 
Line 720:         # Acquire lock on each domain
As above, we need the host id on each lockspace (=domain); see the sketch after this hunk.
Line 721:         for sdUUID in self.getDomains():
Line 722:             try:
Line 723:                 sdCache.produce(sdUUID).acquireHostId(self.id)
Line 724:             except:

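For context, acquireHostId presumably boils down to joining the domain's sanlock lockspace, which is backed by the ids file/LV; a minimal sketch with the sanlock python binding (the exact wiring is an assumption on my part, not what this patch does verbatim):

    import sanlock

    def acquireHostId(self, hostId):
        # A host id must be live in the lockspace before any resource
        # (volume lease) can be acquired in it.
        sanlock.add_lockspace(self.sdUUID, hostId, self._getIdsFilePath())

    def releaseHostId(self, hostId):
        sanlock.rem_lockspace(self.sdUUID, hostId, self._getIdsFilePath())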

Line 838:             futureMaster.changeLeaseParams(safeLease)
Line 839:             futureMaster.acquireClusterLock(self.id)
Line 840:             try:
Line 841:                 self.createMaster(poolName, futureMaster, masterVersion, safeLease)
Line 842:                 self.refresh(msdUUID=msdUUID, masterVersion=masterVersion)
I'll move the shared code to a new method, _acquireTemporaryClusterLock; a possible shape is sketched after this hunk.
Line 843: 
Line 844:                 # TBD: Run full attachSD?
Line 845:                 domains = self.getDomains()
Line 846:                 for sdUUID in domDict:

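A possible shape for that helper, wrapping the shared fragment quoted at lines 613-620 (the signature and the release counterpart are my own; sketch only):

    def _acquireTemporaryClusterLock(self, msdUUID, safeLease):
        # No host id is assigned for this pool yet, so join the master
        # domain's lockspace with a temporary one before locking.
        self.id = 250
        msd = sdCache.produce(msdUUID)
        msd.changeLeaseParams(safeLease)
        msd.acquireHostId(self.id)
        msd.acquireClusterLock(self.id)
        return msd

    def _releaseTemporaryClusterLock(self, msd):
        # Hypothetical mirror of the finally block at lines 673-675.
        msd.releaseClusterLock()
        msd.releaseHostId(self.id)
        self.id = None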

Line 1075:                 if sdUUID == self.masterDomain.sdUUID:
Line 1076:                     self.masterMigrate(sdUUID, msdUUID, masterVersion, __securityOverride=True)
Line 1077: 
Line 1078:                 # Remove pool info from domain metadata
Line 1079:                 dom.releaseHostId(self.id)
Done
Line 1080:                 dom.detach(self.spUUID)
Line 1081: 
Line 1082:                 # Remove domain from pool metadata
Line 1083:                 del domList[sdUUID]


--
To view, visit http://gerrit.usersys.redhat.com/954
To unsubscribe, visit http://gerrit.usersys.redhat.com/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I39d5410ad76b22b037e8cac7e667817fb9ef56f6
Gerrit-PatchSet: 5
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Federico Simoncelli <fsimonce at redhat.com>
Gerrit-Reviewer: Ayal Baron
Gerrit-Reviewer: Dan Kenigsberg <danken at redhat.com>
Gerrit-Reviewer: Federico Simoncelli <fsimonce at redhat.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi at redhat.com>

