Change in vdsm[master]: v2v: Convert VM from external source to Data Domain

nsoffer at redhat.com nsoffer at redhat.com
Thu Mar 26 10:44:06 UTC 2015


Nir Soffer has posted comments on this change.

Change subject: v2v: Convert VM from external source to Data Domain
......................................................................


Patch Set 12:

(25 comments)

https://gerrit.ovirt.org/#/c/37509/12/vdsm/v2v.py
File vdsm/v2v.py:

Line 61: 
Line 62: 
Line 63: class STATUS:
Line 64:     INITIALIZING = 'initializing'
Line 65:     STARTING = 'starting'
> Do we really need to differentiate between "initializing" and "starting"? W
We don't
Line 66:     DONE = 'done'
Line 67:     ABORT = 'abort'
Line 68:     ERROR = 'error'
Line 69:     DISK_COPY = 'copy'


Line 133:                                                  passwd=password)
Line 134:     except libvirt.libvirtError as e:
Line 135:         logging.error('error connection to hypervisor: %r', e.message)
Line 136:         return {'status': {'code': errCode['V2VConnection']['status']['code'],
Line 137:                 'message': e.message}}
Not related, please remove this change.
Line 138: 
Line 139:     with closing(conn):
Line 140:         vms = []
Line 141:         for vm in conn.listAllDomains():


Line 168:     try:
Line 169:         job = _get_job(jobId)
Line 170:         _validate_job_done(job)
Line 171:         ovf = _read_ovf(jobId)
Line 172:     except V2VJobError as e:
We should log the error here, so we don't need to log every failure in the called functions.
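For example, an untested sketch of how this handler could look:

    except V2VJobError as e:
        logging.error('Error getting converted vm: %s', e)
        return errCode[e.err_name]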
Line 173:         return errCode[e.err_name]
Line 174:     return {'status': doneCode, 'ovf': ovf}
Line 175: 
Line 176: 


Line 178:     try:
Line 179:         job = _get_job(jobId)
Line 180:         _validate_job_finished(job)
Line 181:         _remove_job(jobId)
Line 182:     except V2VJobError as e:
Log the error.
Line 183:         return errCode[e.err_name]
Line 184:     return {'status': doneCode}
Line 185: 
Line 186: 


Line 188:     try:
Line 189:         job = _get_job(jobId)
Line 190:         job.abort()
Line 191:         _remove_job(jobId)
Line 192:     except V2VJobError as e:
Log the error.
Line 193:         return errCode[e.err_name]
Line 194:     return {'status': doneCode}
Line 195: 
Line 196: 


Line 210: 
Line 211: def _get_job(id):
Line 212:     with _lock:
Line 213:         if id not in _jobs:
Line 214:             raise NoSuchJob()
Error message:

    raise NoSuchJob("No such job %r" % id)
Line 215:         return _jobs[id]
Line 216: 
Line 217: 
Line 218: def _add_job(id, job):


Line 217: 
Line 218: def _add_job(id, job):
Line 219:     with _lock:
Line 220:         if id in _jobs:
Line 221:             raise JobExists()
Error message:

    raise JobExists("Job %r exists" % id)
Line 222:         _jobs[id] = job
Line 223: 
Line 224: 
Line 225: def _remove_job(id):


Line 224: 
Line 225: def _remove_job(id):
Line 226:     with _lock:
Line 227:         if id not in _jobs:
Line 228:             raise NoSuchJob()
Error message:

    raise NoSuchJob("No such job %r" % id)
Line 229:         del _jobs[id]
Line 230: 
Line 231: 
Line 232: def _validate_job_done(job):


Line 230: 
Line 231: 
Line 232: def _validate_job_done(job):
Line 233:     if job.status != STATUS.DONE:
Line 234:         raise JobNotDone()
Please use an error message:

    raise JobNotDone("Job %r is %s" % (job.id, job.status))
Line 235: 
Line 236: 
Line 237: def _validate_job_finished(job):
Line 238:     if job.status not in (STATUS.DONE, STATUS.ERROR, STATUS.ABORT):


Line 235: 
Line 236: 
Line 237: def _validate_job_finished(job):
Line 238:     if job.status not in (STATUS.DONE, STATUS.ERROR, STATUS.ABORT):
Line 239:         raise JobNotDone()
Please use an error message:

    raise JobNotDone("Job %r is %s" % (job.id, job.status))
Line 240: 
Line 241: 
Line 242: def _read_ovf(jobId):
Line 243:     file_name = os.path.join(_V2V_DIR, "%s.ovf" % jobId)


Line 248:         if e.errno == errno.ENOENT:
Line 249:             raise NoSuchOvf()
Line 250:         else:
Line 251:             logging.error('Error reading file "%r" error: %s message %s',
Line 252:                           file_name, e.errno, e.message)
We don't have to log here; logging should be done at the upper level, once for all errors.
Line 253:             raise
Line 254: 
Line 255: 
Line 256: class ImportVm(object):


Line 249:             raise NoSuchOvf()
Line 250:         else:
Line 251:             logging.error('Error reading file "%r" error: %s message %s',
Line 252:                           file_name, e.errno, e.message)
Line 253:             raise
Better switch the order (as I suggested in a previous version):

    if e.errno != errno.ENOENT:
        raise  # Bug
    raise NoSuchOvf("No such ovf %r" % path)

NoSuchOvf is an expected error - for example, when engine sends multiple requests after a disconnect - and should not cause a traceback. Anything else is a bug in the code or a problem in the environment, and it would be better to fail loudly.
Line 254: 
Line 255: 
Line 256: class ImportVm(object):
Line 257:     ABORT_RETRIES = 3


Line 256: class ImportVm(object):
Line 257:     ABORT_RETRIES = 3
Line 258:     ABORT_DELAY = 10
Line 259: 
Line 260:     def __init__(self, uri, username, password, vmProperties, jobId, cif):
vmProperties -> vm_properties
Line 261:         self._uri = uri
Line 262:         self._username = username
Line 263:         self._password = password
Line 264:         self._vmProperties = vmProperties


Line 260:     def __init__(self, uri, username, password, vmProperties, jobId, cif):
Line 261:         self._uri = uri
Line 262:         self._username = username
Line 263:         self._password = password
Line 264:         self._vmProperties = vmProperties
self._vmProperties -> self._vm_properties
Line 265:         self.id = jobId
Line 266:         self._cif = cif
Line 267:         self._status = STATUS.INITIALIZING
Line 268:         self._disk_progress = 0


Line 295:         ''' progress is part of multiple disk_progress its
Line 296:             flat and not 100% accurate - each disk take its
Line 297:             portion ie if we have 2 disks the first will take
Line 298:             0-50 and the second 50-100
Line 299:         '''
Use standard docstring:

    """
    description...
    """
Line 300:         completed = (self._disk_count - 1) * 100
Line 301:         return (completed + self._disk_progress) / self._disk_count
Line 302: 
Line 303:     @traceback(msg="Error importing vm")


Line 338:         self._proc.blocking = True
Line 339:         self._watch_process_output()
Line 340: 
Line 341:         if self._proc.returncode != 0:
Line 342:             self._status = STATUS.ERROR
> ah, /me missed the following line. Just add the returncode to the raised ex
+1
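Something like this (untested sketch):

    raise V2VProcessError("Process failed (rc=%d): %s" %
                          (self._proc.returncode,
                           self._proc.stderr.read(5120)))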
Line 343:             raise V2VProcessError("Process failed: %s" %
Line 344:                                   self._proc.stderr.read(5120))
Line 345:         self._status = STATUS.DONE
Line 346: 


Line 409:             logging.error("Error killing virt-v2v (pid: %d)", self._proc.pid)
Line 410:             zombiereaper.autoReapPID(self._proc.pid)
Line 411: 
Line 412:     def _generate_disk_parameters(self):
Line 413:         images = []
images -> parameters
Line 414:         for disk in self._vmProperties['disks']:
Line 415:             if 'imageID' in disk:
Line 416:                 images.append('--vdsm-image-uuid')
Line 417:                 images.append(disk['imageID'])


Line 416:                 images.append('--vdsm-image-uuid')
Line 417:                 images.append(disk['imageID'])
Line 418:             if 'volumeID' in disk:
Line 419:                 images.append('--vdsm-vol-uuid')
Line 420:                 images.append(disk['volumeID'])
Is it possible that a disk does not contain both 'imageID' and 'volumeID'?

How is v2v going to import the vm in this case?
Line 421:             return images
Line 422: 
Line 423:     def _prepare_volumes(self):
Line 424:         ''' method prepare the images and return storage domain mounted path


Line 422: 
Line 423:     def _prepare_volumes(self):
Line 424:         ''' method prepare the images and return storage domain mounted path
Line 425:             since all images are in the same domain we return arbitrary image
Line 426:             for the return path (res['path']) '''
This is a non-standard docstring format. Please use the common format in vdsm code:

    def name():
        """
        description
        """
Line 427:         for disk in self._vmProperties['disks']:
Line 428:             res = self._cif.irs.prepareImage(self._vmProperties['domainID'],
Line 429:                                              self._vmProperties['poolID'],
Line 430:                                              disk['imageID'], disk['volumeID'])


Line 429:                                              self._vmProperties['poolID'],
Line 430:                                              disk['imageID'], disk['volumeID'])
Line 431:             drive = {'poolID': self._vmProperties['poolID'],
Line 432:                      'domainID': self._vmProperties['domainID'],
Line 433:                      'imageID': disk['imageID']}
We are copying values from _vmProperties twice for no reason. Let's create a drive dict and use it:

    drive = {'poolID': self._vmProperties['poolID'],
             'domainID': self._vmProperties['domainID'],
             'imageID': disk['imageID'],
             'volumeID': disk['volumeID']}

Note that we don't need to have 'volumeID' in drive, but keeping it will make better logs later if tearing down the volume fails.

    res = self._cif.irs.prepareImage(
        drive['domainID'], drive['poolID'], drive['imageID'], drive['volumeID'])
Line 434:             if res['status']['code']:
Line 435:                 self._status = STATUS.ERROR
Line 436:                 self._status_msg = 'Bad volume specification: %s' % drive
Line 437:                 raise VolumeError(drive)


Line 433:                      'imageID': disk['imageID']}
Line 434:             if res['status']['code']:
Line 435:                 self._status = STATUS.ERROR
Line 436:                 self._status_msg = 'Bad volume specification: %s' % drive
Line 437:                 raise VolumeError(drive)
Please eliminate all lines setting self._status and self._status_msg.

Create the exception with a good description of the error, and raise it. self._status and self._status_msg are set in _run's exception handler.

    raise VolumeError('Bad volume specification: %s' % drive)

(And remove the now unneeded VolumeError.__str__)
Line 438:             self._preparedVolumes.append([drive])
Line 439: 
Line 440:         return self._extract_storage_path(self._preparedVolumes[0]['path'])
Line 441: 


Line 440:         return self._extract_storage_path(self._preparedVolumes[0]['path'])
Line 441: 
Line 442:     def _extract_storage_path(self, path):
Line 443:         ''' prepareImage returns /prefix/sdUUID/images/imgUUID/volUUID
Line 444:             we need storage domain absolute path so we go up 3 levels '''
Use standard docstring (see above)
Line 445:         path = os.path.split(path)
Line 446:         path = os.path.split(path[0])
Line 447:         return os.path.split(path[0])[0]
Line 448: 


Line 443:         ''' prepareImage returns /prefix/sdUUID/images/imgUUID/volUUID
Line 444:             we need storage domain absolute path so we go up 3 levels '''
Line 445:         path = os.path.split(path)
Line 446:         path = os.path.split(path[0])
Line 447:         return os.path.split(path[0])[0]
This works, but is more complicated than needed - why not:

    path.rsplit(os.sep, 3)[0]
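For example, with a made-up path (the names are illustrative):

    >>> path = '/rhev/data-center/mnt/server:_export/sd_uuid/images/img_uuid/vol_uuid'
    >>> path.rsplit(os.sep, 3)[0]
    '/rhev/data-center/mnt/server:_export/sd_uuid'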
Line 448: 
Line 449:     def _teardown_volumes(self):
Line 450:         for drive in self._preparedVolumes:
Line 451:             try:


Line 450:         for drive in self._preparedVolumes:
Line 451:             try:
Line 452:                 self._cif.teardownImage(drive['domainID'],
Line 453:                                         drive['poolID'],
Line 454:                                         drive['imageID'])
There is no such method. You mean:

    self._cif.irs.teardownImage(...)
Line 455:             except Exception as e:
Line 456:                 logging.error('Error teardownVolumePath: %s', e)
Line 457: 
Line 458: 


Line 452:                 self._cif.teardownImage(drive['domainID'],
Line 453:                                         drive['poolID'],
Line 454:                                         drive['imageID'])
Line 455:             except Exception as e:
Line 456:                 logging.error('Error teardownVolumePath: %s', e)
We don't call teardownVolumePath here.

Let's use a nicer message:

    Error tearing down drive: %s
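For example (a minimal sketch) - using logging.exception here also keeps the traceback:

    except Exception:
        logging.exception("Error tearing down drive: %s", drive)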
Line 457: 
Line 458: 
Line 459: class OutputParser(object):
Line 460:     COPY_DISK_RE = re.compile(r'.*(Copying disk (\d+)/(\d+)).*')


-- 
To view, visit https://gerrit.ovirt.org/37509
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I34bd86d5a87ea8c42113c4a732f87ddd4ceab9ea
Gerrit-PatchSet: 12
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Shahar Havivi <shavivi at redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken at redhat.com>
Gerrit-Reviewer: Federico Simoncelli <fsimonce at redhat.com>
Gerrit-Reviewer: Francesco Romani <fromani at redhat.com>
Gerrit-Reviewer: Michal Skrivanek <michal.skrivanek at redhat.com>
Gerrit-Reviewer: Nir Soffer <nsoffer at redhat.com>
Gerrit-Reviewer: Piotr Kliczewski <piotr.kliczewski at gmail.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi at redhat.com>
Gerrit-Reviewer: Shahar Havivi <shavivi at redhat.com>
Gerrit-Reviewer: Yaniv Bronhaim <ybronhei at redhat.com>
Gerrit-Reviewer: automation at ovirt.org
Gerrit-Reviewer: oVirt Jenkins CI Server
Gerrit-HasComments: Yes


More information about the vdsm-patches mailing list