Change in vdsm[master]: v2v: Convert VM from external source to Data Domain

Shahar Havivi shavivi at redhat.com
Sun Mar 29 11:17:20 UTC 2015


Shahar Havivi has posted comments on this change.

Change subject: v2v: Convert VM from external source to Data Domain
......................................................................


Patch Set 13:

(13 comments)

https://gerrit.ovirt.org/#/c/37509/13/vdsm/v2v.py
File vdsm/v2v.py:

Line 61: 
Line 62: 
Line 63: class STATUS:
Line 64:     '''
Line 65:     STARTING: request granted and string the import process
> string -> starting
Done
Line 66:     COPYING_DISK: copying disk in progress
Line 67:     ABORTED: user initiated aborted
Line 68:     FAILED: error during import process
Line 69:     DONE: convert process successfully finished


Line 266:         self._uri = uri
Line 267:         self._username = username
Line 268:         self._password = password
Line 269:         self._vm_properties = vm_properties
Line 270:         self.id = job_id
> Why public? This should be read-only, right? Keep it in self._id and add an id() property.
Done
Line 271:         self._cif = cif
Line 272:         self._status = STATUS.STARTING
Line 273:         self._disk_progress = 0
Line 274:         self._disk_count = 1
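
A minimal, self-contained sketch of the read-only id suggested above (standalone example, not the actual vdsm code):

    class ImportVm(object):
        def __init__(self, job_id):
            self._id = job_id      # stored privately, set once at construction

        @property
        def id(self):
            # read-only view of the job id; no setter is defined
            return self._id

Callers keep using job.id as before, but assigning to it raises AttributeError.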


Line 313:             self._import()
Line 314:         except Exception as ex:
Line 315:             self._status = STATUS.FAILED
Line 316:             self._status_msg = ex.message
Line 317:             self._abort()
> All this is correct only when we fail. If the process was aborted, and the exception is the result of the abort, the status should stay ABORTED rather than be overwritten with FAILED.
Done
Line 318:             raise
Line 319:         finally:
Line 320:             self._delete_passwd_file()
Line 321:             self._teardown_volumes()
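
A hedged sketch of the abort-aware handling discussed above; the enclosing method name is not quoted here, so _run is only a placeholder, the rest follows the quoted patch lines:

    def _run(self):
        try:
            self._import()
        except Exception as ex:
            # a user-initiated abort also raises here; keep ABORTED in that case
            if self._status != STATUS.ABORTED:
                self._status = STATUS.FAILED
                self._status_msg = ex.message
                self._abort()
            raise
        finally:
            self._delete_passwd_file()
            self._teardown_volumes()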


Line 332:                     _V2V_DIR,
Line 333:                     '--machine-readable',
Line 334:                     '-os',
Line 335:                     self._prepare_volumes(),
Line 336:                     self._vm_properties['vmName']])
> I would move this part, creating the command, to a single helper - _create_command().
Done
Line 337: 
Line 338:         logging.info('Import vm, (job_id %s) started, cmd: %s', self.id, cmd)
Line 339: 
Line 340:         self._proc = execCmd(cmd, sync=False, deathSignal=15,
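
A rough sketch of pulling the command construction into one helper; the helper name is cut off in the comment above, so _create_command is only an assumed name, and the options that precede _V2V_DIR in the patch are omitted:

    def _create_command(self):
        # all virt-v2v argument handling lives in one place so _import() stays short
        cmd = []
        cmd.extend([_V2V_DIR,
                    '--machine-readable',
                    '-os', self._prepare_volumes(),
                    self._vm_properties['vmName']])
        return cmd

_import() would then just call execCmd(self._create_command(), sync=False, deathSignal=15, ...).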


Line 344:         self._watch_process_output()
Line 345: 
Line 346:         if self._proc.returncode != 0:
Line 347:             raise V2VProcessError("Process failed: %s" %
Line 348:                                   self._proc.stderr.read(1024))
> Add the process returncode to the error message.
Done
Line 349:         self._status = STATUS.DONE
Line 350: 
Line 351:     def _watch_process_output(self):
Line 352:         parser = OutputParser()
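
A small sketch of the failure check with the return code added, as requested (the message wording is only an example):

    if self._proc.returncode != 0:
        raise V2VProcessError('Process failed, return code: %r, stderr: %s' %
                              (self._proc.returncode,
                               self._proc.stderr.read(1024)))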


Line 345: 
Line 346:         if self._proc.returncode != 0:
Line 347:             raise V2VProcessError("Process failed: %s" %
Line 348:                                   self._proc.stderr.read(1024))
Line 349:         self._status = STATUS.DONE
> We need here another logging.info reporting the successful conversion.
Done
Line 350: 
Line 351:     def _watch_process_output(self):
Line 352:         parser = OutputParser()
Line 353:         for event in parser.parse(self._proc.stdout):
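
For the success report, something along these lines just before marking the job DONE (the message text is an example; logging is already used in the patch for the start message):

    logging.info('Import vm (job_id %s) finished successfully', self.id)
    self._status = STATUS.DONE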


Line 350: 
Line 351:     def _watch_process_output(self):
Line 352:         parser = OutputParser()
Line 353:         for event in parser.parse(self._proc.stdout):
Line 354:             if isinstance(event, ImportProgress):
> We need logging.info here when starting to copy new disk.
Done
Line 355:                 self._status = STATUS.COPYING_DISK
Line 356:                 self._disk_progress = 0
Line 357:                 self._current_disk = event.current_disk
Line 358:                 self._disk_count = event.disk_count
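
A sketch of the per-disk report, reusing the event fields shown above (the message wording is only an example):

    if isinstance(event, ImportProgress):
        logging.info('Job %s copying disk %d of %d', self.id,
                     event.current_disk, event.disk_count)
        self._status = STATUS.COPYING_DISK
        self._disk_progress = 0
        self._current_disk = event.current_disk
        self._disk_count = event.disk_count
        self._status_msg = event.description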


Line 356:                 self._disk_progress = 0
Line 357:                 self._current_disk = event.current_disk
Line 358:                 self._disk_count = event.disk_count
Line 359:                 self._status_msg = event.description
Line 360:             elif isinstance(event, DiskProgress):
> Maybe logging.debug during progress? If not for every event, maybe every 10%.
Done
Line 361:                 self._disk_progress = event.progress
Line 362:             else:
Line 363:                 raise InvalidParsingEvent(event)
Line 364: 
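
For the progress branch, a throttled debug log could look like this; the 10% step is an assumption since the exact interval is cut off in the comment above, and it presumes integer percentages from the parser:

    elif isinstance(event, DiskProgress):
        self._disk_progress = event.progress
        if event.progress % 10 == 0:
            logging.debug('Job %s disk %d progress: %d/100', self.id,
                          self._current_disk, event.progress)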


Line 418:         for disk in self._vm_properties['disks']:
Line 419:             parameters.append('--vdsm-image-uuid')
Line 420:             parameters.append(disk['imageID'])
Line 421:             parameters.append('--vdsm-vol-uuid')
Line 422:             parameters.append(disk['volumeID'])
> If imageID or volumeID is missing, the request will fail with a KeyError.
Done
Line 423:             return parameters
Line 424: 
Line 425:     def _prepare_volumes(self):
Line 426:         '''


Line 419:             parameters.append('--vdsm-image-uuid')
Line 420:             parameters.append(disk['imageID'])
Line 421:             parameters.append('--vdsm-vol-uuid')
Line 422:             parameters.append(disk['volumeID'])
Line 423:             return parameters
> This will process only the first disk - move the return out of the loop.
Done
Line 424: 
Line 425:     def _prepare_volumes(self):
Line 426:         '''
Line 427:         method prepare the images and return storage domain mounted path
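
A combined sketch covering both points above, the missing-key check and the misplaced return; the enclosing method name is not quoted here, so _generate_disk_parameters and the ValueError are assumptions:

    def _generate_disk_parameters(self):
        parameters = []
        for disk in self._vm_properties['disks']:
            # fail with a clear error instead of a bare KeyError
            if 'imageID' not in disk or 'volumeID' not in disk:
                raise ValueError('disk is missing imageID or volumeID: %r' % disk)
            parameters.append('--vdsm-image-uuid')
            parameters.append(disk['imageID'])
            parameters.append('--vdsm-vol-uuid')
            parameters.append(disk['volumeID'])
        # return only after every disk was processed
        return parameters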


Line 430:         '''
Line 431:         for disk in self._vm_properties['disks']:
Line 432:             drive = {'poolID': self._vm_properties['poolID'],
Line 433:                      'domainID': self._vm_properties['domainID'],
Line 434:                      'imageID': disk['imageID']}
> You forgot to add volumeID to drive.
Done
Line 435:             res = self._cif.irs.prepareImage(drive['domainID'],
Line 436:                                              drive['poolID'],
Line 437:                                              drive['imageID'],
Line 438:                                              drive['volumeID'])
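
A sketch of the drive dictionary with volumeID included, matching the prepareImage() call quoted above:

    drive = {'poolID': self._vm_properties['poolID'],
             'domainID': self._vm_properties['domainID'],
             'imageID': disk['imageID'],
             'volumeID': disk['volumeID']}
    res = self._cif.irs.prepareImage(drive['domainID'],
                                     drive['poolID'],
                                     drive['imageID'],
                                     drive['volumeID'])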


Line 437:                                              drive['imageID'],
Line 438:                                              drive['volumeID'])
Line 439:             if res['status']['code']:
Line 440:                 self._status_msg = 'Bad volume specification: %s' % drive
Line 441:                 raise VolumeError(drive)
> Why not (as suggested in the previous version):
Done
Line 442:             self._prepared_volumes.append([drive])
Line 443: 
Line 444:         return self._extract_storage_path(self._prepared_volumes[0]['path'])
Line 445: 


Line 478:                         buf += stream.read(1)
Line 479:                     progress = int(self._parse_progress(buf))
Line 480:                     yield DiskProgress(progress)
Line 481:                     if progress == 100:
Line 482:                         break
> Please see comments on this part in the previous version.
Done
Line 483: 
Line 484:     def _parse_line(self, line):
Line 485:         m = self.COPY_DISK_RE.match(line)
Line 486:         if m is None:


-- 
To view, visit https://gerrit.ovirt.org/37509
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I34bd86d5a87ea8c42113c4a732f87ddd4ceab9ea
Gerrit-PatchSet: 13
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Shahar Havivi <shavivi at redhat.com>
Gerrit-Reviewer: Allon Mureinik <amureini at redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken at redhat.com>
Gerrit-Reviewer: Federico Simoncelli <fsimonce at redhat.com>
Gerrit-Reviewer: Francesco Romani <fromani at redhat.com>
Gerrit-Reviewer: Michal Skrivanek <michal.skrivanek at redhat.com>
Gerrit-Reviewer: Nir Soffer <nsoffer at redhat.com>
Gerrit-Reviewer: Piotr Kliczewski <piotr.kliczewski at gmail.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi at redhat.com>
Gerrit-Reviewer: Shahar Havivi <shavivi at redhat.com>
Gerrit-Reviewer: Yaniv Bronhaim <ybronhei at redhat.com>
Gerrit-Reviewer: automation at ovirt.org
Gerrit-Reviewer: oVirt Jenkins CI Server
Gerrit-HasComments: Yes

