Change in vdsm[master]: v2v: Convert VM from external source to Data Domain

shavivi at redhat.com
Thu Mar 12 11:39:04 UTC 2015


Shahar Havivi has posted comments on this change.

Change subject: v2v: Convert VM from external source to Data Domain
......................................................................


Patch Set 9:

(31 comments)

https://gerrit.ovirt.org/#/c/37509/9/vdsm/v2v.py
File vdsm/v2v.py:

Line 134: def abort_job(jobId):
Line 135:     global _jobs
Line 136:     with _lock:
Line 137:         if jobId not in _jobs:
Line 138:             return errCode['invalidV2VJob']
> This code is repeated:
I will do it in the next version of this patch (see the sketch below).
Line 139:         t = _jobs[jobId]
Line 140:         t.abort_job()
Line 141:         del _jobs[jobId]
Line 142: 
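
A minimal sketch of the deduplication, assuming the module-level _jobs dict and _lock from this patch; _remove_job() is a hypothetical helper that the other job entry points could share:

def _remove_job(job_id):
    # pop the job under the lock; KeyError if the id is unknown
    with _lock:
        return _jobs.pop(job_id)


def abort_job(job_id):
    try:
        job = _remove_job(job_id)
    except KeyError:
        return errCode['invalidV2VJob']
    job.abort()
    return {'status': doneCode}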


Line 135:     global _jobs
Line 136:     with _lock:
Line 137:         if jobId not in _jobs:
Line 138:             return errCode['invalidV2VJob']
Line 139:         t = _jobs[jobId]
> Please rename "t" to "job"
Done
Line 140:         t.abort_job()
Line 141:         del _jobs[jobId]
Line 142: 
Line 143:     return {'status': doneCode}


Line 151:             ret[jobId] = {
Line 152:                 'status': jobValue.status,
Line 153:                 'status_msg': jobValue.status_msg,
Line 154:                 'progress': jobValue.progress
Line 155:             }
> You are again holding the lock for more time than needed.
Done
Line 156:     return ret
Line 157: 
Line 158: 
Line 159: class ImportVm():
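
A minimal sketch of the narrowed critical section, assuming the module-level _jobs and _lock from this patch (get_jobs_status() stands in for the reporting function quoted above): take a snapshot under the lock and build the reply outside it.

def get_jobs_status():
    with _lock:
        items = tuple(_jobs.items())   # snapshot taken while holding the lock
    ret = {}
    for job_id, job in items:          # format the reply without the lock
        ret[job_id] = {
            'status': job.status,
            'status_msg': job.status_msg,
            'progress': job.progress,
        }
    return ret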


Line 167:     process (ie job) is via getVdsStats() with the fields progress and status.
Line 168:     progress is a number which represent percentage of a single disk copy,
Line 169:     status is a way to feedback information on the job (init, error etc)
Line 170:     when the job is done a call to get_converted_vm will return the VMs OVF and
Line 171:     delete the job.
> Nice - but I think this should be at the module level, not in this class.
Done
Line 172:     """
Line 173:     def __init__(self, uri, username, password, vmProperties, jobId, cif):
Line 174:         self._thread = threading.Thread(target=self._run)
Line 175:         self._thread.daemon = True


Line 178:         self._password = password
Line 179:         self._vmProperties = vmProperties
Line 180:         self._job_id = jobId
Line 181:         self._cif = cif
Line 182:         self._status = _STATUS_INITIALIZING
> The job is not really initializing anything in this state - right?
Technically you're right, but there is no difference until we start the copy, so 'initializing' is fine for this phase.
Line 183:         self._progress = 0
Line 184:         self._abort = False
Line 185:         self._status_msg = ''
Line 186: 


Line 179:         self._vmProperties = vmProperties
Line 180:         self._job_id = jobId
Line 181:         self._cif = cif
Line 182:         self._status = _STATUS_INITIALIZING
Line 183:         self._progress = 0
> But we do not have this info - right?
That is what I plan for the next patch:
disk_progress for the current disk, and progress for the total across all disks.
This will be an estimate rather than an accurate value, since two disks of different sizes will scale differently (see the sketch below).
Michal, that is what you last suggested - but we may have a different problem here, since inspecting the disks at the beginning may take some time while we are still at zero progress.
We cannot scale that phase because we don't know how long it will take, and it is a blocking read() call.
Line 184:         self._abort = False
Line 185:         self._status_msg = ''
Line 186: 
Line 187:     def start(self):
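
A minimal sketch of that estimate, assuming equal weight per disk (the disk sizes are not known up front, which is why two disks of different sizes scale differently); _estimate_progress() is a hypothetical helper:

def _estimate_progress(disk_count, current_disk, disk_progress):
    """current_disk is 1-based; disk_progress is 0-100 for that disk."""
    completed = current_disk - 1
    return (completed * 100 + disk_progress) // disk_count

# e.g. the second of two disks at 40% reports (100 + 40) // 2 = 70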


Line 180:         self._job_id = jobId
Line 181:         self._cif = cif
Line 182:         self._status = _STATUS_INITIALIZING
Line 183:         self._progress = 0
Line 184:         self._abort = False
> I would call this _aborted
Done
Line 185:         self._status_msg = ''
Line 186: 
Line 187:     def start(self):
Line 188:         self._thread.start()


Line 188:         self._thread.start()
Line 189: 
Line 190:     @property
Line 191:     def job_id(self):
Line 192:         return self._job_id
> Calling job.job_id() feels too repetitive. Does job have multiple ids? It s
Done
Line 193: 
Line 194:     @property
Line 195:     def status(self):
Line 196:         return self._status


Line 195:     def status(self):
Line 196:         return self._status
Line 197: 
Line 198:     @property
Line 199:     def status_msg(self):
> description?
Yes, this is the description of the status, e.g.:
status: error
status_msg: cannot log in to libvirt
Line 200:         return self._status_msg
Line 201: 
Line 202:     @property
Line 203:     def progress(self):


Line 224:         cmd.extend(self._generate_disk_parameters())
Line 225:         cmd.extend(['--password-file', self._pass_file, '--vdsm-vm-uuid',
Line 226:                     self._job_id, '--vdsm-ovf-output', _P_V2V_DIR,
Line 227:                     '--machine-readable', '-os', self._get_domain_path(),
Line 228:                     self._vmProperties['vmName']])
> Please use this format:
Done
Line 229: 
Line 230:         logging.info('import vm, (jobId %s) started, cmd: %r', self._job_id,
Line 231:                      cmd)
Line 232: 


Line 226:                     self._job_id, '--vdsm-ovf-output', _P_V2V_DIR,
Line 227:                     '--machine-readable', '-os', self._get_domain_path(),
Line 228:                     self._vmProperties['vmName']])
Line 229: 
Line 230:         logging.info('import vm, (jobId %s) started, cmd: %r', self._job_id,
> - import -> Import
Done
Line 231:                      cmd)
Line 232: 
Line 233:         proc = execCmd(cmd, sync=False, deathSignal=15,
Line 234:                        env={'LIBGUESTFS_BACKEND': 'direct'})


Line 233:         proc = execCmd(cmd, sync=False, deathSignal=15,
Line 234:                        env={'LIBGUESTFS_BACKEND': 'direct'})
Line 235: 
Line 236:         proc.blocking = True
Line 237:         self._handle_process_output(proc)
> You are using _handle for two different things - watching process output an
Done
Line 238: 
Line 239:         if self._abort:
Line 240:             if proc.returncode is None:
Line 241:                 proc.terminate()


Line 239:         if self._abort:
Line 240:             if proc.returncode is None:
Line 241:                 proc.terminate()
Line 242:             self._status = _STATUS_ABORT
Line 243:         elif proc.returncode != 0:
> Lets make this more consistent and set self._status here in all cases.
Done
Line 244:             self._handle_process_errors(proc)
Line 245:         else:
Line 246:             self._status = _STATUS_DONE
Line 247: 
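
A minimal sketch with self._status set explicitly in every branch (inside _run(), after the virt-v2v process has been handled); self._aborted follows the rename discussed earlier, and _collect_process_errors() is a hypothetical name for the stderr reader:

if self._aborted:
    if proc.returncode is None:
        proc.terminate()
    self._status = _STATUS_ABORT
elif proc.returncode != 0:
    self._status = _STATUS_ERROR
    self._collect_process_errors(proc)  # hypothetical name for the stderr reader
else:
    self._status = _STATUS_DONE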


Line 245:         else:
Line 246:             self._status = _STATUS_DONE
Line 247: 
Line 248:     def _handle_process_output(self, proc):
Line 249:         re_copy_disk = re.compile(r'.*(Copying disk \d+/\d+).*')
> Better move this to class constant.
Done
Line 250:         while not self._abort and proc.returncode is None:
Line 251:             line = proc.stdout.readline()
Line 252: 
Line 253:             if 'Copying disk' in line:
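
A minimal sketch of the class constant (COPY_DISK_RE is a hypothetical name); the pattern is compiled once at class definition time and the output handler then calls self.COPY_DISK_RE.match(line):

import re


class ImportVm(object):
    # compiled once instead of on every call to the output handler
    COPY_DISK_RE = re.compile(r'.*(Copying disk \d+/\d+).*')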


Line 258:                     self._progress = 0
Line 259:                     logging.info('convert status: %s', self._status_msg)
Line 260:                 else:
Line 261:                     self._abort_msg("error parsing 'Copying disk' section: %s"
Line 262:                                     % line)
> Raise an error here, and abort the process in _run.
Done
Line 263:                     break
Line 264:                 self._handle_progress(proc, line)
Line 265: 
Line 266:     def _handle_progress(self, proc, line):
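
A minimal sketch of that flow, assuming a hypothetical V2VProcessError exception and a _watch_process_output() rename: the parser raises instead of setting the abort flag, and the failure is turned into an error status in one place before the process is killed.

class V2VProcessError(Exception):
    """Hypothetical exception for output-parsing failures."""


# a method of ImportVm, called from _run():
def _handle_conversion(self, proc):
    try:
        self._watch_process_output(proc)
    except V2VProcessError as e:
        self._status = _STATUS_ERROR
        self._status_msg = str(e)
        if proc.returncode is None:
            proc.terminate()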


Line 260:                 else:
Line 261:                     self._abort_msg("error parsing 'Copying disk' section: %s"
Line 262:                                     % line)
Line 263:                     break
Line 264:                 self._handle_progress(proc, line)
> Lets be more specific and call this _watch_copy_progress()
Done
Line 265: 
Line 266:     def _handle_progress(self, proc, line):
Line 267:         re_progress = re.compile(r'\s+\((\d+).*')
Line 268:         while not self._abort:


Line 262:                                     % line)
Line 263:                     break
Line 264:                 self._handle_progress(proc, line)
Line 265: 
Line 266:     def _handle_progress(self, proc, line):
> We don't need line here - you use it only for reporting errors which should
I will add a unit test in the next version so you will be able to see the output.
And you're right, I don't really need it since it is the previous line.
Line 267:         re_progress = re.compile(r'\s+\((\d+).*')
Line 268:         while not self._abort:
Line 269:             buf = proc.stdout.read(1)
Line 270:             while '\r' not in buf:


Line 265: 
Line 266:     def _handle_progress(self, proc, line):
Line 267:         re_progress = re.compile(r'\s+\((\d+).*')
Line 268:         while not self._abort:
Line 269:             buf = proc.stdout.read(1)
> You can start with empty buffer:
Done
Line 270:             while '\r' not in buf:
Line 271:                 buf += proc.stdout.read(1)
Line 272:             m = re_progress.match(buf)
Line 273:             if m is not None:
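
A minimal sketch of that read loop, starting from an empty buffer so the loop condition alone drives the reads (presumably what the truncated suggestion above shows):

buf = ''
while '\r' not in buf:
    buf += proc.stdout.read(1)
m = re_progress.match(buf)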


Line 273:             if m is not None:
Line 274:                 try:
Line 275:                     self._progress = int(m.groups()[0])
Line 276:                     if self._progress == 100:
Line 277:                         break
> This is a little fragile - is it possible that we don't get this value
No, I have never encountered one that didn't make it to 100.
Line 278:                 except ValueError:
Line 279:                     self._abort_msg('error converting progress: %r' % line)
Line 280:                     raise
Line 281:             else:


Line 279:                     self._abort_msg('error converting progress: %r' % line)
Line 280:                     raise
Line 281:             else:
Line 282:                 self._abort_msg('error parsing progress: %r' % line)
Line 283:                 raise
> Handle the error first and raise instead of setting the abort flag.
Done
Line 284: 
Line 285:     def _handle_process_errors(self, proc):
Line 286:         self._status = _STATUS_ERROR
Line 287:         error = proc.stderr.readlines()


Line 289:             msg = 'error process ended with errors: %s' % error
Line 290:             self._status_msg = msg
Line 291:             logging.error(msg)
Line 292: 
Line 293:     def _create_passwd_file(self):
> Can we pipe this into v2v instead of keeping plaintext passwords on disk du
Yes, there is an option for that.
The problem is that we would need to interact with the process stdin twice, since it asks for a password once for the VMware login and a second time for curl to copy the disks.
So I decided to go with a password file.
Line 294:         if not os.path.exists(_P_V2V_DIR):
Line 295:             try:
Line 296:                 os.mkdir(_P_V2V_DIR, 0700)
Line 297:             except OSError as e:


Line 290:             self._status_msg = msg
Line 291:             logging.error(msg)
Line 292: 
Line 293:     def _create_passwd_file(self):
Line 294:         if not os.path.exists(_P_V2V_DIR):
> No need to check if the directory exists, just create it and handle EEXIST.
Done
Line 295:             try:
Line 296:                 os.mkdir(_P_V2V_DIR, 0700)
Line 297:             except OSError as e:
Line 298:                 if e.errno != os.errno.EEXIST:


Line 292: 
Line 293:     def _create_passwd_file(self):
Line 294:         if not os.path.exists(_P_V2V_DIR):
Line 295:             try:
Line 296:                 os.mkdir(_P_V2V_DIR, 0700)
> Please use octal literal - 0o700
Done
Line 297:             except OSError as e:
Line 298:                 if e.errno != os.errno.EEXIST:
Line 299:                     raise
Line 300: 


Line 295:             try:
Line 296:                 os.mkdir(_P_V2V_DIR, 0700)
Line 297:             except OSError as e:
Line 298:                 if e.errno != os.errno.EEXIST:
Line 299:                     raise
> But if the directory exists, you must check that it has correct permiss
Done
Line 300: 
Line 301:         self._pass_file = os.path.join(_P_V2V_DIR, "%s.tmp" % self._job_id)
Line 302:         f = os.fdopen(os.open(self._pass_file,
Line 303:                               os.O_WRONLY | os.O_CREAT, 0600), 'w')
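
A minimal sketch combining the three comments above - create the directory unconditionally, handle EEXIST, use the 0o700 octal literal, and enforce the mode if the directory already existed (chmod is just one possible way to handle the "correct permissions" point); _create_job_dir() is a hypothetical helper:

import errno
import os


def _create_job_dir():
    try:
        os.mkdir(_P_V2V_DIR, 0o700)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
        # the directory was already there - make sure its mode is what we expect
        os.chmod(_P_V2V_DIR, 0o700)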


Line 305:             f.write(self._password)
Line 306: 
Line 307:     def _delete_passwd_file(self):
Line 308:         if os.path.exists(self._pass_file):
Line 309:             os.remove(self._pass_file)
> This is racy, use:
Done
Line 310: 
Line 311:     def abort_job(self):
Line 312:         self._abort = True
Line 313:         self._status = _STATUS_ABORT
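
A minimal sketch of a non-racy removal (the reviewer's suggested snippet is truncated above, so this is an assumption): attempt the removal and ignore a missing file instead of checking first.

import errno
import os


def _delete_passwd_file(self):
    try:
        os.remove(self._pass_file)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise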


Line 307:     def _delete_passwd_file(self):
Line 308:         if os.path.exists(self._pass_file):
Line 309:             os.remove(self._pass_file)
Line 310: 
Line 311:     def abort_job(self):
> Why not call this abort()?
Done
Line 312:         self._abort = True
Line 313:         self._status = _STATUS_ABORT
Line 314:         logging.info('aborting job id: %r', self._job_id)
Line 315: 


Line 309:             os.remove(self._pass_file)
Line 310: 
Line 311:     def abort_job(self):
Line 312:         self._abort = True
Line 313:         self._status = _STATUS_ABORT
> Lets separate the setting self._status from signaling an abort.
Using terminate() is better; there is no need for an abort status.
Line 314:         logging.info('aborting job id: %r', self._job_id)
Line 315: 
Line 316:     def _abort_msg(self, msg):
Line 317:         self._status = _STATUS_ERROR


Line 316:     def _abort_msg(self, msg):
Line 317:         self._status = _STATUS_ERROR
Line 318:         self._status_msg = msg
Line 319:         logging.error(msg)
Line 320:         self._abort = True
> This looks strange - moving the error status but signaling an abort - I gue
Done
Line 321: 
Line 322:     def _generate_disk_parameters(self):
Line 323:         images = []
Line 324:         for disk in self._vmProperties['disks']:


Line 329:                 images.append('--vdsm-vol-uuid')
Line 330:                 images.append(disk['volumeID'])
Line 331:             return images
Line 332: 
Line 333:     def _get_domain_path(self):
> Why do you need to prepare multiple images for getting the domain path?
I need to prepare all the images - they are all under the same path (I need to extract the domain path), so taking any one of them is fine.
I will rename the method and check for errors in res (see the sketch below).
Line 334:         ''' method prepare the images and return storage domain mounted path
Line 335:             since all images are in the same domain we return arbitrary image
Line 336:             for the return path (res['path']) '''
Line 337:         for disk in self._vmProperties['disks']:
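
A minimal sketch of that approach - prepare every image and keep the path of any one of them, since they all live on the same domain. prepareVolumePath() is assumed here as the counterpart of the teardownVolumePath() call used later in this patch, and the method name is hypothetical; the actual prepare API and its error handling may differ.

def _prepare_volumes_and_get_path(self):
    path = None
    for disk in self._vmProperties['disks']:
        drive = {'device': 'disk',
                 'poolID': self._vmProperties['poolID'],
                 'domainID': self._vmProperties['domainID'],
                 'imageID': disk['imageID'],
                 'volumeID': disk['volumeID']}
        path = self._cif.prepareVolumePath(drive)
    # all volumes are on the same storage domain, so any prepared path
    # can be used to derive the domain mount point
    return path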


Line 344:     def _teardownVolumes(self):
Line 345:         try:
Line 346:             drive = {'device': 'disk',
Line 347:                      'poolID': self._vmProperties['poolID'],
Line 348:                      'domainID': self._vmProperties['domainID']}
> Do you expect that self._vmProperties do not have a poolID or domainID?
No, it does not need to be inside the for loop since it is the same information, so it is read once.
Line 349:             for disk in self._vmProperties['disks']:
Line 350:                 drive['imageID'] = disk['imageID']
Line 351:                 drive['volumeID'] = disk['volumeID']
Line 352:                 self._cif.teardownVolumePath(drive)


Line 350:                 drive['imageID'] = disk['imageID']
Line 351:                 drive['volumeID'] = disk['volumeID']
Line 352:                 self._cif.teardownVolumePath(drive)
Line 353:         except Exception as e:
Line 354:             logging.error('error while trying to teardownVolumePath: %r', e)
> You have multiple images to tear down, but this will leave prepared images 
Will fix in the next version, along with the prepareImage error handling (see the sketch below).
Line 355: 
Line 356: 
Line 357: def _mem_to_mib(size, unit):
Line 358:     lunit = unit.lower()
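
A minimal sketch of per-disk error handling in the teardown, so that a failure on one volume does not skip the remaining ones:

import logging


def _teardownVolumes(self):
    for disk in self._vmProperties['disks']:
        drive = {'device': 'disk',
                 'poolID': self._vmProperties['poolID'],
                 'domainID': self._vmProperties['domainID'],
                 'imageID': disk['imageID'],
                 'volumeID': disk['volumeID']}
        try:
            self._cif.teardownVolumePath(drive)
        except Exception:
            logging.exception('error while tearing down drive: %s', drive)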


-- 
To view, visit https://gerrit.ovirt.org/37509
To unsubscribe, visit https://gerrit.ovirt.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I34bd86d5a87ea8c42113c4a732f87ddd4ceab9ea
Gerrit-PatchSet: 9
Gerrit-Project: vdsm
Gerrit-Branch: master
Gerrit-Owner: Shahar Havivi <shavivi at redhat.com>
Gerrit-Reviewer: Dan Kenigsberg <danken at redhat.com>
Gerrit-Reviewer: Federico Simoncelli <fsimonce at redhat.com>
Gerrit-Reviewer: Francesco Romani <fromani at redhat.com>
Gerrit-Reviewer: Michal Skrivanek <michal.skrivanek at redhat.com>
Gerrit-Reviewer: Nir Soffer <nsoffer at redhat.com>
Gerrit-Reviewer: Piotr Kliczewski <piotr.kliczewski at gmail.com>
Gerrit-Reviewer: Saggi Mizrahi <smizrahi at redhat.com>
Gerrit-Reviewer: Shahar Havivi <shavivi at redhat.com>
Gerrit-Reviewer: Yaniv Bronhaim <ybronhei at redhat.com>
Gerrit-Reviewer: automation at ovirt.org
Gerrit-Reviewer: oVirt Jenkins CI Server
Gerrit-HasComments: Yes

