#791195

Matt Wagner matt.wagner at redhat.com
Thu Feb 16 22:06:18 UTC 2012


This was intended to be a comment on
https://bugzilla.redhat.com/show_bug.cgi?id=791195 , but Bugzilla is
currently down. I started to look at this, but didn't get too far.

Here is the comment I tried to leave:

I started to take a look at this, but I'm mildly puzzled and I wonder if
the potential refactor can go deeper.

It looks like Taskomatic's "destroy_instance" doesn't actually have any
sort of retry logic, nor do I see any sign of it running in the background.

If I remove the retry logic, destroy_on_provider is left like this:

if <really long conditional>
  @task = self.queue_action(self.owner, 'destroy')
  raise I18n.t("instance.errors.cannot_destroy") unless @task
  Taskomatic.destroy_instance(@task)
end

queue_action just creates an Event and a Task object in the database and
returns the task.
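
For reference, the shape I'm describing is roughly the following; the
model and attribute names here are my guesses for illustration, not the
actual Conductor code:

# Rough sketch of the queue_action shape described above; Event /
# InstanceTask attribute names are illustrative guesses.
def queue_action(user, action)
  # Record the event, create the pending task, hand the task back.
  Event.create!(:source => self, :summary => "#{action} action queued")
  task = InstanceTask.create!(:user        => user,
                              :task_target => self,
                              :action      => action)
  task
end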

We then call Taskomatic.destroy_instance, which updates some metadata on
@task and then calls "destroy!" on the instance via Deltacloud. If that
fails, we update the task and return.
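
In other words, the flow is roughly this (a sketch from reading the
code; the state constants and attributes are approximations):

# Approximate shape of Taskomatic.destroy_instance -- not verbatim.
def self.destroy_instance(task)
  task.time_submitted = Time.now
  begin
    # Single call out to the provider via Deltacloud -- no retries.
    task.instance.provider_instance.destroy!
    task.state = Task::STATE_PENDING
  rescue => e
    # On failure we just mark the task failed and return.
    task.state = Task::STATE_FAILED
    task.message = e.message
  end
  task.save!
end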

If the 500 retries existed solely to guard against the task not existing
(!), then we can indeed drop them. But I rather assumed they were meant
to guard against API errors as well.
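
If guarding against transient API errors is the goal, a small bounded
retry around just the provider call would be a lot saner than 500
attempts; something like this plain-Ruby sketch (the attempt count,
delay, and rescued error class are placeholders):

# Bounded retry with linear backoff -- a sketch, not existing code.
def with_retries(attempts = 3, delay = 2)
  tries = 0
  begin
    yield
  rescue StandardError
    tries += 1
    raise if tries >= attempts
    sleep(delay * tries)  # back off a little more on each failure
    retry
  end
end

# Wrap only the Deltacloud call, not the task bookkeeping:
#   with_retries { @task.instance.provider_instance.destroy! }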

Re: #3, I'm not sure I understand. If we just moved destroy_on_provider
to an after_update callback, we'd delete the instance on the provider
any time its state changed. I think the current before_destroy hook is
correct. (Though we do have an instance_observer we could use, as
sketched below.)
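
To spell out the distinction (class names are illustrative, not the
actual Conductor classes):

class Instance < ActiveRecord::Base
  # Fires once, when the record itself is being destroyed -- the
  # right moment to tear down the instance on the provider.
  before_destroy :destroy_on_provider
end

# after_update, by contrast, fires on every saved change (including
# plain state transitions), so destroying the provider instance
# there would be wrong.

class InstanceObserver < ActiveRecord::Observer
  # The observer alternative: same callback, kept out of the model.
  def before_destroy(instance)
    instance.destroy_on_provider
  end
end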
