#791195

Tomáš Hrčka thrcka at redhat.com
Fri Feb 17 10:11:58 UTC 2012


On 02/17/2012 10:45 AM, Tomáš Hrčka wrote:
> On 02/17/2012 10:23 AM, Jan Provazník wrote:
>> On 02/16/2012 11:06 PM, Matt Wagner wrote:
>>> This was intended to be a comment on
>>> https://bugzilla.redhat.com/show_bug.cgi?id=791195 , but Bugzilla is
>>> currently down. I started to look at this, but didn't get too far.
>>>
>>> Here is the comment I tried to leave:
>>>
>>> I started to take a look at this, but I'm mildly puzzled and I 
>>> wonder if
>>> the potential refactor can go deeper.
>>>
>>> It looks like Taskomatic's "destroy_instance" doesn't actually have any
>>> sort of retry logic, nor do I even see it happening in the background.
>>>
>>> If I remove the retry logic, destroy_on_provider is left like this:
>>>
>>> if <really long conditional>
>>>   @task = self.queue_action(self.owner, 'destroy')
>>>   raise I18n.t("instance.errors.cannot_destroy") unless @task
>>>   Taskomatic.destroy_instance(@task)
>>> end
>>>
>>> queue_action just creates an Event and a Task object in the database 
>>> and
>>> returns the task.
>>>
>>> We then call Taskomatic.destroy_instance, which updates some 
>>> metadata on
>>> @task and then calls "destroy!" on the instance via Deltacloud. If that
>>> fails, we update the task and return.
>>>
>>> If the 500 retries existed solely to guard against the task not 
>>> existing
>>> (!), then we can indeed drop it. But I rather assumed it was meant to
>>> guard against API errors as well.
>>>
>>
>> Taskomatic's destroy_instance method is wholly wrapped in a rescue
>> block (except for the task.save! call in the ensure block), so the
>> 500-retries block doesn't guard against API errors anyway (if that
>> was the goal, it should also check task.state).
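>>
>> To make that concrete, the shape I mean is roughly this (a from-memory
>> sketch, not the exact code; task state names and helpers may differ):
>>
>>   def self.destroy_instance(task)
>>     task.time_started = Time.now
>>     instance = task.instance
>>     begin
>>       # the Deltacloud call that actually removes the VM on the provider
>>       instance.provider_instance.destroy!
>>       task.state = Task::STATE_PENDING
>>     rescue Exception => ex
>>       # any API error lands here and is only recorded on the task ...
>>       task.state = Task::STATE_FAILED
>>       task.message = ex.message
>>     ensure
>>       # ... and the task is saved either way, so nothing propagates up
>>       task.save!
>>     end
>>   end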
>>
>>> Re: #3, I'm not sure I understand. If we just moved it to after_update,
>>> we'd delete the instance on the provider any time its state changed. I
>>> think the current before_destroy hook is correct. (Though we do have an
>>> instance_observer we could use.)
>>>
>>>
>>
>> Sorry, I described it wrong. If the retries block is there to busy-wait
>> until the API destroy call succeeds, it probably means we call the
>> destroy method too early, while the instance is not yet ready to be
>> destroyed. This could be because the check of the instance state change
>> in instance_observer is wrong and we trigger the destroy action too
>> early, or because there is a weird bug on the dc-core/rhevm side and it
>> doesn't allow destroying an instance even when it should. I hope it's
>> the first option and we just have a wrong check of the instance state
>> in instance_observer.
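>>
>> To illustrate the first option (purely hypothetical names, just to show
>> where a wrong state check would fire the destroy too early):
>>
>>   class InstanceObserver < ActiveRecord::Observer
>>     def after_update(instance)
>>       # hypothetical check: if this condition matches an intermediate
>>       # state ("stopping"/"pending") instead of the final "stopped"
>>       # state, the before_destroy hook and destroy_on_provider run
>>       # before RHEVM is actually willing to delete the VM
>>       if instance.state_changed? && instance.state == Instance::STATE_STOPPED
>>         instance.destroy
>>       end
>>     end
>>   end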
>>
>> Tomas: do you remember what the reason for the retry block was, and
>> how deletion of instances works on RHEVM?
> Actually it was meant to be just a failsafe in case something went wrong
> with the API or the task itself.
> There were some changes in the rhevm API and in deltacloud; Michal and I
> found a method in the rhevm API to directly destroy an instance on RHEVM
> (it doesn't need to be stopped first).
> I am not sure if it is in the DC 0.5.0 release. I will check and comment
> later today.
Forget about that, it was code for vSphere, not RHEVM.

Deleting an instance in RHEVM actually works like this.

If an instance is stopped through DC, in RHEVM it goes to the state
"powering down", in DC it is "stopping", and in Conductor this is
reflected as the state "pending".
To destroy a RHEVM instance you have to get it to the state "stopped"
in DC by invoking the stop action again. After a while the instance is
in the state "stopped" in both DC and Conductor, so you can call
destroy through DC to delete it from RHEVM.
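
As a rough illustration of that sequence using the Deltacloud client gem
(illustrative only, from memory; credentials, URL and the polling are
placeholders):

  require 'deltacloud'

  api = DeltaCloud.new(user, password, api_url)
  instance = api.instance(instance_id)

  instance.stop!     # RHEVM: "powering down", DC: "stopping", Conductor: "pending"
  # ... poll until DC reports a settled state ...
  instance.stop!     # second stop moves the instance to "stopped" in DC
  # ... poll until DC reports "stopped" ...
  instance.destroy!  # only now is the destroy accepted and the VM removed from RHEVM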
>>
>> Jan
>



