quickly developing / testing tdl's

Mike Orazi morazi at redhat.com
Wed Jan 16 15:56:04 UTC 2013


On 01/14/2013 09:15 PM, Mo Morsi wrote:
> On 01/10/2013 08:24 AM, Mo Morsi wrote:
>> On 12/07/2012 08:17 PM, Mo Morsi wrote:
>>> Aaron gave me a good idea yesterday. The script [1] which I whipped up
>>> to demo deltacloud on fedora-devel [2] could be used to very quickly
>>> test out templates on the cloud, without having to wait for the full
>>> image build process, which takes a lot of time and bandwidth.
>>>
>>> The tool is pretty basic: it just sets up a cloud environment via
>>> deltacloud with some extra info added to the tdl (the added fields do
>>> not overlap w/ the existing tdl ones, so the templates can still be used
>>> with oz / imagefactory as is), and runs an ssh loop to process the tdl.
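>>>
>>> Roughly, the flow is: parse the package list out of the tdl, start an
>>> instance through deltacloud, then install the packages over ssh. A
>>> minimal sketch of that loop (not the actual script; hostnames,
>>> credentials, and image ids below are placeholders, and it assumes the
>>> deltacloud-client, nokogiri, and net-ssh gems):
>>>
>>>   require 'deltacloud'
>>>   require 'nokogiri'
>>>   require 'net/ssh'
>>>
>>>   # pull the package list out of the tdl
>>>   tdl  = Nokogiri::XML(File.read('fedora.tdl'))
>>>   pkgs = tdl.xpath('//packages/package').map { |p| p['name'] }
>>>
>>>   # start an instance via the deltacloud api (placeholder credentials/ids)
>>>   api      = DeltaCloud.new('user', 'password', 'http://localhost:3001/api')
>>>   instance = api.create_instance('img1')
>>>   sleep(5) while (instance = api.instance(instance.id)).state != 'RUNNING'
>>>
>>>   # install the tdl packages on the running instance over ssh
>>>   # (public_addresses format varies a bit between client versions)
>>>   host = instance.public_addresses.first.to_s
>>>   Net::SSH.start(host, 'root', :password => 'secret') do |ssh|
>>>     ssh.exec!("yum install -y #{pkgs.join(' ')}")
>>>   end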
>>>
>>> The nice thing is that I can very easily simplify it even further by
>>> removing that ssh loop and simply using oz to process the tdl on the
>>> running cloud instance. Essentially this would mean mycloud (the name
>>> can be changed) is the simplest unification of deltacloud and oz.
>>>
>>> To prevent confusion, I propose we name these templates w/ the extended
>>> cloud data, "extended tdls", or etdls for short.
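>>>
>>> Just to illustrate (the extra element and field names here are
>>> hypothetical, not a fixed schema), an etdl might look like a normal tdl
>>> plus an extra cloud section:
>>>
>>>   <template>
>>>     <name>fedora-web</name>
>>>     <os>
>>>       <!-- usual tdl os/install section -->
>>>     </os>
>>>     <packages>
>>>       <package name='httpd'/>
>>>     </packages>
>>>     <!-- added cloud fields; names are illustrative only -->
>>>     <cloud>
>>>       <provider>ec2</provider>
>>>       <hardware_profile>m1.small</hardware_profile>
>>>     </cloud>
>>>   </template>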
>>>
>>> This would give deltacloud another use case and would make building and
>>> testing templates for various purposes very simple and easy. Thoughts?
>>>
>>>    -Mo
>>>
>>> [1] https://github.com/movitto/mycloud/blob/master/mycloud.rb
>>> [2] http://lists.fedoraproject.org/pipermail/devel/2012-November/173298.html
>>>
>> I've been talking w/ clalance over the last couple of days and it seems
>> like changing this to use Oz on the cloud instance isn't feasible due to
>> Oz's dependency on libvirt.
>>
>> After discussing this w/ him, it seems the best way to move forward is
>> to proceed as originally planned and simply process as much of the TDL
>> as we can manually, directly on an instance started w/ deltacloud.
>>
>> This will allow us to quickly build and test these TDLs with the goal of
>> growing the template repo as well as the community. After the user has a
>> rough approximation of the template, they can use the traditional
>> imagefactory / oz tooling to perform the final build of the images. The
>> nice thing with this is that the same templates can be used for image
>> creation as well as service/systems orchestration, a feature which I
>> have not seen in any other cloud framework.
>>
>> Going forward, I'm going to refactor the existing utility a bit to make
>> it more modular and clean (introducing a little bit of abstraction to
>> handle rpm-based and deb-based systems) and then package it up into a
>> gem/rpm (and submit them to their respective locations). I would also
>> like to write some preliminary tests and docs, though that might not
>> happen for a couple of sprints.
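>>
>> The rpm/deb abstraction would be something small along these lines
>> (class and method names here are just illustrative, nothing is final):
>>
>>   # picks the right install command for the instance's package manager
>>   class PackageManager
>>     def self.for(os_name)
>>       os_name =~ /debian|ubuntu/i ? Deb.new : Rpm.new
>>     end
>>   end
>>
>>   class Rpm < PackageManager
>>     def install_cmd(pkgs)
>>       "yum install -y #{pkgs.join(' ')}"
>>     end
>>   end
>>
>>   class Deb < PackageManager
>>     def install_cmd(pkgs)
>>       "apt-get install -y #{pkgs.join(' ')}"
>>     end
>>   end
>>
>> so the ssh loop would just ask PackageManager.for(...) for the install
>> command instead of hardcoding yum.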
>>
>>    -Mo
>
> As per Mike's request, I wrote up a story encapsulating the mycloud
> proposal. It can be found on the project wiki here:
>
> https://github.com/movitto/mycloud/wiki/MyCloud:-Overview
>
>    -Mo
>

Mo,

Thanks for writing the overview.  I'm thinking it might be a reasonable 
thing to move that to README.md so github renders it when folks look at 
the repo.

I apologize.  I meant to respond to this yesterday when I first read 
through it, but I was unable to devote an adequate amount of time to 
provide decent feedback.  Unfortunately, I can't seem to find the 
document after the repo rename.

From memory, I will say I think it provided a good overview of the big
problem you are trying to solve: provide a sane workflow to develop/test
tdls while avoiding the overhead associated with spinning a new image for
each change to the tdl.

Having said that, when I suggested writing up stories around mycloud 
(now tdl-tools), I was hoping you would flesh out the types of things 
that you touched on as next steps (and in the above email thread), so 
that they would ultimately be tracked in github issues and brought into 
a given sprint.

I think you have the kernel of a number of stories in the above email & 
iirc they were also in the Overview.  Refactoring to allow the code to 
handle rpm & deb based systems, various packaging efforts, and things of 
that nature are all good candidates that could be polished up a bit into 
stories that make it fairly easy for everyone to roughly understand what 
is being proposed and the level of work involved.  A number of folks on 
the team have said that, while this doc [1] talks about stories in the XP 
sense, they have found it helpful when creating issues to work on in 
upcoming sprints.  I have a slew of other resources around User Stories 
and I'd be happy to carve out some time if you would like to work 
together to come up with a set of stories you and others could execute 
on in upcoming sprints.

Thanks,
Mike

[1] http://xp123.com/articles/invest-in-good-stories-and-smart-tasks/
