On 05/19/2010 04:39 AM, James Laska wrote:
On Tue, 2010-05-18 at 15:52 +0800, Li Ming wrote:
On 05/18/2010 04:42 AM, James Laska wrote:
Taking into account the information on the anaconda-devel-list thread, I've played around with the test matrix a bit. I've moved it back into your original .ods format, since it's too painful to manage on the wiki at this point.
Changes with this version include:
* Explicitly listing each vmlinuz+initrd.img method, stage2= method and repo= method
* Removed other tests from the matrix that don't have an immediate impact on the above three decisions (we can add them back in)
* Not listing things by test driver just yet (I'll add that detail later)
Are the supported test scenarios correctly listed so far?
You listed all the test scenarios. If the test drivers support all of them, we can automate most of our manual install test cases. The list is pretty good this time.
Although the matrix does not list all tests, the remaining tests can be handled by kickstart files.
I believe the greatest strength, and the largest complexity, in this test suite will be preparing and validating the test environment. I've left out the remaining tests that I didn't think play a *strong* role in determining the test environment. However, I may have missed some important test scenarios. Please feel free to call them out.
Some considerations not addressed in the matrix:

1. Networking - both as a command-line argument and a kickstart keyword
   * We'll eventually want to test the command-line arguments and kickstart values for expected results at some point (e.g. request a static IP, confirm you got a static IP). I'm comfortable that the proposed model could be used to test these scenarios (see the sketch after this list).
   * Networking is also implicit in some of the other values. For example, anything specifying a remote stage2/updates.img/kickstart/packages will need an active network. In this case, we may want to test:
     A. single NIC, with DHCP
     B. single NIC, no DHCP
     C. multiple NICs
2. Determining stage2 and repo selection from the kickstart
   * I only specified the command-line arguments stage2= and repo= in the matrix, but these values can also be gathered from the contents of the kickstart file. So we may also want tests that provide no boot arguments, only kickstart values. This seems like a detail we can handle later.
3. Other kickstart commands - there are a *lot* of kickstart commands that have no impact on the install experience, and only modify the installed system. It may be worthwhile to detail the commands that don't affect the install. They'll need to be validated, but they're not a priority.
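For illustration, here's a minimal sketch of the static-IP scenario in Python. The kickstart 'network' line is standard kickstart syntax, but the guest object and its boot()/get_ip() helpers are assumed names, not the real driver API:

    # Sketch only: guest.boot() and guest.get_ip() are hypothetical
    # helper names standing in for whatever the driver provides.

    STATIC_IP = "192.168.122.50"

    # Standard kickstart 'network' line requesting a static address.
    KS_NETWORK = ("network --bootproto=static --ip=%s "
                  "--netmask=255.255.255.0 --gateway=192.168.122.1 "
                  "--nameserver=192.168.122.1" % STATIC_IP)

    def test_static_ip(guest):
        """Request a static IP via kickstart, confirm the guest got it."""
        guest.boot(ks_extra=KS_NETWORK)    # assumed driver hook
        actual = guest.get_ip()            # assumed query helper
        assert actual == STATIC_IP, \
            "expected %s, got %s" % (STATIC_IP, actual)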
What are your thoughts on the next step? Do we define priorities for each test scenario, then pick the high-priority ones and write test drivers to support them first?
Good question. What I hoped to identify with our matrices is that just about every method (stage2=, repo=, ks=, updates=) is supported in every boot method (aka test driver), with LiveCD being the only notable exception.
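To make that concrete, a harness could enumerate the matrix as a cross product of boot methods and stage2= methods, skipping LiveCD. A minimal sketch, where the method names themselves are only illustrative:

    import itertools

    BOOT_METHODS = ["pxe", "boot.iso", "dvd.iso", "livecd"]
    STAGE2_METHODS = ["http", "ftp", "nfs", "hd", "cd"]

    def planned_tests():
        """Yield every (boot, stage2) combination the matrix calls for."""
        for boot, stage2 in itertools.product(BOOT_METHODS, STAGE2_METHODS):
            # LiveCD is the one noted exception: it carries its own stage2.
            if boot == "livecd":
                continue
            yield (boot, stage2)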
= Conclusions =
The matrix makes it pretty clear to me that the test suite needs to support just about every test listed so far. This tells me that, for our sanity, and to support externally contributed tests, we'll need ...
1. A comprehensive library of shared/common code (expanding the existing virtguest.py and more)
2. Small, easy-to-instantiate test code (ideally, 200 lines of code or less)
I list #1 because, as noted, if each boot method needs to support all stage2= methods, I think we'd all prefer writing the code to support that just once.
I list #2, because if each test is 800 lines of code, chances are slim that we'll be able to maintain the code and expect external test contributions. This is also in line with the original project definition [1].
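Something like the following is what I have in mind for a "small" test, assuming the shared library does the heavy lifting. virtguest is the existing module named above, but the class and method names here are assumptions, not its actual API:

    import virtguest

    def run():
        # All setup/teardown complexity lives in the shared library;
        # the test itself only states what it wants to exercise.
        guest = virtguest.VirtGuest(name="f13-http-stage2")   # assumed API
        guest.boot(method="pxe",
                   args=["stage2=http://server/images/install.img"])
        guest.wait_for_install()                              # assumed API
        return guest.verify()                                 # assumed API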
Any other conclusions to draw from the current matrix (see attached for updated version)?
= Next steps =
Do we define priorities for each test scenario, then pick the high-priority ones and write test drivers to support them first?
That sounds good. Given the current matrix, the first priority would be implementing test drivers that satisfy the listed defaults. Those can later be extended to support the next-priority tests. What do you think?
Also, if you have a strong desire to get started with writing code, the easier tests would be the iso_sanity tests. Those could probably be automated the fastest.
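As a rough sketch of what an iso_sanity-style check might look like, assuming the test is pointed at a local ISO and its published SHA-256 checksum (the paths and helper names are illustrative):

    import hashlib
    import subprocess

    def sha256sum(path, bufsize=1024 * 1024):
        """Compute the SHA-256 digest of a file in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(bufsize), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def test_iso_sanity(iso_path, expected_sha256):
        # 1. Checksum must match the published value.
        assert sha256sum(iso_path) == expected_sha256
        # 2. The image must at least look like an ISO 9660 filesystem.
        out = subprocess.check_output(["file", "--brief", iso_path])
        assert b"ISO 9660" in out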
James, you may have seen that the attached matrix was added to: https://fedoraproject.org/wiki/Is_anaconda_broken_roadmap
Currently, we are on step 4. We can adjust step #2 and step #4 at this point if we find something isn't appropriate, and then move on to step #5. This document now makes it pretty clear what we need and what we will do next.
Thanks,
Liam