Hi,
So far, we have solved almost all of the technical problems for the automated install test, and a workable test driver has been pushed to git. What we are doing now is listing all the possible tests that can cover each deliverable. As discussed with James, we should create test drivers that map to the test deliverables we get from release engineering:
* URL install source available: url_install.py
* DVD image(s) available: iso_sanity.py, dvd_install.py
* CD image(s) available: iso_sanity.py, cd_install.py
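As a rough illustration of this mapping, here is a minimal sketch; the dictionary, its key names, and the drivers_for() helper are hypothetical, not actual AutoQA code:

# Hypothetical sketch of the deliverable -> test driver mapping above.
DELIVERABLE_DRIVERS = {
    "url_install_source": ["url_install.py"],
    "dvd_images": ["iso_sanity.py", "dvd_install.py"],
    "cd_images": ["iso_sanity.py", "cd_install.py"],
}

def drivers_for(deliverable):
    """Return the test drivers to launch for a release engineering deliverable."""
    return DELIVERABLE_DRIVERS.get(deliverable, [])

print(drivers_for("dvd_images"))  # ['iso_sanity.py', 'dvd_install.py']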
For each test driver, we are trying to list all the possible tests. I listed some on the wiki; you can see them here:
https://fedoraproject.org/wiki/Is_anaconda_broken_proposal#Roadmap
On "step 3", I listed some possible tests,but I think I must have missed some, do you have some to supplement? For Possible dvd_install.py tests, I try to list all install tests, but for url_install.py and cd_install.py tests ,I only listed some default install,because cd_install.py tests are mainly test boot and swapping disc, url_install.py tests are mainly test remote boot and install. I am not sure whether list all possible tests for url_install.py and cd_install.py, what's your suggestion?
Thanks Liam
Wow, great job, Liam, very impressive. :) I have two questions about it: the boot.iso test is included in the DVD image category, right? And will upgrade and recovery cases be available in the future?
Thanks, Hurry
Thanks Hurry, you pointed out what we missed. For upgrade, as far as I know kickstart supports it; we only need to replace "install" with "upgrade" in the kickstart file. For recovery I am not sure; what I can tell is that if kickstart supports it, it would not be difficult to include these test cases in the auto install test. As James said, we should first list all these possible tests, then assign them priorities to determine which cases to make available first.
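To make this concrete, here is a minimal sketch of deriving an upgrade kickstart from an install kickstart, assuming plain-text kickstart files; the file names are placeholders:

import re

def make_upgrade_ks(install_ks_path, upgrade_ks_path):
    """Copy a kickstart file, replacing the 'install' command with 'upgrade'."""
    with open(install_ks_path) as src:
        text = src.read()
    # Replace only the bare 'install' command line, nothing else.
    text = re.sub(r"(?m)^install\s*$", "upgrade", text)
    with open(upgrade_ks_path, "w") as dst:
        dst.write(text)

make_upgrade_ks("default-install.ks", "default-upgrade.ks")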
Thanks Liam
On Thu, 2010-05-06 at 13:39 +0800, Li Ming wrote:
Hi,
So far, we have solved almost all of the technical problems for the automated install test, and a workable test driver has been pushed to git. What we are doing now is listing all the possible tests that can cover each deliverable. As discussed with James, we should create test drivers that map to the test deliverables we get from release engineering:
Nicely done, Liam! I like clearly defining the methods by which we initiate testing. There are too many possibilities out there; it helps to pinpoint how it really happens.
- URL install source available: url_install.py
Since a URL install source also includes an "images/boot.iso" file, we will probably need to include:
* bootiso_install.py
* iso_sanity.py (sanity-check the boot.iso)
- DVD image(s) available: iso_sanity.py, dvd_install.py
How about some other ISO-based tests, like:
* hdiso_install.py (using the DVD iso)
* nfsiso_install.py (again, using the DVD iso)
- CD image(s) available: iso_sanity.py, cd_install.py
Same as above, but with CDs instead of DVDs:
* hdiso_install.py (using the CD iso)
* nfsiso_install.py (again, using the CD iso)
For each test driver, we are trying to list all the possible tests. I listed some on the wiki; you can see them here:
https://fedoraproject.org/wiki/Is_anaconda_broken_proposal#Roadmap
On "step 3", I listed some possible tests,but I think I must have missed some, do you have some to supplement? For Possible dvd_install.py tests, I try to list all install tests, but for url_install.py and cd_install.py tests ,I only listed some default install,because cd_install.py tests are mainly test boot and swapping disc, url_install.py tests are mainly test remote boot and install. I am not sure whether list all possible tests for url_install.py and cd_install.py, what's your suggestion?
Can we list as many as possible? It'll feel a bit tedious, but hopefully this exercise helps identify the patterns: things that every test driver will need to do, and things that only one test driver (or a subset) will need to do.
It might be helpful to experiment with listing this in a wiki table format, or whatever method will help identify commonalities among the different test drivers. This won't answer the question "how should we prioritize these tests?", though.
                          Test driver
Test           | dvd_install.py | cd_install.py | hdiso_install.py | ...
===============+================+===============+==================+======
DVD boot       | YES            | NO            | NO               |
CD boot        | NO             | YES           | NO               |
boot.iso boot  | NO             | NO            | NO               |
PXEimage boot  | NO             | NO            | YES              |
stage2=media   | YES            | YES           | NO               |
stage2=http    | NO             | NO            | NO               |
stage2=hd      | NO             | NO            | NO               |
stage2= ...    |                |               |                  |
repo=cdrom     | YES            | YES           | NO               |
repo=http      | NO             | NO            | NO               |
repo=hd        | NO             | NO            | YES              |
repo=nfs       | NO             | NO            | NO               |
repo=nfsiso    | NO             | NO            | NO               |
ks=http        | YES            | YES           | YES              |
ks=file        | YES [1]        | YES [1]       | YES              |
ks=hd          | YES            | YES           | YES              |
ks=floppy      | YES            | YES           | YES              |
ks=cdrom       | YES            | YES           | NO               |
install?       | YES            | YES           | YES              |
upgrade?       | YES            | YES           | YES [1]          |
. . .          |                |               |                  |
[1] These are technically possible, but are likely *infrequent* use cases. When we start prioritizing the scenarios we want to support, these are the type of tests I'd consider low priority.
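One way the matrix rows could eventually feed the drivers is as shared data; a sketch under that assumption (the structure and names are mine, not existing AutoQA code):

# Each matrix row maps to the set of drivers whose cell reads YES.
SUPPORTED = {
    "DVD boot":      {"dvd_install.py"},
    "CD boot":       {"cd_install.py"},
    "PXEimage boot": {"hdiso_install.py"},
    "stage2=media":  {"dvd_install.py", "cd_install.py"},
    "repo=cdrom":    {"dvd_install.py", "cd_install.py"},
    "repo=hd":       {"hdiso_install.py"},
    "ks=http":       {"dvd_install.py", "cd_install.py", "hdiso_install.py"},
}

def tests_for(driver):
    """List the matrix rows a given test driver is expected to cover."""
    return sorted(test for test, drivers in SUPPORTED.items() if driver in drivers)

print(tests_for("cd_install.py"))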
Thanks, James
A very comprehensive supplement; I will add them. Additionally, can we list these cases according to the install test template? https://fedoraproject.org/wiki/QA:Fedora_13_Install_Results_Template
Very good matrix. I will write a matrix in a txt file like the one above and send it to the list for review.
Thanks Liam
I created a matrix that maps to all the install tests in the install template. Are these cases enough? The matrix is only a draft, and I am not sure which test driver should support which test case; we can discuss this. If anyone sees mistakes in the matrix, please correct me. For more details, please see the attachment (open it with OpenOffice).
Thanks Liam
Wow, nice matrix, Liam. I'm not a huge fan of working in OpenOffice documents, so I've migrated the content to the wiki at https://fedoraproject.org/wiki/User:Jlaska/Draft. You don't need to switch to the wiki; I'm just more comfortable there.
Caution: just as with the OpenOffice.org document, it's a BIG table. Some thoughts ...
1. We may want to break up the table to make it a bit more readable ... perhaps something to consider after it's considered final.
2. Since this is supposed to list all possible tests, there are a lot of cells that could be green. I'll try to make a few updates as F13 permits.
3. There are some tests we could be doing that aren't captured in the current test plan ... I'll try to highlight them in the link above.
4. Not all the wiki testcase links work yet ... again, I'll try to clean those up in the link above.
Thanks, James
On 05/11/2010 08:47 AM, James Laska wrote:
1. We may want to break up the table to make it a bit more readable ... perhaps something to consider after it's considered final
Yes, the purpose of this matrix is to list all the tests, to try to avoid missing any. As you said, it's a big table; in the end we should produce a smaller, more readable document for the Auto Install wiki.
2. Since this is supposed to list all possible tests, there are a lot of cells that could be green. I'll try to make a few updates as F13 permits
Actually, I am not sure whether some test drivers should support certain tests; let's make more cells green. I will do some updates later.
3. There are some tests we could be doing that aren't captured in the current test plan ... I'll try to highlight them in the link above
4. Not all the wiki testcase links work yet ... again, I'll try to clean those up in the link above
I made some modifications to make the links work. Thanks for this wiki table; it's impressive and much more readable: we can access the cases via the links, see the green and red cells, etc.
Thanks Liam
I haven't had a chance to revisit the matrix yet, but I wanted to point out a thread started on the anaconda-devel list that highlights the test permutations I'm hoping to capture in your matrix.
https://www.redhat.com/archives/anaconda-devel-list/2010-May/msg00324.html
Thanks, James
Taking into account the information on the anaconda-devel-list thread, I've played around with the test matrix a bit. I've moved it back into your original .ods format, since it's too painful to manage on the wiki at this point.
Changes in this version include:
* Explicitly listing each vmlinuz+initrd.img method, stage2= method, and repo= method
* Removing other tests from the matrix that don't have an immediate impact on the above three decisions (we can add them back in)
* Not listing things by test driver just yet (I'll add that detail later)
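Once these three axes are explicit, the permutations can be generated mechanically; a sketch (the value lists below are illustrative, not the exact matrix contents):

import itertools

BOOT_METHODS   = ["dvd", "cd", "boot.iso", "pxe"]          # vmlinuz+initrd.img sources
STAGE2_METHODS = ["media", "http", "nfs", "hd"]            # stage2= values
REPO_METHODS   = ["cdrom", "http", "nfs", "nfsiso", "hd"]  # repo= values

# Every combination is a candidate scenario; the matrix marks which ones
# each driver actually supports.
for boot, stage2, repo in itertools.product(BOOT_METHODS, STAGE2_METHODS, REPO_METHODS):
    print("boot=%s stage2=%s repo=%s" % (boot, stage2, repo))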
Are the supported test scenarios correctly listed so far?
Thanks, James
You listed all the test scenarios; if the test drivers support all of them, we can automate most of our manual install test cases. The list is pretty good this time. Although it does not list every test, the remaining tests can be handled by the kickstart file. What are your thoughts on the next step? Do we define priorities for each test scenario and then pick the high-priority ones, writing test drivers to support those first?
Thanks Liam
On Tue, 2010-05-18 at 15:52 +0800, Li Ming wrote:
You listed all the test scenarios; if the test drivers support all of them, we can automate most of our manual install test cases. The list is pretty good this time.
Although it does not list every test, the remaining tests can be handled by the kickstart file.
I believe the main strength of, and the largest complexity in, this test suite will be preparing and validating the test environment. I've left out the remaining tests that I didn't think play a *strong* role in determining the test environment. However, I may have missed some important test scenarios; please feel free to call them out.
Some considerations not addressed in the matrix.
1. Networking - both as a command-line argument and a kickstart keyword
   * We'll eventually want to test the command-line arguments and kickstart values for expected results (e.g. request a static IP, confirm you got a static IP). I'm comfortable that the proposed model could be used to test these scenarios.
   * Networking is also implicit in some of the other values. For example, anything specifying a remote stage2, updates.img, kickstart, or package source will need an active network. In this case, we may want to test (a rough sketch follows this list):
     A. single NIC, with DHCP
     B. single NIC, no DHCP
     C. multiple NICs
2. Determining stage2 and repo selection from kickstart
   * I only specified the command-line arguments stage2= and repo= in the matrix, but these values can also be gathered from the contents of the kickstart file. So we may also want tests that provide no boot arguments, only kickstart values. This seems like a detail we can handle later.
3. Other kickstart commands - there are a *lot* of kickstart commands that have no impact on the install experience and only modify the installed system. It may be worthwhile to detail the commands that don't affect the install; they'll need to be validated, but they are not a priority.
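A rough sketch of how the networking scenarios in item 1 might be named and passed to a libvirt guest; --network is a real virt-install option, but the scenario names and the 'isolated' network are assumptions:

# Maps a scenario name to the virt-install network arguments for the guest.
NETWORK_SCENARIOS = {
    "single-nic-dhcp":    ["--network", "network=default"],
    "single-nic-no-dhcp": ["--network", "network=isolated"],  # assumes a libvirt net with DHCP disabled
    "multi-nic":          ["--network", "network=default",
                           "--network", "network=default"],
}

def network_args(scenario):
    """Return the virt-install arguments for the named networking scenario."""
    return NETWORK_SCENARIOS[scenario]

print(network_args("multi-nic"))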
What are your thoughts on the next step? Do we define priorities for each test scenario and then pick the high-priority ones, writing test drivers to support those first?
Good question. What I hoped to identify with our matrices is that just about every method (stage2=, repo=, ks=, updates=) is supported by every boot method (aka test driver), with the LiveCD being the only notable exception.
= Conclusions =
The matrix makes it pretty clear to me that the test suite needs to support just about every test listed so far. This tells me that, for our sanity, and to support externally contributed tests, we'll need ...
1. A comprehensive library of shared/common code (expanding the existing virtguest.py and more)
2. Small, easy-to-instantiate test code (ideally, 200 lines of code or less)
I list #1 because, as noted, if each boot method needs to support all the stage2= methods, I think we'd all prefer writing the code to support that just once.
I list #2 because, if each test is 800 lines of code, the chances are slim that we'll be able to maintain the code and expect external test contributions. This is also in line with the original project definition [1].
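To illustrate what #2 might look like, here is a hypothetical driver skeleton; virtguest.py exists in git, but the VirtGuest API used below is an assumption about what the shared library could provide, not its real interface:

from virtguest import VirtGuest  # hypothetical class/interface

def run(dvd_iso, kickstart_url):
    """Boot a guest from the DVD iso, kickstart it, and report pass/fail."""
    guest = VirtGuest(name="dvd_install")
    guest.attach_cdrom(dvd_iso)            # assumed helper
    guest.boot(extra_args="ks=%s" % kickstart_url)
    return guest.wait_for_install(timeout=3600)

if __name__ == "__main__":
    ok = run("Fedora-13-x86_64-DVD.iso",
             "http://example.com/ks/default-install.ks")  # placeholder URL
    raise SystemExit(0 if ok else 1)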
Any other conclusions to draw from the current matrix (see attached for updated version)?
= Next steps =
Do we define priorities for each test scenario and then pick the high-priority ones, writing test drivers to support those first?
That sounds good. Given the current matrix, the first priority would be implementing test drivers that satisfy the listed defaults. Those can later be extended to support the next-priority tests. What do you think?
Also, if you have a strong desire to get started with writing code, the easier tests would be the iso_sanity tests. Those could probably be automated the fastest.
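For example, one plausible starting point for an iso_sanity test (my sketch, not existing code) is verifying an image against its published SHA256 checksum:

import hashlib

def sha256sum(path, blocksize=1 << 20):
    """Compute the SHA256 digest of a file without reading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(blocksize), b""):
            digest.update(block)
    return digest.hexdigest()

def iso_is_sane(iso_path, expected_sha256):
    """True if the image on disk matches the checksum published with the release."""
    return sha256sum(iso_path) == expected_sha256

print(iso_is_sane("Fedora-13-i386-DVD.iso", "0123...abcd"))  # placeholder digest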
Thanks, James
[1] https://fedoraproject.org/wiki/Is_anaconda_broken_proposal#Overview
James, you must have seen that the attached matrix is now at: https://fedoraproject.org/wiki/Is_anaconda_broken_roadmap
Currently, we are on step 4. We can still adjust step 2 and step 4 at this point if we find something inappropriate; then we will move on to step 5. This document is now pretty clear about what we need and what we will do next.
Thanks Liam