I haven't spent enough time to digest and understand everything that has been said here, but I can speak a bit about ResultsDB/ExecDB/etc.

The main reason for creating ExecDB was that we had a ton of infra errors showing up as "results" in ResultsDB, and that bothered us. We initially dumped everything into ResultsDB, so when e.g. a VM client couldn't be created, or dnf repos couldn't be reached for installing prerequisite packages, we posted it as an ERROR/CRASHED/etc result in ResultsDB. It was the easiest way to access error logs, etc. After some time, our database was drowning in errors, which brought performance issues, readability issues (when trying to navigate the results in the web UI), and overall we thought these concerns should be separated: execution status vs actual test results. Our tools and our infra reliability slowly improved, and we moved the execution tracking to ExecDB. So the only time our Taskotron tasks create a result in ResultsDB is when the execution proceeds smoothly and the test creates a proper results file (the test can opt not to create it, in which case no result is reported). But every time there's an entry in ExecDB. That brought a bit more sanity into test results management, in our view.

However, please note that our ExecDB is quite bare-bones; for example, its web UI doesn't have search functionality, and you have to know the UUID and how to construct the URL in order to access the necessary details. It's far from easy to use for anyone but core contributors. Our goal was for package maintainers to be able to search it and figure out why there are no results in ResultsDB for their package/test. But that never happened.
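To make the split concrete, here is a minimal sketch of the reporting flow described above: every execution gets an ExecDB entry, while ResultsDB only sees results from a clean run that produced a results file. This is not actual Taskotron code; the URLs, endpoints and field names are assumptions for illustration (the result payload loosely follows the ResultsDB v2 style, but check against a real deployment).

```python
# Illustrative only: URLs, endpoints and field names below are assumptions,
# not the real Taskotron/ExecDB/ResultsDB interfaces.
import requests

EXECDB_URL = "https://execdb.example.org/api"             # hypothetical
RESULTSDB_URL = "https://resultsdb.example.org/api/v2.0"  # ResultsDB v2-style

def record_execution(uuid, state):
    """Every run gets an ExecDB entry, whatever happens."""
    requests.post(f"{EXECDB_URL}/jobs/{uuid}", json={"state": state})

def record_result(testcase, item, outcome, note=""):
    """Only called for a clean run that produced actual results."""
    requests.post(f"{RESULTSDB_URL}/results", json={
        "testcase": testcase,
        "outcome": outcome,                        # PASSED / FAILED / INFO ...
        "data": {"item": item, "type": "koji_build"},
        "note": note,
    })

def run_task(uuid, task, item):
    record_execution(uuid, "RUNNING")
    try:
        results = task(item)       # a task may legitimately return no results
    except Exception:
        record_execution(uuid, "CRASHED")          # infra error stays in ExecDB
        return
    record_execution(uuid, "FINISHED")
    for r in results or []:                        # nothing reported if empty
        record_result(r["testcase"], item, r["outcome"], r.get("note", ""))
```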

But infra and similar errors are not what's being discussed here. IIUIC you're talking about "ignored" or "nothing to report" results (the first one might be known to the scheduler, the second one might be discovered by the test). And we actually submit those to ResultsDB too, even from Taskotron tasks. As an example, see abicheck results:
Anything that has "no binary RPMs" or "no publicly exported ABI" in the Note field is basically a "nothing to report" result. Either it was not a C program, or it didn't have public libraries. In both cases there's no need to run abicheck on it.
Another example is python-versions results:
We run those on all packages, and it automatically gives PASSED for anything that doesn't contain Python files.
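Using the same reporting path sketched above, such an auto-pass result is just an ordinary ResultsDB result whose note carries the "nothing to report" explanation. Roughly (the testcase name and values are made up):

```python
# Made-up example of what a "nothing to report" result carries;
# only the note distinguishes it from a regular pass.
auto_pass = {
    "testcase": "dist.abicheck",
    "outcome": "PASSED",
    "data": {"item": "foo-1.0-1.fc30", "type": "koji_build"},
    "note": "no binary RPMs",   # or "no publicly exported ABI"
}
```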

We could simply not send those results to ResultsDB (and even better, we could avoid executing those tests at all), but we'd have to make sure the package maintainers are provided with this information (that this testcase is "automatically passing" for their package) and that the gating arbiter is also aware. So of course we chose the "horribly inefficient but simple to implement" way and submit everything. There are optimizations that could be made, for example deciding whether to run C or Python tests based on the rpm file list or rpm requires. Such code should ideally be a library that is shared between the test system scheduler, the gating arbiter and optionally any user-oriented UI (like Bodhi). This would also introduce more points of failure, because you'd (likely) depend on additional remote services (like Koji). That's why I'm not surprised that Fedora CI sends "ignored, therefore passed" results for packages which don't have a test suite in distgit. It's the easiest solution.
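For a rough idea of what such a shared library could look like, here is a hypothetical helper based on the rpm file list; the heuristics are deliberately crude and a real implementation would probably query Koji or repo metadata instead of a local rpm file:

```python
# Hypothetical helper that the scheduler, the gating arbiter and a UI could
# all import, so they agree on which packages a testcase applies to.
# The heuristics are intentionally simplistic, for illustration only.
import subprocess

def _file_list(rpm_path):
    """Return the file list of a local rpm package ('rpm -qlp')."""
    out = subprocess.run(
        ["rpm", "-qlp", rpm_path],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def needs_python_checks(rpm_path):
    """python-versions is only relevant if the package ships Python files."""
    return any(f.endswith(".py") or "/python3." in f for f in _file_list(rpm_path))

def needs_abicheck(rpm_path):
    """abicheck is only relevant if the package ships shared libraries."""
    return any(".so" in f for f in _file_list(rpm_path))
```

The hard part is not the heuristic itself but keeping every consumer on the same version of it, and accepting the extra dependency on wherever the rpms (or their metadata) live.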

Of course your proposed case with openqa's 'desktop terminal' testcase is even more problematic, because it's not run every time (as opposed to the previous examples). It's going to be interesting to figure out a way to handle this in the gating process in a robust fashion, and I'm not going to claim I have good answers for that. But I'm a firm believer that "let's wait X minutes and then consider it passed" is something we definitely shouldn't do :)

It is *possible* to solve this, I guess. My first thought about how to do that would be to actually add this feature to openQA. It would be a pretty weird API request - basically "Here is a request that looks like the one we send when we want you to run some tests. Now, we want you to explicitly **NOT** run these tests, then report exactly what it is that you didn't do". :P

Internally it'd just sort of hook into the job creation code, only it wouldn't actually do the step where it makes the created jobs 'real'; it'd just create the sort of 'prospective' jobs, send out internal events, and produce a response to the request, then just throw them away. It probably wouldn't actually be too hard to do, it'd just be a rather...odd thing to have.

So how is this different from, say, `ansible-playbook --list-tasks`? Or anything with --dry-run? I think it's very reasonable to be able to ask the scheduler "what jobs would you schedule with these input arguments?". It could be one way to make greenwave aware of which tests to require for a specific package/compose. It's somewhat inflexible because it requires greenwave to call into openqa and rely on openqa code execution, but it's better than nothing. A better approach would be to have the openqa scheduler as a standalone tool which consumes some configuration and can easily be run locally, but that might be a serious engineering effort.
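Such a dry-run query could be a very small surface; something like the sketch below, where the endpoint and response shape are completely invented (openQA has no such API today), but the scheduling parameters follow the usual DISTRI/VERSION/FLAVOR style:

```python
# Hypothetical "--list-jobs"-style query that greenwave (or a human) could
# use; the /prospective_jobs endpoint and response format are invented here.
import requests

def list_prospective_jobs(openqa_url, params):
    """Ask the scheduler which jobs it *would* create for these inputs,
    without actually creating them."""
    resp = requests.get(f"{openqa_url}/api/v1/prospective_jobs", params=params)
    resp.raise_for_status()
    return [job["test"] for job in resp.json()["jobs"]]

# e.g. which tests would gate this compose?
required = list_prospective_jobs(
    "https://openqa.example.org",
    {"DISTRI": "fedora", "VERSION": "31", "FLAVOR": "Server-dvd-iso"},
)
```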

As a side note, this is exactly why we scrapped AutoQA and started Taskotron. The AutoQA scheduler was fully programmable, Turing-complete, and each task could decide whether to run or not based on the passed arguments. It was a massive pain to tell in advance what was going to run, and any small code error could send the whole thing tumbling down. We separated the scheduler into a standalone project (taskotron-trigger) and made the configuration yaml-based and much less powerful. But it's now much easier to see what runs and when at a glance, and the internal logic can be used as a library in a different project.
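To give a flavor of what "much less powerful" means: the trigger rules are declarative data that any tool can interpret the same way. A stripped-down illustration (this is not the real taskotron-trigger schema, just the general idea):

```python
# Declarative rules: easy to inspect, no arbitrary code deciding at runtime
# what to run. The schema below is simplified for illustration.
RULES = [
    {"when": {"message_type": "KojiBuildCompleted"},
     "run": ["rpmlint", "python-versions", "abicheck"]},
    {"when": {"message_type": "ComposeFinished"},
     "run": ["check_compose"]},
]

def tasks_for(message):
    """Return every task whose 'when' clause matches the incoming message."""
    matched = []
    for rule in RULES:
        if all(message.get(k) == v for k, v in rule["when"].items()):
            matched.extend(rule["run"])
    return matched

# Anyone can answer "what runs when?" by reading RULES, or reuse tasks_for()
# as a library:
print(tasks_for({"message_type": "KojiBuildCompleted", "item": "foo-1.0-1"}))
```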