On Thu, Jan 12, 2017 at 7:42 AM, Tim Flink <tflink@redhat.com> wrote:
The idea was to start with static site generation because it doesn't
require an application server, is easy to host and likely easier to
develop, at least initially.

I don't really have a strong preference either way, I just wanted to say that the "initial development" time is the same for a web app and for statically generated pages - both do the same thing: take an input plus an output template and produce output. You can't really get around that from what I'm seeing here. A statically generated page equals cached data in the app, and for starters we can go on using just the stupidest of the caches provided in Flask (it might well be cool and interesting to use some document store later on, but that's premature optimization now).
 
>    After a brief discussion with jskladan, I understand that ResultsDB
> would be able to handle requests from a dynamic page.

Sure but then someone would have to write and maintain it. The things
that drove me towards static site generation are:

Write and maintain what? I'm being sarcastic here, but this sounds like the code for the statically generated pages will not have to be written and maintained... And once again - the actual code that does the actual thing will be the same, regardless of whether the output is a static page or an HTTP response.

> * I'm not sure what exactly is meant by 'item tag' in the examples
> section.
>
> * Would the YAML configuration look something like this:
>
>    url: link.to.resultsdbapi.org
>    overview:
>    - testplan:
>      - name: LAMP
>      - items:
>        - mariadb
>        - httpd
>      - tasks:
>        - and:
>          - rpmlint
>          - depcheck
>          - or:
>            - foo
>            - bar

I was thinking more of the example yaml that's in the git repo at
taskdash/mockups/yamlspec.yml [1] but I'm not really tied to it strongly
- so long as it works and the format is easy enough to understand.


I guess I know where you were going with that example, but it is a bit lacking. For one, all it really allows for is a "hard AND" relationship between the testcases in the testplan (dashboard, call it whatever you like), which might be enough, but given what was said here it will start being insufficient pretty fast. The other thing is that we really want to be able to do the "item selection" in some way. We sure could say "take all results for all these four testcases, and produce a line per item", but that is so broad that it IMO stops making sense anywhere beyond the "global" (read: applicable to all the items in ResultsDB) testplans.
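To make the "item selection" and the and/or relationships concrete, I mean something along these lines (the item-filter and rule keys are made up here just to show the idea, they are not in the current yamlspec):

    # hypothetical sketch: explicit item selection plus an and/or rule,
    # instead of an implicit AND over all listed testcases
    - testplan:
      - name: LAMP
      - item-filter:        # the plan only applies to these items
        - mariadb
        - httpd
      - rule:
        - and:
          - rpmlint
          - depcheck
          - or:             # either of these passing is enough
            - foo
            - bar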
 
>    Is there going to be any additional grouping (for example, based
> on arch) or some kind of more precise outcome aggregation (only warn
> if part of the testplan is failing, etc.)?

Maybe but I think those features can be added later. Are you of the
mind that we need to take those things into account now?


I don't really think that they can. Take a simple "gating" dashboard for example. There is a pretty huge difference between "package passes if rpmlint, depcheck and abicheck pass on it" and "package passes if rpmlint, depcheck and abicheck pass for all the required arches". And I'm certain we want to be able to do the latter. It is not really a "pass" when rpmlint passed on ARM, depcheck on x86_64 and abicheck on i386, but all the other combinations failed.

It might seem like unnecessarily overcomplicating things, but I don't think that the dashboard-generating tool should make assumptions (like that grouping by arch is what you want to do) - it should be spelled out in the input format, so that as much of the black box as possible is removed.
Will it take more time to write the input? Sure. Is it worth it? Absolutely.
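For instance, something like this (just a sketch - the group-by, require and pass-when keys are hypothetical, nothing like them exists in the yamlspec yet):

    # hypothetical sketch: the per-arch requirement is spelled out
    # in the input instead of being assumed by the tool
    - testplan:
      - name: gating
      - items:
        - mariadb
      - group-by: arch          # evaluate the plan per (item, arch) pair
      - require:
        - and:
          - rpmlint
          - depcheck
          - abicheck
      - pass-when: all-arches   # only a "pass" if every required arch passes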

 
> * Are we going to generate the dashboard for the latest results only,
> or/and some kind of summary over given period in history?

For now, the latest results. In my mind, we'd be running the dashboard
creation on a cron job or in response to fedmsgs. At that point, we'd
date the generated dashboards and keep a record of those without
needing a lot more complexity.

The question here is "what are the latest results"? Do we just take now minus a month for the first run, and then "update" on top of that? I would not necessarily have a problem with that, it's just that we most definitely want to capture _some_ timespan, and I think this is more a question of "what timespan it is".
If we decide to go with "take the old state, apply updates on top of it", then we will (I think) pretty quickly arrive at a point where we mirror the data from ResultsDB, just in a different format, stored in a document store instead of a relational database. I'm not saying it's a bad or wrong thing to do. I actually think it's a pretty good solution - better than querying increasingly more data from ResultsDB anyway.
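And once again, I'd rather see the timespan spelled out in the input than assumed by the tool - something like this (completely hypothetical keys, just to illustrate the question):

    # hypothetical sketch: make the timespan and the update strategy explicit
    url: link.to.resultsdbapi.org
    timespan: 30 days           # how far back the "latest results" reach
    update: incremental         # apply new results on top of the stored state,
                                # as opposed to re-querying everything each run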