New report and guide in openscap 1.1.0

Greg Elin gregelin at gitmachines.com
Tue Sep 2 17:12:14 UTC 2014


Thanks, Martin.

I will look at the XSLT.

Helpful to know that only a subset of OVAL objects is supported -- that explains why the error details are sometimes missing.

Glad to hear the XSLT was cleaned up. It was pretty complex.


Greg Elin
P: 917-304-3488
E:  gregelin at gitmachines.com

Sent from my iPhone

> On Sep 2, 2014, at 12:22 PM, Martin Preisler <mpreisle at redhat.com> wrote:
> 
> ----- Original Message -----
>> From: "Greg Elin" <gregelin at gitmachines.com>
>> To: "SCAP Security Guide" <scap-security-guide at lists.fedorahosted.org>
>> Sent: Tuesday, September 2, 2014 5:21:23 PM
>> Subject: Re: New report and guide in openscap 1.1.0
>> 
>>> On Mon, Sep 1, 2014 at 5:32 AM, Martin Preisler <mpreisle at redhat.com> wrote:
>>> 
>>> ----- Original Message -----
>>>> From: "Greg Elin" <gregelin at gitmachines.com>
>>>> To: "SCAP Security Guide" <scap-security-guide at lists.fedorahosted.org>
>>>> Sent: Friday, August 29, 2014 11:18:37 PM
>>>> Subject: Re: New report and guide in openscap 1.1.0
>>>> 
>>>> I agree with Trey. The details would be more useful if they contained
>>>> more
>>>> information about how something failed. I would expect that in a detail.
>>> 
>>> As I said in my previous email, we show these details. We always have!
>>> 
>>> They were not present in one of the previous *draft* versions of the
>>> XCCDF report. When I released that draft I explicitly said that the
>>> check details were not there yet but would be in the final version.
>>> Are you sure you are looking at the new report?
>> 
>> Thanks for the prodding. I looked more closely last night. At least two
>> of us were confused by this.
> 
> Many more people are confused by this AFAIK :-)
> 
>> AFAIK, OVAL results only show up if the `--oval-results` flag is added
>> to the `oscap xccdf eval` command. It seems to me specific errors should
>> be shown by default, with a flag used to hide them. Yes, it is in the
>> man page, and of course we all read the man page... (But I guess that is
>> a comment for the OpenSCAP list?)
> 
> Yes, that is the case. We are planning to fix this with Result DataStreams
> at some point in the future but it's not in git yet.
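> 
> For reference, a typical invocation with that flag looks something like
> the following (the data stream file name and profile ID here are only
> illustrative, not taken from this thread):
> 
>     oscap xccdf eval \
>         --profile xccdf_org.ssgproject.content_profile_common \
>         --oval-results --results results.xml --report report.html \
>         ssg-rhel6-ds.xml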
> 
>> OVAL details show up for only some failed tests in the sample report
>> (https://mpreisle.fedorapeople.org/openscap/1.1.0_xslt/report.html).
>> "package_aide_installed", for example, does not show any OVAL details
>> even though it fails, while "service_atd_disabled" does show OVAL
>> results.
> 
> Our OVAL details implementation supports a subset of all possible OVAL
> objects. I think we support most of the use-cases where you really need to
> know which item is the culprit.
> 
> We may support more at some point in the future. Patches are always welcome.
> 
>> [snip]
>> 
>> It seems OVAL details show up when there are specific "items violating."
>> That is _incredibly_ helpful for a test like "rpm_verify_permissions". It's
>> awesome.
>> 
>> It seems OVAL details do not show up if the control just fails. That is
>> confusing. Didn't an OVAL criterion fail? Which one? Maybe there is more
>> than one criterion for a given test? The fact that OVAL details
>> sometimes show up and sometimes don't seems like yet another secret,
>> undocumented detail that is obvious to those in the know, but confusing
>> to those who aren't. At the very least, there should always be an
>> explanation of the OVAL detail even if it is "OVAL failed without any
>> detail. Here is the criterion that failed: "
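>> 
>> (To illustrate the "more than one criterion" point: an OVAL definition
>> combines one or more <criterion> elements under a <criteria> operator.
>> Abbreviated, with made-up IDs and comments:
>> 
>>     <definition class="compliance" id="oval:example:def:1" version="1">
>>       <criteria operator="AND">
>>         <criterion test_ref="oval:example:tst:1"
>>                    comment="package aide is installed"/>
>>         <criterion test_ref="oval:example:tst:2"
>>                    comment="aide is scheduled to run periodically"/>
>>       </criteria>
>>     </definition>
>> 
>> A bare "fail" on the definition does not tell me which of those criteria
>> was violated.)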
> 
> When we don't show anything in OVAL details it most likely means we don't
> support details for that particular OVAL object.
> 
>>>> I'd be OK with the default report ONLY showing details for the
>>>> severity=high items that failed, with all other details going into
>>>> a second report.
>>> 
>>> We show check system details for all failed rules, always. This works for
>>> both OVAL and SCE checks.
>> 
>> What I was trying to get at is that I think fails that are high severity
>> should be treated differently in the report than those of medium or low
>> severity. For example, they could have their own section, or -- as I was
>> musing -- be the only fails included in the report.
> 
> I wanted to implement some sort of a "remediation priority list" that the
> report would generate for users on demand. You could print it and go
> through it in order. However, I didn't have enough time to implement this
> in the end.
> 
> If there is customer demand it may happen in the near future.
> 
>> [snip]
>> 
>>> Are you sure you want detailed reports of vulnerabilities on your
>>> infrastructure to end up on the "WEB"? One of our goals is to keep the
>>> reports and guides self-sufficient. We do not want to rely on remote tools.
>>> That's why we bundle even all the JavaScript and CSS.
>> No, I do not want results of what controls *my* system passed or failed
>> posted on the web.
>> 
>> I am speaking about the generic content that is part of each control
>> ending up on the web: rationale, links, related identifiers, remediation
>> scripts, etc.
>> 
>> The most important information in the results of my scan is the details
>> about my system and my scan results. Generic information is _generic_.
>> I only want that generic information co-located with details when it
>> specifically helps me take an action.
> 
> This is a general feature/issue with XCCDF and/or Source/Result DataStreams.
> Sorry, but we can't fix this in the report without making proprietary extensions.
> 
>> Maybe what I am driving at as a general theme here is the difference
>> between an "Action/Working Report" optimized to help me get to work
>> resolving issues and a "Complete Report" that represents an artifact
>> capturing a less transitory state.
>> 
>> I'll try to mockup what I mean.
> 
> Maybe consider patching the XSLT directly. I have restructured it and it's
> much easier to follow how it works now.
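> 
> For example, a minimal standalone stylesheet (assuming XCCDF 1.2
> namespaces and a plain XCCDF results file -- adjust the namespace to
> whatever you are actually transforming) that lists only the failed
> high-severity rules could start out like:
> 
>     <?xml version="1.0"?>
>     <xsl:stylesheet version="1.0"
>         xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
>         xmlns:xccdf="http://checklists.nist.gov/xccdf/1.2">
>       <xsl:output method="text"/>
>       <!-- print the idref of every failed high-severity rule-result -->
>       <xsl:template match="/">
>         <xsl:for-each select="//xccdf:rule-result[xccdf:result = 'fail'
>                               and @severity = 'high']">
>           <xsl:value-of select="@idref"/>
>           <xsl:text>&#10;</xsl:text>
>         </xsl:for-each>
>       </xsl:template>
>     </xsl:stylesheet>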
> 
>> [snip]
>> 
>> By static details I am referring to information in the SSG that does not
>> change from system to system or from scan result to scan result.
>> 
>> I feel I should say a bit more about this because the above may seem
>> contradictory.
>> 
>> It might seem that on one hand I am suggesting more information be added
>> to the report to make it easier for beginners (e.g., include OVAL
>> details on all failed tests).
>> 
>> On the other hand, it might seem I am suggesting less information be
>> included in the report (e.g., removing generic static details, or
>> putting the details of medium- and low-severity fails in a second
>> report).
>> 
>> I am arguing for clarity and a less cognitively taxing presentation that
>> does way better than most other scan reports out there.
>> 
>> Only some fails having an OVAL detail section seems ambiguous.
>> Meanwhile, including all explanatory detail for every item often seems
>> extraneous.
>> 
>> If I knew the perfect algorithm, I would share it. I do know that what I
>> want to read most of all in the report is: "The control ____ failed
>> because __________", and to be able to follow a link to get more detail
>> if I happen to need it.
> 
> OK, this clarification helped a lot. The only superfluous detail I know
> about is the rule description, and we only show that in the modal dialog.
> Titles, IDs, references, identifiers, remediation fixes, severity, ...
> were all feature requests.
> 
>> I think I really want a simple report that lists what failed and why, and
>> then a second report (or hidden second half of the report) that has way
>> more details.
> 
> Having multiple reports adds maintenance costs. But maybe we can hide this
> additional information under a [+] button or something like that.
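> 
> Something as small as this plain-DOM sketch (just an idea, not how the
> report necessarily does or will do it) would be enough:
> 
>     <a href="#" onclick="var d = document.getElementById('extra-details');
>         d.style.display = (d.style.display == 'none') ? '' : 'none';
>         return false;">[+] more details</a>
>     <div id="extra-details" style="display: none">
>       verbose rule details would go here
>     </div>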
> 
> -- 
> Martin Preisler
> -- 
> SCAP Security Guide mailing list
> scap-security-guide at lists.fedorahosted.org
> https://lists.fedorahosted.org/mailman/listinfo/scap-security-guide
> https://github.com/OpenSCAP/scap-security-guide/

