Configuration testing vs Forensic testing

Jan Lieskovsky jlieskov at redhat.com
Wed Dec 17 13:24:32 UTC 2014


----- Original Message -----
> From: "Steve Grubb" <sgrubb at redhat.com>
> To: "Jan Lieskovsky" <jlieskov at redhat.com>
> Cc: "SCAP Security Guide" <scap-security-guide at lists.fedorahosted.org>
> Sent: Tuesday, December 16, 2014 5:20:23 PM
> Subject: Re: Configuration testing vs Forensic testing
> 
> Hello,
> 
> TL;DR - OVAL is limited in its capabilities. The prose must match what OVAL
> can do.
> 
> 
> On Tuesday, December 16, 2014 10:18:32 AM Jan Lieskovsky wrote:
> > First let me summarize that:
> > * it's great we agreed on the need to separate configuration vs
> >   runtime checks,
> >   we identified the areas which need fixing.
> > 
> > But the obvious question is what level of separation is required:
> > * 1) IOW should each existing rule be turned into a new group consisting
> >   of two rules - one for configuration testing, one for runtime testing?
> >   The description of the group would then be a more generalized form of
> >   the check, and each of the two new rules would be described according
> >   to the way it performs the check - IOW the configuration one would
> >   mention checking configuration files, while the runtime one would focus
> >   on checking the runtime state of the system,
> 
> Security guides are always about how to set the system up so that it boots
> into the correct configuration. Checking for deviations in enforcement is
> sometimes covered, but usually not. There is a little-used category of APT
> that is Tier III and mostly Windows content. This is the category where that
> kind of content belongs. IOW, it's not STIG or USGCB or PCI.
> 
> Content like a STIG or USGCB is supposed to be a baseline which is all about
> how the system boots up. Its content should be pretty slow moving. The APT
> category on the other hand is for faster moving guidance on new threats.
> 
> 
> > * 2) or is it sufficient to mention in the HTML version of the guide that
> >   the current implementation checks just the configuration status (AND the
> >   runtime state where appropriate), and basically make no changes to the
> >   current XCCDF / OVAL rules implementation,
> > * 3) another option / possibility (as pointed out by Simon Lukasik -
> >   thanks for it!) is the following - modify the current rules
> >   implementation so that the configuration tests remain the default ones
> >   (IOW when they don't pass, the check would fail) while the runtime
> >   checks become optional. The content user would be able, via e.g. an
> >   OVAL variable, to instruct the scanner what kind of testing should be
> >   performed.
> > 
> >   Example:
> > 
> >        "Check system property" rule
> >        if ($runtime_check) not set
> >        then
> >          check just configuration settings
> >        else
> >          check configuration settings
> >          check runtime settings
> >        fi
> > 
> >   And an analogous approach (the same global OVAL variable, customizable
> >   by the SCAP content user) would be used for all rules.
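> > 
> >   For illustration, a minimal shell sketch of what such a variable-gated
> >   check could look like (the RUNTIME_CHECK variable name and the
> >   kernel.randomize_va_space setting are placeholders only, not actual
> >   SSG content):
> > 
> >        #!/bin/sh
> >        # Configuration (on-disk) check - always performed
> >        grep -q '^kernel\.randomize_va_space[[:space:]]*=[[:space:]]*2' \
> >            /etc/sysctl.conf || exit 1
> > 
> >        # Runtime check - only performed when explicitly requested
> >        if [ "${RUNTIME_CHECK:-0}" = "1" ]; then
> >            [ "$(sysctl -n kernel.randomize_va_space)" = "2" ] || exit 1
> >        fi
> >        exit 0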
> 
> We should not be mixing the two use cases. STIG and USGCB should be the on-
> disk configuration.
> 
>  
> > * 4) another option (but maybe just an enhancement of case 1)) is to
> >   have two dedicated profiles for each of the existing ones (e.g.
> >   USGCB-configuration and USGCB-runtime), each of them containing rules
> >   from the particular category.
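> > 
> >   For illustration, content consumers would then pick the variant at
> >   scan time by profile id (the file and profile names below are
> >   placeholders only):
> > 
> >        # configuration-only scan
> >        oscap xccdf eval --profile usgcb-configuration ssg-rhel6-xccdf.xml
> >        # configuration + runtime scan
> >        oscap xccdf eval --profile usgcb-runtime ssg-rhel6-xccdf.xml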
> 
> This would be better. But the Forensic case is not an immediate goal or need.
> It's more in the nice-to-have category. In my view, the content right now
> should only be the on-disk configuration. The prose should reflect how to
> test manually in the same way as the SCAP scanner will. Meaning, if the OVAL
> check is a filecontent_test, then the prose should use cat + grep or awk.
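> 
> For example, if a hypothetical rule's OVAL check greps a PermitRootLogin
> line out of /etc/ssh/sshd_config, the prose would give the same on-disk
> inspection by hand (file and setting chosen here purely for illustration):
> 
>     grep -i '^PermitRootLogin' /etc/ssh/sshd_config
>     awk '/^PermitRootLogin/ { print $2 }' /etc/ssh/sshd_config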
> 
> The fact is that you cannot check the in-memory configuration via OVAL for
> several things. You can by going to XCCDF and using scripting instead of
> OVAL. But this is already a stretch. The intention is to use regular OVAL
> mechanisms and then make the prose reflect the same test that the scanners
> will perform.
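> 
> (As a rough sketch only - assuming OpenSCAP's Script Check Engine, which
> exports the XCCDF_RESULT_* exit codes to the script - such an XCCDF-scripted
> runtime check could look roughly like the following, here probing whether
> auditd is actually running:
> 
>     #!/bin/sh
>     # Hypothetical SCE-style check of the *running* state, not the config
>     if service auditd status >/dev/null 2>&1; then
>         exit "${XCCDF_RESULT_PASS:-0}"
>     fi
>     exit "${XCCDF_RESULT_FAIL:-1}"
> 
> But, as said, that is already a stretch beyond plain OVAL.)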
> 
> 
> > * 5) another option is to use "runtime / configuration" (or both of them)
> >   as a suffix in the rule title - so for example:
> >      -- the existing "Install Aide" rule would become "Install Aide (runtime)"
> > 
> >      meaning here just a runtime check would be performed, while e.g.
> > 
> >      -- the existing "Disable the Automounter" would become "Disable the
> >         Automounter (configuration, runtime)"
> > 
> >      meaning in this case both configuration & runtime checks would be
> >      performed.
> > 
> > In my opinion we first need to agree on the way the separation should be
> > performed so that:
> > * the separation is sufficiently clear for the content consumers,
> > * we don't need to change the approach during its implementation (while
> >   updating the actual state to reflect the expectations).
> > 
> > Should I vote for one of the aforementioned approaches, I would prefer
> > the global OVAL variable approach. E.g. the following:
> > 
> > * Update the existing XCCDF rules' descriptions to mention / describe
> >   only configuration checks,
> 
> The prose must match exactly how OVAL tests it. Otherwise you will get
> differences.
> 
> 
> > * Update OVAL checks to perform just configuration testing by default,
> 
> This is what they should be doing. Many of the issues I mentioned in the
> original email were because the prose did not match the OVAL checks. OVAL
> has limited capabilities. It cannot run auditctl or mount or any other
> external command. So, the prose needs to reflect this limitation and be
> accurate so that people without a scanner can test by hand and get accurate
> results. That is the main issue I was reporting.
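> 
> To make the contrast concrete (example commands only, not prescribed SSG
> wording): a runtime check would have to run the tools themselves, while the
> OVAL-compatible check reads the files those tools are configured from:
> 
>     # Runtime state - external commands, outside what OVAL can do:
>     auditctl -l
>     mount | grep ' /tmp '
> 
>     # On-disk configuration - what OVAL (and the matching prose) can test:
>     grep -v '^#' /etc/audit/audit.rules
>     grep '[[:space:]]/tmp[[:space:]]' /etc/fstab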

Thank you for the clarification, Steve. It's clear now.

Regards, Jan.
--
Jan iankko Lieskovsky / Red Hat Security Technologies Team

> 
> -Steve
> 

