On 03/14/2013 11:02 AM, Martin Preisler wrote:
> I do not subscribe to the theory that remote fetches are a potential
> risk. I will be happy to defend this carefree attitude vigorously
> and at length.
This is far worse than a potential risk; DNS poisoning is not just theoretical ;-). All
it takes is a one-line change to /etc/hosts, either on the DNS server that is used locally
or even on the machine itself. Or setting up a transparent reverse proxy somewhere in the
chain between you and the content provider. Then all compliance checks can pass forever
and ever without anyone noticing.
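To make that concrete: a single line in /etc/hosts (address and hostname here purely
illustrative) suffices to redirect every subsequent fetch:

    203.0.113.7    scap.nist.gov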
Indeed. These risks apply to any retrieval of information over a
network. My focus has been efficient access to compliance content within
an enterprise, in which such risks can be more easily addressed.
Apologies: I was unclear. I should have said "I do not subscribe to the
theory that remote fetches are an unmanageable risk". I remain carefree
and cheerful. Networks are difficult to do without, a prospect I do not
intend to explore.
It also creates a single point of failure. What if someone takes over
the domain that is used to fetch the files? What if the hosting provider is hacked? Do you
think it's impossible for NIST to have any of these problems? 
Yes. It does.
I have recently been hoist on my own petard of customarily using NIST as
a declared source of the XCCDF schemata (currently inaccessible on the
NIST site). I had not implemented something along the lines of the local
catalog sketched below in all venues until I started receiving
unpleasant notifications from schema checks. The problem has endured for
some days now.
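By way of illustration only (the paths and schema URL are merely representative), an
XML catalog resolving the remotely hosted schema to a local copy would have kept the
checks working through the outage:

    <?xml version="1.0"?>
    <!-- Resolve the NIST-hosted XCCDF schema to a local copy. -->
    <catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
      <uri name="http://scap.nist.gov/schema/xccdf/1.2/xccdf_1.2.xsd"
           uri="file:///usr/local/share/xml/xccdf/xccdf_1.2.xsd"/>
    </catalog>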
> Even if one warrants that remote fetches offer some risks, these
> could be obviated by providing a mode of operation for those who
> fear them, as has already been done for href attributes, and an
> obverse mode for those who embrace them.
I would dare say that people who do not fear those risks shouldn't be in this
business. After all, you are doing security compliance checking. If you are happy to let
external factors (your ISP, hosting provider, ...) affect your results, it makes no
sense to even check.
Well, I shall regardless keep cheerfully buggering on in this business.
I do not think external factors render this senseless.
The "externality" of such factors is diminished within an enterprise.
Unless one can produce one's own content, it will have to be retrieved
at some point from its origin. For example, I routinely obtain the most
recent updates to scap-security-guide and openscap from fedorahosted.org
using git. Obviating such risk in an enterprise environment could take
the form of trust in an RPM repository. Ultimately, one must choose what
risks to take. Digital signatures offer integrity assurance (such as it
is), which the relying party can trust as much as they choose.
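As a sketch of what "trust as much as they choose" might look like in practice (file
names hypothetical, and the signer's public key assumed to be in the local keyring
already):

    import subprocess

    # Refuse to evaluate content whose detached GPG signature fails.
    def signature_ok(content, sig):
        return subprocess.call(["gpg", "--verify", sig, content]) == 0

    if not signature_ok("ssg-rhel6-xccdf.xml", "ssg-rhel6-xccdf.xml.sig"):
        raise SystemExit("refusing to evaluate unverified content")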
It can also expose information about your internal infrastructure to the outside.
Yes. Assuming the information is allowed "outside".
Again, I think an enterprise could competently provide content for
itself without incurring many of the risks you itemize.
However, consider consumer devices for which no "enterprise" exists.
Information (e.g., firmware) must be obtained in a fashion that either
obviates or adequately mitigates such risks. All such information is
"external". Are not these risks simply displaced elsewhere, and how
shall they be addressed in that other place?
If, in an enterprise, I have a large number of systems on which I wish
to perform regular compliance checking, I would like to have a single
distribution point from which a definitive version of the content can be
obtained. I would like all systems to use the most recent version of the
content as soon as possible after any revision.
I could place it on a web server, or I could distribute it by some other
means. OpenSCAP currently expects it to be local to the system on which
the evaluation is to take place (or perhaps it can be on a networked
file system). How does it get there without incurring some or all of the
same risks mentioned? How is the overall risk diminished? I will warrant
that a comprehensive distribution scheme for any and all content (e.g.,
software, configuration files, security compliance checking instructions)
would force such risks into a single place where they could be
addressed, rather than in more than one place.
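A sketch of the web-server variant (URL and digest handling purely illustrative), with
integrity checked against a digest published out of band, so that the network need only
deliver the bytes rather than be trusted for them:

    import hashlib
    import urllib.request

    # Hypothetical single distribution point; the expected digest is
    # published through an independent channel.
    def fetch_verified(url, expected_sha256):
        data = urllib.request.urlopen(url).read()  # HTTPS certificate checks are the default
        actual = hashlib.sha256(data).hexdigest()
        if actual != expected_sha256:
            raise ValueError("digest mismatch: got %s" % actual)
        return data

Compromise of the distribution host alone then no longer silently corrupts results; an
attacker must also subvert the out-of-band channel.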
Is this the manner in which you envision such risk being addressed? À la
RPM? Using a single repository? Customary trust anchor management?
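If so, the moral equivalent of the following (repository name, URL, and key path all
hypothetical) would concentrate the risk in one well-studied place:

    # /etc/yum.repos.d/scap-content.repo
    [scap-content]
    name=Enterprise SCAP content
    baseurl=https://repo.internal.example.com/scap
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-enterprise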
> I will also (at the risk of citing a document for which I have
> misgivings) mention that NIST SP 800-126r2 §3.10 provides a method
> by which document integrity can be assured. Please do not read any
> other part of that publication. Prefer the prior version for now.
We are aware of this, and efforts have been made to make it work. Unfortunately, it gets
very complicated very quickly. There are countless ways to sign the content and countless
ways to check it.
While there are countless ways, there are fewer effective ways, and of
those a few sufficient ways. The presence of <fix> content boosts the
importance of such methods.
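For one of the "sufficient ways": §3.10 rests on enveloped XMLDSig signatures, and a
minimal check (tool choice and file names assumed here, with the signer's certificate
obtained and vetted beforehand) might be:

    import subprocess

    # Verify an enveloped XML signature on a data stream with xmlsec1.
    def datastream_ok(xml_path, signer_pem):
        return subprocess.call(
            ["xmlsec1", "--verify", "--trusted-pem", signer_pem, xml_path]
        ) == 0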
If, as you say, there is always risk when retrieving any information
over a network, digital signatures provide integrity, and encrypted
channels provide confidentiality. Availability can be adjusted to match
SLAs. I will warrant that some information will leak at network layer 3
(a scenario of less concern to enterprises which coerce such
distribution to occur solely within their own networks).
I won't bother to cover trust anchor management and all its problems at
this point.