So this already got committed. It may make more sense to generate the zipfile on local hosts as part of RPM %install, or in individual subproject Makefiles.
The relevant standards doc is: http://nvlpubs.nist.gov/nistpubs/ir/2013/NIST.IR.7511.pdf ...which does state that providing a zipfile is a reasonable way to provide content.
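A rough sketch of such a recipe, whether run from %install or a subproject Makefile (the archive name and file list are illustrative only, not what the build currently produces):

    # Illustrative only: collect the generated SCAP content into a single
    # zip archive for distribution; names and paths are guesses.
    cd output && zip -j ssg-content.zip ssg-*-xccdf.xml ssg-*-oval.xml ssg-*-cpe-dictionary.xml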
But more pressing is fixing the makewhatis build error. We could perhaps look at what real software developers (like OpenSCAP) do to package man pages. Or a short-term hack using a bash test, something like: if [ $EUID -eq 0 ]
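Spelled out, that guard might look like the following (a sketch only; the guarded makewhatis call is an assumption about what the Makefile would run):

    # Skip the whatis database rebuild for unprivileged local builds and
    # only attempt it when running as root (e.g. during package install).
    if [ "$EUID" -eq 0 ]; then
        makewhatis
    fi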
On 02/21/2013 01:28 AM, Jeffrey Blank wrote:
So this already got committed. It may make more sense to generate the zipfile on local hosts as part of RPM %install, or in individual subproject Makefiles.
The relevant standards doc is: http://nvlpubs.nist.gov/nistpubs/ir/2013/NIST.IR.7511.pdf ...which does state that providing a zipfile is a reasonable way to provide content.
Hello Jeff,
I failed to find this exact statement. Could you please help me locate it in that document, or in any other SCAP standard manual?
I am interested because I am looking into support for zip files in OpenSCAP (no promises, just looking into it).
Thanks!
-- Simon Lukasik, Security Technologies
On 03/11/2013 11:06 AM, Simon Lukasik wrote:
On 02/21/2013 01:28 AM, Jeffrey Blank wrote:
So this already got committed. It may make more sense to generate the zipfile on local hosts as part of RPM %install, or in individual subproject Makefiles.
The relevant standards doc is: http://nvlpubs.nist.gov/nistpubs/ir/2013/NIST.IR.7511.pdf ...which does state that providing a zipfile is a reasonable way to provide content.
Hello Jeff,
I failed to find this exact statement. Could you please help me locate it in that document, or in any other SCAP standard manual?
I just checked IR 7511 as well as all three versions of SP 800-126 and can find no normative citation.
IR 7511 contains only a single parenthetical reference to zip archives, in §3.3.1.
The NIST SCAP validation tool accepts both individual documents and ones collected into a zip archive.
NIST has customarily provided FDCC and USGCB content as zip archives. However, archives are not mandated by the relevant standards.
I am interested because I am looking into support for zip files in OpenSCAP (no promises, just looking into it).
What would be quite useful is to specify content using a URI with support for http, https, and file schemes (file being the default scheme for unadorned file names). While additional schemes besides those (perhaps zip or jar) might prove useful, my personal preference would be for http, https, and file.
On 03/11/2013 05:37 PM, Gary Gapinski wrote:
I am interested because I am looking into support for zip files in OpenSCAP (no promises, just looking into it).
What would be quite useful is to specify content using a URI with support for http, https, and file schemes (file being the default scheme for unadorned file names). While additional schemes besides those (perhaps zip or jar) might prove useful, my personal preference would be for http, https, and file.
oscap already supports remote URI if the URI is used within check-content-ref/@href. You can enable this functionality by --fetch-remote-resources command-line option.
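For example (the data stream and profile names below are placeholders):

    # Allow oscap to download OVAL content referenced by remote
    # check-content-ref/@href values during evaluation.
    oscap xccdf eval --fetch-remote-resources \
        --profile common \
        --results results.xml \
        ssg-rhel6-xccdf.xml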
However, I am afraid to allow remote URI even on the command-line. Some of the risks are discussed at https://fedorahosted.org/openscap/ticket/213 .
Hi,
[snip]
I do not subscribe to the theory that remote fetches are a potential risk. I will be happy to defend this carefree attitude vigorously and at length.
This is far worse than a potential risk; DNS poisoning is not just theoretical ;-). All it takes is a one-line change to /etc/hosts, either on the DNS server that is used locally or even on the machine itself. Or setting up a transparent reverse proxy somewhere in the chain between you and the content provider. Then all compliance checks can pass forever and ever without anyone noticing.
It also creates a single point of failure. What if someone takes over the domain that is used to fetch the files? What if the hosting provider is hacked? Do you think it's impossible for NIST to have any of these problems? [1]
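To make the /etc/hosts point concrete (the address and host name below are purely illustrative, taken from documentation ranges):

    # One line is enough to silently redirect content downloads for a
    # hypothetical distribution host to an attacker-controlled address.
    echo "203.0.113.7  content.example.org" >> /etc/hosts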
Even if one warrants that remote fetches offer some risks, these could be obviated by providing a mode of operation for those who fear them, as has already been done for href attributes, and an obverse mode for those who embrace them.
I would dare say that people who do not fear those risks shouldn't be in this business. After all, you are doing security compliance checking. If you are happy to let external factors (your ISP, hosting provider, ...) affect your results, it makes no sense to even check.
It can also expose information about your internal infrastructure to the outside.
I will also (at the risk of citing a document for which I have grave misgivings) mention that NIST SP 800-126r2 §3.10 provides a method by which document integrity can be assured. Please do not read any other part of that publication. Prefer the prior version for now.
We are aware of this and efforts have been made to make this work. Unfortunately it gets very complicated very quickly. There are countless ways to sign the content and there are countless ways to check. For now we recommend this: https://www.redhat.com/archives/open-scap-list/2012-December/msg00000.html
Although I do admit that I would prefer to have this in openscap itself and always enabled.
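Illustratively (a sketch only, not necessarily what the linked post describes; file names are placeholders), such an out-of-band check might be a detached GPG signature verified before evaluation:

    # Verify a detached signature before handing the content to oscap and
    # refuse to evaluate if verification fails; file names are examples.
    gpg --verify ssg-rhel6-xccdf.xml.asc ssg-rhel6-xccdf.xml \
        && oscap xccdf eval --profile common ssg-rhel6-xccdf.xml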
[1] http://www.theregister.co.uk/2013/03/14/us_malware_catalogue_hacked/
Hello, Martin:
On 03/14/2013 11:02 AM, Martin Preisler wrote:
Hi,
[snip]
I do not subscribe to the theory that remote fetches are a potential risk. I will be happy to defend this carefree attitude vigorously and at length.
This is far worse than a potential risk; DNS poisoning is not just theoretical ;-). All it takes is a one-line change to /etc/hosts, either on the DNS server that is used locally or even on the machine itself. Or setting up a transparent reverse proxy somewhere in the chain between you and the content provider. Then all compliance checks can pass forever and ever without anyone noticing.
Indeed. These risks apply to any retrieval of information over a network. My focus has been efficient access to compliance content within an enterprise, in which such risks can be more easily addressed.
Apologies: I was unclear. I should have said "I do not subscribe to the theory that remote fetches are an unmanageable risk". I remain carefree and cheerful. Networks are difficult to do without, a prospect I do not intend to explore.
It also creates a single point of failure. What if someone takes over the domain that is used to fetch the files? What if the hosting provider is hacked? Do you think it's impossible for NIST to have any of these problems? [1]
Yes. It does.
I have recently been hoist with my own petard of customarily using NIST as a declared source of the XCCDF schemata (currently inaccessible on the NIST site). I had not implemented something like https://github.com/GaryGapinski/sacm-xml-catalog in all venues until I started receiving unpleasant notifications from schema checks. The problem has endured for some days now.
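For reference, the local resolution looks roughly like this (the catalog path and schema location are illustrative):

    # Resolve the XCCDF schema through a local XML catalog so validation
    # does not depend on the NIST site being reachable; paths are examples.
    export XML_CATALOG_FILES="/usr/share/xml/sacm-xml-catalog/catalog.xml"
    xmllint --noout \
        --schema http://scap.nist.gov/schema/xccdf/1.2/xccdf_1.2.xsd \
        ssg-rhel6-xccdf.xml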
Even if one warrants that remote fetches offer some risks, these could be obviated by providing a mode of operation for those who fear them, as has already been done for href attributes, and an obverse mode for those who embrace them.
I would dare say that people who do not fear those risks shouldn't be in this business. After all, you are doing security compliance checking. If you are happy to let external factors (your ISP, hosting provider, ...) affect your results, it makes no sense to even check.
Well, I shall regardless keep cheerfully buggering on in this business. I do not think external factors render this senseless.
The "externality" of such factors is diminished within an enterprise.
Unless one can produce one's own content, it will have to be retrieved at some point from its origin. For example, I routinely obtain the most recent updates to scap-security-guide and openscap from fedorahosted.org using git. Obviating such risk in an enterprise environment could take the form of trust in an RPM repository. Ultimately, one must choose what risks to take. Digital signatures offer integrity assurance (such as it is), which the relying party can trust as much as they choose to.
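Concretely, that enterprise trust can be exercised in the usual way (the package name is illustrative):

    # Let RPM verify the digest and GPG signature of the content package
    # fetched from the internal repository before it is installed.
    rpm --checksig scap-security-guide-0.1-5.el6.noarch.rpm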
It can also expose information about your internal infrastructure to the outside.
Yes. Assuming the information is allowed "outside".
Again, I think an enterprise could competently provide content for itself without incurring many of the risks you itemize.
However, consider consumer devices for which no "enterprise" exists. Information (e.g., firmware) must be obtained in a fashion that either obviates or adequately mitigates such risks. All such information is "external". Are not these risks simply displaced elsewhere, and how shall they be addressed in that other place?
If, in an enterprise, I have a large number of systems on which I wish to perform regular compliance checking, I would like to have a single distribution point from which a definitive version of the content can be obtained. I would like all systems to use the most recent version of the content as soon as possible after any revision.
I could place it on a web server, or I could distribute it by some other means. OpenSCAP currently expects it to be local to the system on which the evaluation is to take place (or perhaps it can be on a networked file system). How does it get there without incurring some or all of the same risks mentioned? How is the overall risk diminished? I will warrant that a comprehensive distribution scheme for any and all content (e.g., software, configuration files, security compliance checking instructions) would force such risks into a single place where they could be addressed, rather than in more than one place.
Is this the manner in which you envision such risk being addressed? À la RPM? Using a single repository? Customary trust anchor management?
I will also (at the risk of citing a document for which I have grave misgivings) mention that NIST SP 800-126r2 §3.10 provides a method by which document integrity can be assured. Please do not read any other part of that publication. Prefer the prior version for now.
We are aware of this and efforts have been made to make this work. Unfortunately it gets very complicated very quickly. There are countless ways to sign the content and there are countless ways to check.
While there are countless ways, there are fewer effective ways, and of those a few sufficient ways. The presence of <fix> content boosts the importance of such methods.
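As one example of a sufficient way (a sketch only; the tool choice, certificate handling, and file names here are illustrative): an enveloped XML signature of the sort §3.10 describes can be checked with a generic XML-DSig tool such as xmlsec1 before the content is used:

    # Verify an enveloped XML signature embedded in the data stream,
    # trusting only the named certificate; names are illustrative.
    xmlsec1 --verify --trusted-pem signer-cert.pem ssg-rhel6-ds.xml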
If, as you say, there is always risk when retrieving any information over a network, digital signatures provide integrity, and encrypted channels provide confidentiality. Availability can be adjusted to match SLAs. I will warrant that some information will leak at network layer 3 (a scenario of less concern to enterprises which coerce such distribution to occur solely within the enterprise networks).
I won't bother to cover trust anchor management and all its problems at this time.
Regards,
Gary