On Tue, Mar 17, 2020 at 12:25 PM Jakub Hrozek jhrozek@redhat.com wrote:
On Tue, Mar 17, 2020 at 12:03:16PM +0100, Marek Haicman wrote:
Hello Jakub, thank you for the question - if I understand correctly, here are the scenarios that you anticipate:
- RHCOS (standard checks - there is currently no other checking of RHCOS outside OCP, AFAIK)
- kubelet on RHCOS
^^^^^^
btw the kubelet check was just something I used as an example of a check that is specific to OCP/k8s and will be implemented with the YAML probe, so it's sort of outside the usual OS-level checks. It's not the only one, just an example.
Oh yeah, I have used "kubelet checks" as a generic term for OCP specific checks :)
- RHEL8 standard checks
- kubelet checks on RHEL8
- RHEL7 standard checks
- kubelet checks on RHEL7
Now what are the options:
1. have everything in OCP4
2. RHCOS in OCP4, and the rest in the respective products, with profiles containing both FedRAMP Moderate and OCP specific checks
FedRAMP Moderate is a standard expressed with a profile here, right? Did you mean OS and OCP specific checks?
Yes, both OS and OCP specific checks
3. standard checks in the respective products, and kubelet checks in a separate application "product"

Option 1 is the most self-contained: only one data stream as a result and a simpler way to trigger a scan - you just scan, and the content knows what is applicable and what is not. At the same time, the content itself will be complex, and it will live independently of the OS product development.
The advantage I see here is that the maintenance complexity decreases. But I agree we might be just shifting the maintenance cost from profiles to rules.
You want to keep complexity away from OVAL, I'd say.
Option 2: the operator needs to know what is being scanned, to apply the correct data stream (as there are three of those).
It needs to know what is being scanned anyway. You provide the content profile in a scan definition. And the scan is per pool, i.e. per set of machines, so you'd have something like this:
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: example-compliancesuite
spec:
  scans:
    - name: rhel-workers-scan
      profile: foo-bar-baz
      contentImage: bar-baz-foo
      nodeSelector:
        node-role.kubernetes.io/rhel-worker: ""
    - name: rhcos-workers-scan
      profile: foo-bar-baz
      contentImage: bar-baz-foo
      nodeSelector:
        node-role.kubernetes.io/rhcos-worker: ""
Unless we go for option 1, the profile and contentImage would differ either way for each scan.
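(For illustration only - a rough sketch of how that could look under option 2, with per-OS profiles and content images; all profile and image names below are made up:)

apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: example-compliancesuite
spec:
  scans:
    - name: rhel-workers-scan
      profile: rhel8-moderate-profile       # hypothetical RHEL-8 profile id
      contentImage: rhel8-content-image     # hypothetical RHEL-8 content image
      nodeSelector:
        node-role.kubernetes.io/rhel-worker: ""
    - name: rhcos-workers-scan
      profile: rhcos-moderate-profile       # hypothetical RHCOS profile id
      contentImage: ocp4-content-image      # hypothetical OCP4/RHCOS content image
      nodeSelector:
        node-role.kubernetes.io/rhcos-worker: ""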
Results are still complete per node. Complexity is not so high, but every product contains parts that are dependent on kubelet checks - at least three places to record every change in the profile.

Option 3: the operator scans each node twice - once for the OS standard checks, a second time for the kubelet checks. This creates more complex result aggregation, but all (four) pieces are developed just once.
So we're going to have two scans either way: one that checks the node-level rules and another one that checks the cluster-level rules such as "Is log forwarding enabled for the cluster?". Do I read it correctly that you propose to split the node-level scans further into "Linux-specific" checks and "kubernetes client-specific" checks?
To be honest, I'm not sure if the versions of the agents or other kubernetes-specific stuff on the nodes would be the same across the cluster here, IOW if we can guarantee that the checks would be the same on the cluster. I suspect they will, though.
But the thing that I personally dislike here is that you would either have to define two scans per machine pool, or the scan would have to be able to consume two contents and launch the scans internally. Or the operator would have to be able to deduce the kubernetes-level content from the OS-level content.
This approach would be palatable if the k8s-level checks were the same across the cluster.
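(Again just a sketch, not the operator's actual usage - two scans per machine pool under option 3 could look roughly like this; scan, profile and image names are made up:)

apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: two-scans-per-pool
spec:
  scans:
    # OS-level checks for the RHEL worker pool (hypothetical names)
    - name: rhel-workers-os-scan
      profile: rhel8-moderate-profile
      contentImage: rhel8-content-image
      nodeSelector:
        node-role.kubernetes.io/rhel-worker: ""
    # kubelet/k8s-level checks for the same pool (hypothetical names)
    - name: rhel-workers-kubelet-scan
      profile: kubelet-profile
      contentImage: kubelet-content-image
      nodeSelector:
        node-role.kubernetes.io/rhel-worker: ""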
In case the k8s-level checks differ between OS versions, I'd say it's still better to keep them together in separate content (and have a bit more complex OVAL, or k8s profile) that gets updated when kubernetes updates, instead of having bits and pieces spread everywhere. The 2nd scenario means an update of OCP would result in the need to update all the content we have, because it would contain outdated pieces.
From my perspective, option 3 is the cleanest for the ComplianceAsCode project. It moves some of the complexity to the operator, so the CaC content does not have to bloat. And honestly - complexity is something we need to keep as low as possible if we want to move forward. Looking forward to hearing different perspectives.
Regards, Marek
On Tue, Mar 17, 2020 at 11:23 AM Jakub Hrozek jhrozek@redhat.com wrote:
Hi,
at the moment, we have a single content for OCP4:
https://github.com/ComplianceAsCode/content/blob/master/ocp4/profiles/modera...
all the rules currently target RHCOS, as the YAML probe usage is still being integrated into the compliance operator. But even at the OS or cluster node level, the rules are only applicable to RHCOS or RHEL-8 with the help of rules from this file:
https://github.com/ComplianceAsCode/content/blob/master/shared/checks/oval/i...
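(As an illustration of the idea, not the actual content: a rule's metadata in ComplianceAsCode can restrict where it applies, roughly along these lines; the field values here are made up:)

# hypothetical rule.yml fragment
title: 'Ensure the kubelet service is enabled'

# which products/OSes the rule is built into (illustrative list)
prodtype: ocp4,rhcos4,rhel8

# evaluated on the nodes rather than against the cluster API (illustrative)
platform: machine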
However, our customers would also run UPI-provisioned (User Provisioned Infrastructure) clusters with RHEL-7 or RHEL-8 as workers. Master nodes, or the control plane nodes, can only run RHCOS as the OS.
So the question is how do we go about the content? On one hand, even though RHCOS is sort-of-kind-of RHEL-8, the rules for RHCOS and RHEL-8 might differ. As an example, the way to install a required package would be different: on RHCOS you would use rpm-ostree, but on RHEL-8 you would use yum. Coming from a different angle, I don't think we can reuse the RHEL content either, because some rules are not applicable in the OCP content even though the OS is vanilla RHEL. As an example, you wouldn't configure bind for DNS in OCP, but you would use CoreDNS.
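(A rough sketch of how a single rule source could cover both cases - in ComplianceAsCode, package rules are typically templated, and the build system would generate the per-product check and remediation, e.g. rpm-ostree on RHCOS vs. yum on RHEL; exact fields may differ:)

# hypothetical rule.yml fragment for a shared "package installed" rule
title: 'Install the chrony package'

template:
    name: package_installed
    vars:
        pkgname: chrony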
Even though I don't know what the YAML checks would look like, I assume that there might also be checks about e.g. kubelet configuration (kubelet is the node agent that runs on each node in the cluster) that need to be run and evaluated on all nodes in the cluster, regardless of the node OS.
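(Purely illustrative: such a check might assert a hardening setting in the kubelet's configuration, which would look the same regardless of whether the node runs RHCOS or RHEL; the file path and setting below are just examples:)

# e.g. the kubelet config file on a node (path chosen for illustration)
# /etc/kubernetes/kubelet.conf
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
protectKernelDefaults: true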
With that in mind, what options do we have to deliver content that would both be applicable across RHCOS, RHEL-8 and RHEL-7 but also with the OCP use cases in mind? Should we fork the contents for each OS or try to reuse all the rules in a single content? How would reusing rules work in practice?