Rules like https://cisofy.com/lynis/controls/HRDN-7222/ make me think the whole thing is snake oil. There is zero security benefit to making publicly-available compilers not be world-readable.
> There is zero security benefit
I assume you don't work in security. The "HRDN" prefix means it's a hardening rule, and hardening is the practice of reducing the attack surface as much as you can, even for the craziest scenarios, like a normal user or a piece of malware downloading an exploit from exploit-db.com and compiling it without being root.
Preventing the compilation of code by arbitrary users is not harmful and reduces your attack surface.
Where does it say on that page that the hardening is not making them world-readable?
> If a compiler is found, execution should be limited to authorized users only (e.g. root user).
Unless you also mount some partitions noexec, making things not executable is useless. And if you have access to python/perl/ruby, you can construct any binary in memory anyway. And that's assuming someone is targeting some vulnerability chain that uses the compiler, which is a stretch anyway.
Rules like https://cisofy.com/lynis/controls/AUTH-9282/ are something that NIST calls outdated and dangerous password practice, but foreign security bodies mandate. Go figure.
Also, the suggestion from https://cisofy.com/lynis/controls/NAME-4404/ is just wrong on systems with nss_myhostname (from systemd) configured.
I've noticed that many ineffective and damaging security policies (mandating crowdstrike, increasingly arcane password requirements etc.) that businesses adopt seem to be implemented for "compliance" with ... what exactly? Sets of rules and regulations, apparently written by people who don't understand security, don't care about system reliability, availability, or usability, or have a business interest in dubious security solutions.
It's a chain. Take PCI DSS v4.0 for example.
Requirement 2.2.1 says: "Configuration standards are developed, implemented, and maintained to <...> Be consistent with industry-accepted system hardening standards or vendor hardening recommendations."
Then in the third column, it mentions explicitly: "Sources for guidance on configuration standards include but are not limited to: Center for Internet Security (CIS), International Organization for Standardization (ISO), National Institute of Standards and Technology (NIST), Cloud Security Alliance, and product vendors."
CIS, at least in the past, was a significant source of overzealous pseudo-hardening. Yet, that's what auditors' automated tools check compliance with, as that's the only configuration standard with a written procedure, often a command that can be copy-pasted, to check compliance with each rule. And I am not allowed to object to the recommendations or not follow the "best practices" because otherwise the next breach will be fully on me (in financial terms).
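For illustration, a typical CIS-style rule ("ensure permissions on /etc/passwd are 644 or more restrictive" -- paraphrased, not quoted from any benchmark) reduces to a mechanical check like this, which is exactly what makes it attractive to auditors' tooling:

```python
import os
import stat

# Check that /etc/passwd is no more permissive than rw-r--r-- (0644).
st = os.stat("/etc/passwd")
mode = stat.S_IMODE(st.st_mode)
extra_bits = mode & ~0o644  # any permission bit outside 0644
print(oct(mode), "PASS" if extra_bits == 0 else "FAIL")
```

Easy to automate, easy to audit -- whether or not the rule matters for your threat model.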
Seems like a good thing. Anyone here has experience with this tool?
I just heard about this tool but someone else said it simply enumerates defaults already present in most distros.
I can tell you one thing that makes real changes to RHEL at least: the CIS Benchmark. It hardens your system by tightening file permissions and user logins, disabling old protocols, setting partition flags, and more.
But the best hardening imho doesn't follow any set standard; rather, it's application-dependent isolation using containers and MACs like SELinux and MCS (multi-category security).
https://docs.redhat.com/en/documentation/red_hat_enterprise_...
That is also why Lynis does not follow a specific set, but applies generic principles from multiple sources. Yes, some of the items may be default (now) in Linux distributions, but often they still aren't. For example, most systemd services could definitely use stricter defaults. The distribution typically doesn't make these changes, to avoid breaking things for the end-user. This is where Lynis comes in, being independent of any big commercial organization (yes, looking at you, Red Hat). Having worked on Lynis for 17 years now, I can say some things have definitely improved in Linux distributions, but there are still so many things that could be much better secured out-of-the-box.
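As a sketch of the stricter systemd defaults a distribution could, but usually doesn't, ship (the unit name is illustrative, and the directives are real systemd options, though which ones a given service tolerates has to be tested):

```ini
# /etc/systemd/system/myservice.service.d/hardening.conf
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
RestrictSUIDSGID=yes
```

`systemd-analyze security <unit>` scores a unit against roughly this set of sandboxing options.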
CIS itself may have good ideas, but the implementation is mostly bullshit. Compare the actual differences between a CIS Ubuntu Docker image and a plain one. There are three valid changes you can make by hand, and the rest is snake oil that makes the image larger as a bonus.
I wouldn't say it's mostly BS, it's mostly common sense stuff that distros should have done already.
I don't know about the Ubuntu CIS image but I had to go through the whole CIS PDF for a job once, and implement it all with Ansible on RHEL. I can guarantee that it makes useful changes, and it truly makes a difference to how you use the system.
But in general this type of hardening is mostly used to fulfill some contract, and it's designed around how Linux was used 20 years ago.
My personal preference is to 1) treat linux servers as appliances and stop letting people login, 2) use containers, MACs, MCS and other such isolation tailored for specific services, 3) network ACL and segmentation up the wazoo, 4) MFA access control and 5) encrypt all the things.
Doesn't offer much utility IMO, as most distributions come with secure defaults ootb these days. Unfortunately its checklist is not thorough enough to keep you ahead of the security curve.
Lynis author here. While some defaults have definitely become better, often due to the kernel itself being better protected, there is still a lot of room for improvement. The distribution often can't make things too strict, to prevent common issues. Also keep in mind that it is not just the OS itself, but especially the parts that get added over time (users, software, configuration file changes) that introduce the biggest flaws. The aim of Lynis is to do a regular health check, giving the sysadmin the chance to tighten things where needed or correct those things that got out of spec.
We are looking for something to run as part of our AMI/Docker testing that, as you say, stays fresh on standards (SOC 2/ISO, but ideally also FIPS). Any prefs?
This is great https://github.com/ComplianceAsCode/content
I use it for regular scanning, flagging potential issues, automatically making changes, aligning images to CIS Level 2, and for ongoing scanning to satisfy SOC2 auditors.
It's closer to checkbox compliance than to being effective. Sure, those checks may be interesting and point out some actual issues. But if you're given a choice, a short threat modelling session will have a much higher impact. Someone else brought up CIS here - it's in the same category, with counterproductive changes like installing an integrity checker and tcpwrapper inside Docker images.
Useful if you walk into an unknown environment; however, if you're standing up your own infra, any competent sysadmin doesn't need this.
If auditors are going to use this, it would benefit even the most competent sysadmin to know what it's gonna say. The average compliance analyst isn't going to understand why some enumerable risk isn't actually a threat because your threat model makes said issue impossible. Even if you can prove it, they're still gonna include it in their needless risk findings. I'd postulate (for fun) that the most competent sysadmins would be more likely to have that problem, because they've already identified it and are using it as a makeshift 'honeypot'.
That is exactly why Lynis was created: to make it easier for both a sysadmin and an auditor to validate things. At the same time, not every system needs the security level of a bank, so that is why it provides suggestions. Is something too strict for your needs? No problem, just disable the test. What I learned is that auditors and system administrators often like having an independent tool that helps to set some middle ground. The sysadmin benefits from a validation tool, while the auditor benefits from the fact that the sysadmin has the ability to validate their systems. IMHO that is better than auditors who force companies to use CIS benchmarks simply because that is what they found and thought was a good idea. Lynis does not enforce things, but allows both the sysadmin and the auditor to implement things in line with their risk level and risk appetite. Disclaimer: I'm the author of the tool.