Security Through Obscurity Is Not Bad

(mobeigi.com)

41 points | by mobeigi 2 hours ago

19 comments

  • dspillett 36 minutes ago
    Obscurity is not security.

    But it can add a bit of delay to someone breaking actual security, so maybe they'll hit the next target first as that is a touch easier. Though with the increasing automation of hole detection and exploitation, even that might stop being the case if it hasn't already.

    The biggest problem with obscurity measures IMO is psychological: people tend to assume that the measures⁰ are far more effective than they actually are, so they may put less effort into verifying that the real security is implemented properly.

    ----

    [0] like moving SSHd to a non-standard port¹

    [1] a solution that can inconvenience your users more than attackers, and historically (in combination with exploiting a couple of bugs) actually made certain local non-root credential scanning attacks possible if you chose a high port

  • AshamedCaptain 40 minutes ago
    The problem with this argument is that you can justify an infinite amount of crap with it, the security equivalent of cockroach papers, which people inevitably end up treating as real security.

    One example I remember is Pidgin storing its passwords in plain text in $HOME. They could have encrypted them with some hardcoded string, making a lot of people happy that they would no longer grep their $HOME and find their passwords right there. However, this would have had the side effect that people started dropping the ball: sharing their config files with others, forgetting to set up proper permissions on their $HOME, etc.

    In addition, these layers of obscurity are not overhead-free: they may complicate debugging, they may introduce dangerous dependencies, they may tie you to a vendor, they may reduce computing freedom (e.g. Secure Boot), etc.

    • vlovich123 34 minutes ago
      Why a hardcoded string and not a user-specific password the user sets for Pidgin? Then you’ve got real security, and even a password stored in the user’s keychain means the passwords are not trivially accessible.

      The whole point of defense in depth is that you use independent (non-collinear) layers of protection to raise the cost of an attack and reduce the blast radius of a successful one.
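
      For what it's worth, turning a user-chosen master password into a real encryption key is straightforward with a standard KDF; a minimal sketch (the salt handling and parameters here are illustrative, not anything Pidgin actually did):

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte encryption key from a user-chosen master password
    using PBKDF2-HMAC-SHA256, so no hardcoded string is involved."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

# The salt is random but not secret: store it alongside the ciphertext.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
```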

      • AshamedCaptain 31 minutes ago
        Pidgin predates keychains, but if I remember correctly you had the option to set a master password or to simply disable storing passwords, which were the only options that truly increased security. But most users would not do that (they want autologin for a reason), so the example still applies.

        (Note also most keychain implementations are not truly improving security in any way, but this is a separate topic)

    • 2OEH8eoCRo0 36 minutes ago
      > The problem with this argument is that you can justify an infinite amount of crap with it

      Does that make it wrong?

      • dspillett 34 minutes ago
        Not per se. But it does make it potentially dangerous thinking depending on how it is applied.
      • HeavyStorm 12 minutes ago
        Yes
  • thephyber 47 minutes ago
    > Security ONLY through obscurity is bad (Kerckhoffs's Principle).

    This is the crux of the article.

    (1) Kerckhoffs's Principle doesn’t say that. It says to design the system AS IF the adversary has all of the info about it except the secrets (encryption key, certificates, etc).

    (2) This rule is okay if you are the solo maintainer of a WordPress installation. It’s a problem if you work at a large company where part of the company knows the full intent of this, while the rest doesn’t know about the other layers of security BECAUSE of the obscurity layer. So it’s important to communicate that this is only a layer and shouldn’t replace any other security decisions.

  • catoc 57 minutes ago
    “Security through obscurity” has the connotation that it is the obscurity that achieves the security - which is bad.

    “Security including obscurity” is fine.

    • consumer451 30 minutes ago
      Yeah, I always thought that real security is priority #1. But, using convenient obscurity lowers the obvious attack surface to things like automated scanners, just a bit.
    • justonceokay 54 minutes ago
      Yes, it’s not that it’s bad; it just means you aren’t done yet.
  • linsomniac 36 minutes ago
    I get what this post is saying, but I'm going to push back that "security through obscurity" isn't just something that people parrot without understanding.

    Obscurity provides, effectively, no security. There may be other benefits to the obscurity, but considering the obscurity a layer of your security is bad. I hope we all agree that moving telnet to another port provides no security (it's easily sniffable, easily fingerprintable).

    If it provides another benefit, use it, but don't think there's any security in it.

    For ~30 years I've moved my ssh to a non-standard port. It quiets down the logs nicely, people aren't always knocking on the door. But it's not a component of my security: I still disable password auth, disable root login, and only use ssh keys for access. But considering it security is undeniably bad.
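
    For reference, the hardening described in this comment boils down to a few sshd_config directives (the port number here is arbitrary):

```
# /etc/ssh/sshd_config (sketch)
# Non-standard port: quiets the logs, but is not itself a security layer
Port 2222
# The actual security: key-only auth, no root logins
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```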

    • spacemule 9 minutes ago
      I would argue moving SSH to a non-standard port is security, but it's a different kind. By reducing the noise in logs, it reduces the workload on the human or agent reviewing the logs. So, you can detect an attack in progress or respond to an attack before it gets out of hand. With SSH on a standard port, the harmful malicious logs can blend in with the annoying malicious logs much better.
    • Aurornis 22 minutes ago
      > but I'm going to push back that "security through obscurity" isn't just something that people parrot without understanding.

      I disagree on this. It's right up there with "premature optimization is the root of all evil" on the list of phrases that get parroted by a certain type of engineer who is more interested in repeating sound bites than understanding the situation.

      You can even see it throughout this comment section: half of the top-level comments were clearly written by people who didn't even read the first section of the article and are instead arguing with the headline or what they assumed the article says.

    • elevation 25 minutes ago
      > But it's not a component of my security

      You may not see it as “security”, but any entity that is actively monitoring its logs benefits when false positives decrease. If I am dealing with 800 failed login attempts per minute, I cannot possibly investigate all of them. But if failed logins are rare in my environment, I may be able to investigate each one.

      Obscurity that increases the signal to noise ratio is a force multiplier for active defense.

    • vlovich123 31 minutes ago
      If port numbers were 64bit or 128bit, actually it would provide a meaningful amount of security through obscurity. Port numbers are easy to dunk on because it’s such a trivially small search space.
      • sudb 22 minutes ago
        Similarly, I've often flip-flopped on the safety of public API endpoints that are "protected" by virtue of no sitemap plus UUIDs in the URL path. I think the answer ultimately is that this is fine, so long as there's no way to enumerate the IDs in use?
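
        To put numbers on that: assuming random version-4 UUIDs, a quick back-of-the-envelope estimate (the guess rate and number of issued IDs are made up for illustration):

```python
import math

# A version-4 UUID carries 122 random bits (6 of the 128 are fixed by the spec).
random_bits = 122

# Expected time to guess any one of a million valid IDs,
# even at a billion guesses per second:
guesses_per_second = 1e9
ids_in_use = 1e6
seconds = (2**random_bits / ids_in_use) / guesses_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"~10^{math.log10(years):.0f} years")
```

        So without an enumeration path, guessing is hopeless; with one, none of this matters.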
  • Bender 1 hour ago
    Security through obscurity is NOT bad.

    Security ONLY through obscurity is bad (Kerckhoffs's Principle).

    Security through obscurity, as an additional layer, is good!

    I've been saying this ever since that phrase was coined. A layer or two of obscurity keeps a lot of noise out of the logs, reduces alert fatigue, cuts down on storage costs (especially if you're using Splunk as your SIEM), and makes targeted attacks much easier to detect. I will keep it.

    • mobeigi 1 hour ago
      Couldn't agree more, I have personally benefited from the additional layer and it irks me when people outright claim it has no value.
      • ithkuil 1 hour ago
        The informed claim is not that the obscurity layer has no value. Quite the contrary: it has so much value that it reduces the incentive to build proper security underneath, and thus once the obscurity layer is breached, the second line of defense is weaker.

        The argument is that it's much easier to secure proper key material than design and config information, which can often be leaked accidentally because it's directly handled by humans (employee onboarding, employee churn, etc.).

        • kstrauser 57 minutes ago
          That's an interesting way to describe it. It's kind of like the turn away from requiring regular password updates. On paper, password rotation is good. But when you consider its interaction with human psychology, the policy makes security worse by causing people to make bad decisions.
    • rcleveng 1 hour ago
      This sounds just like my thoughts on PostgreSQL's row-level security. As an additional layer it's good; as the only thing, watch out!
    • bee_rider 39 minutes ago
      It would be nice if there were no overlap between the terms for the operational things that help improve security (log reduction and other non-cryptographic methods of reducing admin fatigue) and the mathematical, cryptographic characteristics of the system.

      If the focus is on the latter, obscurity buys you nothing and adds complexity/distraction, which is bad. The former can be important though.

    • tokai 33 minutes ago
      >I've been saying this ever since that phrase was coined

      You have been alive since the 1880s?

  • caminante 46 minutes ago
    Regarding the Counter-Strike (game) example: there were already a lot of cheaters, and a cheater ecosystem that still exists to this day. I suspect Valve could address it if it wanted to, but the gameplay/development cost trade-offs don't justify it.

    Valve pivoted to server-side anti-cheat and toleration because someone probably did the math on max(profit) with lootboxes.

    • mobeigi 38 minutes ago
      Valve's VACnet solution is definitely interesting. It uses deep learning and runs server-side. It's hard to tell how effective it has been for them compared to traditional client-side detection systems; I don't imagine they'll share any results.

      The fact that it's completely hidden from cheat developers gives them a huge advantage, though. In the past, any client-side algorithm or detection method could be reverse-engineered by cheat developers and patched around before lunchtime. Now they're working against Valve completely in the dark.

  • nobrains 26 minutes ago
    My take: do proper security, but if you are short on time or resources, you can start with security through obscurity to block some percentage of attacks, and then add the proper security measures when you have the time and resources.
  • majorchord 39 minutes ago
    Couldn't one argue that a password is also obscurity? It's only secure until someone figures it out, just like a secret URL on a website.
  • josalhor 36 minutes ago
    I want to add that "obscurity" is ambiguous. Is changing the SSH port "obscurity"? Some may say yes, because you could find it by brute force. But a password with infinite attempts can also be brute-forced. Here, the defining factor of security is the maximum number of attempts (whether against ports, usernames, or whatever).
  • INTPenis 59 minutes ago
    I've been saying for years, it's one layer of security. That's undeniable.
    • Latty 53 minutes ago
      I'll push back on this: obscurity isn't a "free" layer of security, it has both security benefits and security costs.

      By having obscurity you lose another layer of security: public scrutiny. It's harder for security issues to persist when people can see them and point them out; more eyes mean more chances to catch problems.

      There is also a cultural component: having to lay out what you are doing publicly means you can't just think "no one will know", and let something slide, which pushes you towards better security practices.

      Of course, this doesn't mean obscurity is always the worse choice; there are times it will offer more than it costs. In open source projects in particular, the number of eyes on most code is often low enough that "many eyes" is a bit misleading. But presenting obscurity as a pure positive is wrong: it has a cost, even if you think it's worth paying in some cases.

  • dwa3592 37 minutes ago
    Security with layers of obscurity can be incredibly powerful, especially if you believe in counterintelligence. You sometimes want attackers to find the wrong key, because it leads to you collecting intelligence on them. But this increases the cost in time and infrastructure.
  • fortran77 54 minutes ago
    WordPress is a great example. He cites:

    > There is a long-standing security recommendation to change WordPress's default database table prefix to a random one. For example, wp_users becomes wp_8df7b8_users. This is often dismissed as "worthless" because it is security through obscurity.

    I found that just changing the default WordPress login URL from the usual wp-admin to anything else reduces by several orders of magnitude the number of scripts probing your site for the most common vulnerabilities, something that happens constantly, about once a minute, for any site on the web.
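
    The table-prefix change quoted above is a single setting in wp-config.php; a sketch using the article's example prefix (it must be set at install time, or existing tables need renaming to match):

```php
<?php
// wp-config.php: randomize the table prefix so canned attack scripts
// that assume wp_users / wp_posts exist miss their target.
$table_prefix = 'wp_8df7b8_';
```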

    • Fnoord 33 minutes ago
      Security through obscurity isn't security. It can be a method to reduce noise, but by doing so you also have fewer eyes watching. If you pay for a black-box pentest and the pentester doesn't find your OpenSSH server running on a different port, that doesn't tell you anything about the security of your OpenSSH server. In a white-box pentest, they'd know about it beforehand. So: do you want to test the security of your OpenSSH server, yes or no?

      There's a very simple method to reduce spam in OpenSSH server logs: whitelist the IPs of those who require access (these can be ranges, too) and centralize over a jumphost. Something like Shodan (and friends) would find your OpenSSH server on a different port anyway, but it wouldn't find it behind an IP whitelist. There is, for example, no valid reason for people in China or Russia to connect to your OpenSSH server. Why allow them to? Don't. I don't allow traffic from any IPs allocated to China or Russia, among a couple of other countries, and I don't feel like I am missing out.
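
      The whitelist-plus-drop approach can be expressed in a few firewall rules; a sketch in nftables syntax, where the allowed range 203.0.113.0/24 and jumphost 198.51.100.10 are placeholder documentation addresses:

```
# /etc/nftables.conf (sketch)
table inet filter {
    chain input {
        type filter hook input priority 0; policy accept;
        # SSH only from the trusted range and the jumphost
        tcp dport 22 ip saddr { 203.0.113.0/24, 198.51.100.10 } accept
        tcp dport 22 drop
    }
}
```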

      Another one is port knocking. But anyone with read access to the network between client and server can figure out the port-knocking sequence, including a hostile actor performing a MITM (with, for example, a rogue WiFi AP).

      So what happens is improper security (security through obscurity) means people don't apply real security measures (such as IP whitelisting). And that is why security through obscurity is bad.

      As for WordPress: a default installation is quite secure these days (and has been for at least 10 years). It's all the bells and whistles in the form of add-ons that are the culprit.

    • kortex 27 minutes ago
      This should be immediately intuitive to anyone who spends more than 5 minutes looking at the firewall traffic of anything public. 99.9% of bot requests aren't sophisticated penetration attacks; they are blasting all the low-hanging fruit: the common ports, the common WordPress endpoints, the common Bobby Tables-style SQL injections and XSS attacks.
    • pants2 44 minutes ago
      Nice. If you do the opposite of what WordPress does for security you're probably on the right track.
    • i_think_so 40 minutes ago
      Same thing as changing your ssh port to something random. It's a trade-off against the convenience of knowing that all of your servers are listening on port 22 and you won't need any customizations in scripts or whatnot. But there are ways to mitigate much of that.

      On the benefit side: mitigating most of the computational load, the log-analysis load, the how-much-are-the-baddies-poking-me-while-I-sleep load, etc. All of these together make changing such defaults a slam dunk IMO.

  • CM30 38 minutes ago
    Yeah, security through obscurity as part of securing a system is good. Security through obscurity as the only way of securing a system is not.

    Like, a lot of it comes down to 'high friction' vs 'low friction'. Obscurity means high friction. It means that the attacker needs to craft a specific solution for your site or system in particular rather than relying on an off-the-shelf solution to handle it all for them.

    For example, the article's point about changing the WordPress database prefix fits into this category perfectly. Does it really make things that much more 'secure'? No, of course not. But it does mean that automated scripts that just assume tables like wp_posts exist will fail. It means that an attacker can't just run any old WordPress hacking toolkit and watch it do its thing, they have to figure out what database prefix you're using first.

    Same with antispam solutions. The best way to stop spam is to make your site unique in some way: add some sort of challenge a new user has to overcome to use the site, like a question related to the topic, a hidden honeypot field they shouldn't be able to fill in, a script that measures how quickly they register, etc.

    This won't stop a determined spammer, but it will stop or delay bots and automated scripts that rely on the target system having the same behaviour across the board. The spammer has to specifically target your site in particular, not just every forum script running the same software.
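
    A minimal sketch of the honeypot-plus-timing check described above (the field name "website" and the two-second threshold are invented for illustration):

```python
import time

def looks_automated(form: dict, rendered_at: float, min_seconds: float = 2.0) -> bool:
    """Flag a submission as likely bot traffic using two obscurity tricks:
    a hidden honeypot field humans never see (any value means a bot filled
    it in) and a minimum delay between page render and submit."""
    if form.get("website"):  # honeypot input, hidden from humans via CSS
        return True
    if time.time() - rendered_at < min_seconds:
        return True  # submitted faster than a human could type
    return False
```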

    And much of society works this way to a degree. A federated or decentralised system (whether a social network or political movement) isn't technically harder to attack than a centralised one might be.

    But it is more work to attack it. If a government or company wants to censor Reddit or Discord or YouTube, they have one target they can force to censor information across the board. If they want to target the Fediverse or some sort of torrent based system, then they have to track down dozens of people and deal with at least some of those people refusing or taking it to court or being in countries that aren't under their control or whatever else.

    That's kinda what a good security through obscurity setup can be. You can't mass target everyone at once, you have to target different systems individually and spend more time and resources in the process.

    However, you still need real security measures there. Security through obscurity is like hiding a safe behind a painting. It'll stop casual attackers from finding it, but it won't stop a targeted attack on its own. You need a strong lock, materials that are difficult to drill through and the safe itself being difficult to remove from the wall too.

  • MagicMoonlight 24 minutes ago
    Just because you have a bunker, doesn’t mean you hand the enemy the plans.
  • locallost 30 minutes ago
    It's useless for the example given: obfuscating JavaScript no longer serves any purpose as protection when you can let AI analyze the code and/or, in this case, the API requests.

    I recently did use a variation of this type of security to prevent a malicious user from misusing our services. But I made a note, to myself and everyone else, that it was just a quick fix not guaranteed to work long term.

  • i_think_so 32 minutes ago
    I have always replied to colleagues who pooh-poohed "security through obscurity!" as if it were proof of ignorance or bad culture with "a password is just a string of obscure characters. ;-)"

    That's not a serious argument, of course. But consider how the spooks operate in the field. They employ all manner of obscure practices in an attempt to improve their security. Their intentional obscurity (AFAIK) is never allowed to unnecessarily complicate operational practices, which would introduce risk. And they've probably got a lot more theory and no-BS field testing behind their practices than we do.

    Maybe we should ask them for advice?

  • perching_aix 43 minutes ago
    Cryptography is "just" a mathematically sophisticated version of manufacturing obscurity, so that's missing the point a bit. Obscurity is just information asymmetry, which is the only way we have to "secure" anything. That quote is about all the other forms of manufactured obscurity not being anywhere near as rigorous, which should be obvious.
    • jrmg 28 minutes ago
      Don’t like that you’re getting downvoted here! This is a pet peeve of mine. All security is ‘security through obscurity’ when you get right down to it.

      Cryptography is just a collection of ‘obscure’ keys (and, arguably, algorithms) that someone nefarious has to guess or work out - or social engineer out of someone - to access data. They’re just really hard to guess or work out.

    • kortex 31 minutes ago
      Eh, the problem with that reasoning is one of extreme degree. The "obscurity metric" would be the surprisal associated with discovering the critical piece of info. A random port confers brute-force resistance of 2^16; at 1 ms per guess, that's about a minute. Brute-forcing a 128-bit key at the same rate takes about 10^28 years.

      It's like hiding your key under the mat vs hanging it on a limb of a specific tree only you know the GPS coordinates of. Both are "obscure". Huge difference in difficulty.
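
      The arithmetic behind those two figures, for anyone who wants to check (assuming the same one-guess-per-millisecond rate for both):

```python
import math

guesses_per_second = 1000  # one guess per millisecond, as above

# 16-bit port space: exhausted in about a minute.
port_seconds = 2**16 / guesses_per_second

# 128-bit key space at the same rate, expressed in years.
seconds_per_year = 365.25 * 24 * 3600
key_years = 2**128 / guesses_per_second / seconds_per_year

print(f"ports: ~{port_seconds:.0f} s, key: ~10^{math.log10(key_years):.0f} years")
```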