Linux CVEs, more than you ever wanted to know

(kroah.com)

92 points | by voxadam 1 day ago

9 comments

  • pedrozieg 14 hours ago
    CVE counts are such a good example of “what’s easy to measure becomes the metric”. The moment Linux became a CNA and started issuing its own CVEs at scale, it was inevitable that dashboards would start showing “Linux #1 in vulnerabilities” without realizing that what changed was the paperwork, not suddenly worse code. A mature process with maintainers who actually file CVEs for real bugs looks “less secure” than a project that quietly ships fixes and never bothers with the bureaucracy.

    If Greg ends up documenting the tooling and workflow in detail, I hope people copy it rather than the vanity scoring. For anyone running Linux in production, the useful question is “how do I consume linux-cve-announce and map it to my kernels and threat model”, not “is the CVE counter going up”. Treat CVEs like a structured changelog feed, not a leaderboard.
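
    To make the "structured changelog feed" idea concrete, here's a minimal sketch of one way to consume it. It assumes the Atom feed that lore.kernel.org (a public-inbox instance) serves at new.atom; the subject-line filtering is illustrative, not the kernel's official tooling:

        import urllib.request
        import xml.etree.ElementTree as ET

        # Assumption: public-inbox instances such as lore.kernel.org serve
        # an Atom feed of the newest messages at <list-url>/new.atom.
        FEED = "https://lore.kernel.org/linux-cve-announce/new.atom"
        ATOM = "{http://www.w3.org/2005/Atom}"

        with urllib.request.urlopen(FEED) as resp:
            tree = ET.parse(resp)

        for entry in tree.iter(ATOM + "entry"):
            title = entry.findtext(ATOM + "title", default="")
            link = entry.find(ATOM + "link")
            url = link.get("href") if link is not None else ""
            # Subjects name the CVE and the affected area; a real consumer
            # would fetch each mail body and compare the listed
            # affected/fixed versions against the kernels it actually runs.
            if title.startswith("CVE-"):
                print(title)
                print("   ", url)

    From there it's a short step to diffing the announced fixed versions against what each of your hosts actually runs.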

    • Sohcahtoa82 7 hours ago
      The problem I have is that the hyperfixation on CVE counts has turned the entire vulnerability management industry into Boy-Who-Cried-Wolf-as-a-Service.

      99% of CVEs are essentially unexploitable in practice. If you're just concerned about securing your web apps and don't use WordPress, the number of CVEs produced per year that you actually have to worry about is in the single digits, possibly even zero. Yet Wiz will gladly tell you about hundreds of CVEs living in your environment because it's been a month since you ran "apt upgrade".

    • elric 9 hours ago
      I recently attended a security training where the trainer had a slide showing how Linux has more CVEs per year than Windows. He used this as an argument that Linux is less secure than Windows. People lacking basic knowledge of statistics remain a problem. Sigh.
      • some_random 8 hours ago
        Unfortunately the security community is filled to the brim with incompetent schlubs chasing a paycheck and many of them find their place as trainers. Those who can't do, teach.
    • im3w1l 10 hours ago
      Well, consider this: two projects have the same number of actual security issues. One project is willing to say "this bug doesn't affect security" and to take accountability for that statement. The other is not. As a result, the former has a lower count and the latter a higher one. Which is better for a user who values security?

      As the actual number of issues is the same, you might say it doesn't matter, but I disagree. As a user, it is easier to deal with "here are the n issues" than with "here are m things, any n of which are real".

  • some_random 7 hours ago
    The point of the CVE system is to alert downstream users to security bugs. Giving Linux its own CNA has resulted in a deluge of reports to downstream users about bugs that are ultimately not security related, and in that respect Greg et al. have completely failed.
  • throw329084 1 day ago
    This blog post, brought to you by the man who wants to burn down the CVE system https://lwn.net/Articles/1049140/
    • accelbred 18 hours ago
      Just this past week I had to spend hours dealing with a fake CVE, opened 2 years ago against an open source dependency of our project, for a bug that amounts to "if you already have RCE, you can construct a malicious Java datatype and call this function on it to trigger a stack overflow". The GitHub thread on the lib is full of the maintainers having to deal with hundreds of people asking them for updates on an obviously fake CVE. Yet the CVE is still up and has not been deleted. And now I'm getting a request from a customer about fixing this vuln in our code because their CVE scanner found it.

      The CVE system is broken and its death would be a good riddance.

      • some_random 8 hours ago
        The CVE system isn't great but it's all we have and demanding its destruction because a CNA didn't do their job (just like the Linux CNA, I might add) is childish.
        • DeepYogurt 6 hours ago
          osv.dev exists and is worlds better
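
          For comparison, querying it is one documented POST away; a minimal sketch against the api.osv.dev/v1/query endpoint (the package and version here are just examples):

              import json
              import urllib.request

              # Documented OSV endpoint: POST a package+version, get back
              # the known vulnerabilities in a structured, versioned schema.
              req = urllib.request.Request(
                  "https://api.osv.dev/v1/query",
                  data=json.dumps({
                      "package": {"name": "jinja2", "ecosystem": "PyPI"},
                      "version": "2.4.1",
                  }).encode(),
                  headers={"Content-Type": "application/json"},
              )
              with urllib.request.urlopen(req) as resp:
                  for vuln in json.load(resp).get("vulns", []):
                      print(vuln["id"], "-", vuln.get("summary", ""))
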
    • TheDong 19 hours ago
      He's one of the many people who know the CVE system is thoroughly broken, in many ways.

      Please, tell me what issues you have with how the kernel does CVEs.

      • raesene9 16 hours ago
        Not OP, but if you're looking for information on why some people aren't keen on the kernel's approach to CVE management, https://jericho.blog/2024/02/26/the-linux-cna-red-flags-sinc... might be of interest.
      • thyristan 9 hours ago
        All the issues basically boil down to "nobody wants to do the busywork of CVE filtering, triage, rejections, changes".

        As a developer, kernel or otherwise, you get pestered by CVE hunters who create tons of CVE slop, wanting a CVE on their resume for any old crash, null pointer deref, out of bounds read or imaginary problem some automated scanner found. If you don't have your own CNA, the CVE will get assigned without any meaningful checking. Then, as a developer, you are fucked: Usually getting an invalid CVE withdrawn is an arduous process, taking up valuable time. Getting stuff like vulnerability assessments changed is even more annoying, basically you can't, because somebody looked into their magic 8ball and decided that some random crash must certainly be indicative of some preauth RCE. Users will then make things worse by pestering you about all those bogus CVEs.

        So then you will first try to do the good and responsible thing: Try to establish your own criteria as to what a CVE is. You define your desired security properties, e.g. by saying "availability isn't a goal, so DoS is out of scope", "physical attacker access is not assumed". Then you have criteria by which to classify bugs as security-relevant or not. Then you do the classification work. But all that only helps if you are your own CNA, otherwise you will still get CVE slop you cannot get rid of.

        Now imagine you are an operating system developer; things get even worse here. Since an operating system is commonly multi-purpose, you can't easily define an operating environment and desired security properties. Many kiosk systems will have physical attackers present, plugging in malicious hardware; Linux will run on those. Many systems will have availability requirements, so DoS can no longer be out of scope; Linux will run on those. Hardware configurations can be broken, weird, stupid and old; Linux will run on those.

        So now there are two choices. Either you severely restrict the "supported" configurations of your operating system, making it no longer multi-purpose. This is the choice of many commercial vendors, with ridiculous restrictions like "we are EAL4+ secure, but only if you unplug the network" or "yeah, but only opensshd may run as a network service, nothing else". Or you accept that there are things people will do with Linux that you couldn't even conceive of when writing your part of the code and introducing or triaging the bug. The Linux devs went with the latter: accept that everything that is possible will be done at some point. But this means that any kind of bug will almost always have security implications in some configuration you haven't even thought of.

        That weird USB device bug that reads some register wrong? Well, that might be physically exploitable. That harmless-looking logspam bug? It will fill up the disk and slow down other logging, so denial of service. That privilege escalation from root to kernel? No, that isn't "essentially the same privilege level, so not an attack" if you are using SELinux and signed modules like Red Hat derivatives do. Since enforcing privileges and security barriers is the most essential job of an operating system, bugs without a possible security impact are rare.

        Seen from the perspective of some corporate security officer, blue team or devops sysadmin, that's of course inconvenient: there is always only a small number of configurations they care about. Building a webserver has different requirements and necessary security properties than building a car. Or a heart-lung machine. Or a rocket. For their own specific environment, they would actually have to read all the CVEs with those requirements in mind, and evaluate each and every CVE for its specific impact on their environment. In those circles, there is the illusion that this should be done by the software vendors, because doing it themselves would be a whole lot of work. But guess what? Vendors either restrict their scope so severely that their assessment is useless to all but a very few users, or they are powerless because they cannot know your environment, and there are too many environments to assess them all.

        So IMHO: All the whining about the kernel people doing CVE wrong is actually the admission that the whiners are doing CVE wrong. They don't want to do the legwork of proper triage. But actually, they are the only ones who realistically can triage, because nobody else knows their environment.

        • SAI_Peregrinus 9 hours ago
          The CVE system would be greatly improved by a "PoC or GTFO" policy. CVSS would still be trash, but it'd simplify triage to two steps: "is there a proof-of-concept?" and "does it actually work?". Some real vulns would get missed, but the signal:noise ratio of the current system causes real vulns to get missed today so I suspect it'd be a net improvement to security.
          • thyristan 3 hours ago
            Maybe.

            But you cannot PoC most hardware- and driver-related bugs without lots of time and money. Race conditions are very hard to PoC, especially if you need the PoC to work on more than one machine.

            So while a PoC exploit does mean that a bug is worthy of a CVE, the opposite isn't true: one would overlook tons of security problems just because their discoverer wasn't able to get a PoC working. But it could be worth it, if only to keep the slop out.

    • DeepYogurt 21 hours ago
      To be fair, the CVE system can't even encode a version string.
      • spockz 15 hours ago
        Not sure whether this is a limitation of the scanning tooling or of the CVE format itself, but it also cannot express sub-packages. So if some Jackson-very-specific-module has a CVE, the whole of Jackson gets marked as impacted. Same with Netty.
  • 1vuio0pswjnm7 20 hours ago
    • 1vuio0pswjnm7 7 hours ago
      I don't use a popular browser to make TCP/UDP connections or HTTP requests over the internet.

      This group

      https://cabforum.org/about/membership/members/

      has no control over the software I use

      I believe I can do better checks on who "controls" a domain name than Let's Encrypt. If I am the CA, then I don't "trust" ad/tracking servers. But popular browsers do. Third-party CAs are happy to take money from the people behind the data collection, surveillance and ad services that have ruined the web.

      I don't find anti-HTTP commentary any more convincing than anti-HTTPS commentary. Each user is different and is free to choose; each is free to make their own decisions under their own circumstances.

      For many years, cr.yp.to was HTTP-only

      Popular browsers, TLS libraries and "Certificate Authorities" make heavy use of cryptography developed by the author of that site

      Generally anyone who uses Linux makes use of software developed by the author of this blog post

      Anyway, Tor is another TLS option besides using an archive

    • styanax 10 hours ago
      I'm mildly surprised GKH doesn't deploy SSL. In this day and age I just close the browser window when the http-only browser warning comes up and move on to something else.
  • letmetweakit 11 hours ago
    Shameless self-promotion: I just launched a website [1] that tracks CVEs per kernel version since 2.6.12. It makes use of the tools that Greg KH will probably talk about in his next blog posts.

    [1] https://www.kernelcve.com

  • paulryanrogers 1 day ago
    Looking forward to links to the posts in the series. This seems like a bit of a tease.
    • dredmorbius 1 day ago
      The 2nd 'graph of TFA links five talks on the topic, all within the past two years.
      • paulryanrogers 21 hours ago
        Perhaps I misunderstand, but aren't those far above the "So here’s a series of posts" and its bullet list?
        • dredmorbius 9 hours ago
          Fair point, and it seems that there is now a post in that series included.

          Greg KH may be editing-in-place, possibly with a public statement as a goad to himself to deliver on his promise.

  • crest 12 hours ago
    Is there a good public resource to figure out which parts of the kernel are the worst offenders? E.g., is it a DoS in a driver for some ancient 8-bit ISA card, or remote code execution via ICMP echo requests?
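
    Absent a curated dashboard, here's a rough self-service sketch. It assumes the kernel CNA's vulns.git data repository layout (cve/published/<year>/CVE-*.json) and the CVE JSON "programFiles" field listing affected source files; verify both against the actual repo before relying on the numbers:

        import collections
        import json
        import pathlib

        # Assumes: git clone https://git.kernel.org/pub/scm/linux/security/vulns.git
        ROOT = pathlib.Path("vulns/cve/published")

        counts = collections.Counter()
        for record_path in ROOT.glob("*/CVE-*.json"):
            record = json.loads(record_path.read_text())
            files = []
            for affected in record.get("containers", {}).get("cna", {}).get("affected", []):
                files += affected.get("programFiles", [])
            if files:
                # Bucket each CVE once, by the first affected file's top two
                # path components, e.g. "drivers/usb" or "net/ipv4".
                counts["/".join(files[0].split("/")[:2])] += 1

        for subsystem, n in counts.most_common(15):
            print(f"{n:6d}  {subsystem}")

    This only tells you where the bugs live, not their severity; answering the DoS-vs-RCE half of the question still means reading the individual records.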
  • loph 1 day ago
    [flagged]
    • tomhow 1 day ago
      Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

      https://news.ycombinator.com/newsguidelines.html

    • landr0id 1 day ago
      I'm surprised Firefox didn't warn me when I went to the page. Hostile telcos/MITMs waiting for HTTP traffic are a real-world way that nation-states deliver exploits.
      • vpShane 23 hours ago
        It did for LibreWolf, which is what I moved to from Firefox. Self-signed certs I'm down with; HTTP I'm not, and never will be, for any reason. Plain-text data transmission has no acceptable justification.
        • schmuckonwheels 23 hours ago
          You do realize self-signed certs are useless, could have been tampered with, and could have just as easily been created by a malicious actor?

          There's a reason most default self signed certs are called "snake oil".

          • vpShane 6 hours ago
            They're not useless. And I'm well aware of how MITM attacks work. Any unencrypted hops along the path from my VPN endpoint to the server can be, and are, viewed in plaintext. With a self-signed certificate I can choose whether to accept the certificate. I'm not arguing for using them; I'm saying I've moved on from HTTP, which is reasonable for me to do in today's 'get all of their data' age.
          • gldrk 21 hours ago
            You can pre-share the certificate out of band, or set up your browser to TOFU like SSH does. Then they are not useless and may be superior to PKI for certain threat models.
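
            A minimal sketch of the pre-shared variant, assuming server.pem was fetched out of band (the hostname is illustrative):

                import ssl
                import urllib.request

                # Trust exactly one certificate, obtained out of band
                # (e.g. copied over SSH). No public CA is consulted, so a
                # MITM needs the server's key, not a mis-issued cert.
                ctx = ssl.create_default_context(cafile="server.pem")
                with urllib.request.urlopen("https://example.internal/", context=ctx) as resp:
                    print(resp.status)

            curl --cacert server.pem https://example.internal/ is the shell equivalent.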
      • gldrk 21 hours ago
        PKI is basically powerless against nation states executing a targeted MITM attack. It does prevent them from passively snooping everything.
      • vhcr 21 hours ago
        You can enable it in the settings.
    • actionfromafar 1 day ago
      Posted on December 8, 2025 | Greg K-H

      It’s been almost 2 full years since Linux became a CNA⁰ (CVE Numbering Authority), which means that we (i.e. the kernel.org community) are now responsible for issuing all CVEs for the Linux kernel. During this time, we’ve become one of the largest creators of CVEs by quantity, going from nothing to number 3 in 2024 to number 1 in 2025. Naturally, this has caused some questions about both how we are doing all of this work and how people can keep track of it.

      I’ve given a number of talks over the past years about this, starting with the Open Source Security podcast¹ right after we became a CNA, then the Kernel Recipes 2024 talk “CVEs are alive, but do not panic”², then a talk³ at OSS Hong Kong 2024 about the same topic with updated numbers, a later talk at OSS Japan⁴ 2024 with more info on the same topic, and finally, for 2024, a talk with more detail⁵ that I can’t find the online version of.

      In 2025 I did lots of work on the CRA⁶, so most of my speaking⁷ this year has been about that topic, but the CVE assignment work continued on, evolving to meet many of the issues we hit in our first year of being a CNA. As that work is not part of the Linux kernel source directly, it’s not all that visible to the normal development process, except for the constant feed on the linux-cve-announce mailing list⁸. I figured it was time to write down how this is all now working, as well as a bunch of background information about how Linux is developed that is relevant to how we do CVE reporting (i.e. almost all non-open-source groups don’t seem to grasp our versioning scheme).

      There is an in-kernel document⁹ that describes how CVEs can be requested from the kernel community, as well as a basic summary of how CVEs are automatically assigned. But as we are an open community, it’s good to go into more detail about how all of us do this work: explaining how our tools have evolved over time and how they work, why some things are the way they are for our releases, as well as documenting a way that people can track CVE assignments on their own in a format that is, in my opinion, much simpler than attempting to rely on the CVE JSON format (and don’t get me started on NVD…).

      So here’s a series of posts going into all of this, hopefully providing more information than you ever wanted to know, which might be useful for other open source projects as they start to run into many of the same issues we have already dealt with (i.e. how to handle reports at scale):

          Linux kernel versions¹⁰, how the Linux kernel releases are numbered.
      (contents served over SSL, by virtue of YC)

      0: http://www.kroah.com/log/blog/2024/02/13/linux-is-a-cna/

      1: https://opensourcesecurity.io/2024/02/25/episode-417-linux-k...

      2: https://kernel-recipes.org/en/2024/cves-are-alive-but-no-not...

      3: https://www.youtube.com/watch?v=at-uDXbX-18

      4: https://www.youtube.com/watch?v=KumwRn1BA6s

      5: https://ossmw2024.sched.com/event/1sLVt/welcome-keynote-50-c...

      6: https://digital-strategy.ec.europa.eu/en/policies/cyber-resi...

      7: https://kernel-recipes.org/en/2025/schedule/the-cra-and-what...

      8: https://lore.kernel.org/linux-cve-announce/

      9: https://www.kernel.org/doc/html/latest/process/cve.html

      10: http://www.kroah.com/log/blog/2025/12/09/linux-kernel-versio...

  • 1970-01-01 1 day ago
    [flagged]
    • kvemkon 23 hours ago
      But how do you know that, if kroah.com used Let's Encrypt, it would belong to Greg K-H? What if his real website were, e.g., greg-k-h.com?
      • a99c43f2d565504 23 hours ago
        Right. Also, when it comes to the other aspects of TLS, such as preventing middlemen from making sense of what information flows between you and the server, what exactly is the threat in this case? I mean, it's a public blog post, which you simply ask to read and are served.
        • vpShane 23 hours ago
          It's not about the threat, it's about privacy. I understand your points, but to answer 'what is the threat in this case': I don't want to have to know; I've moved on from those worries. Always encrypt.
          • vhcr 21 hours ago
            What privacy? Whoever is watching your traffic can see that you accessed the website even with HTTPS, and they can guess with high accuracy which article you are reading based on the response size.
            • vpShane 5 hours ago
              Any hops along the path, and whatever they split off to, by whoever. And of course they can; even with HTTPS, the Client Hello is unencrypted.

              Unencrypted data transmission just isn't something I'm interested in, it being 2025.

    • schmuckonwheels 1 day ago
      Objectively better than serving 12MB of JavaScript slop, trackers, and "analytics" over HTTPS so you can share a recipe for flan.

      Greg K-H has more credibility than 99% of posters here.

      He's literally the #2 guy in the Linux world (behind Linus). What have you done?

      • vpShane 23 hours ago
        I enjoy this person's writings, and contributions. I am Linux's biggest fan and research cyber security daily.

        I would prefer https.

        • schmuckonwheels 23 hours ago
          I prefer a nice cappuccino, but sometimes all that's available is plain black coffee from the shared pot in the canteen (which someone could have tampered with).

          But we drink it anyway (at risk) because it's free.

          • rithdmc 9 hours ago
            "Quantum Insert" (packet injection) style attacks are easier without transport encryption.
      • 1970-01-01 23 hours ago
        You enumerated the security risks of clear text transmission over the Internet and everything came up green because the blogger works on Linux?
        • schmuckonwheels 23 hours ago
          If you are too afraid to click a cleartext HTTP link then don't; it's not for you. Just spare the rest of us the melodrama.

          While you are at it, better not ever update Debian or any number of other OSes because their updates are served over plain HTTP.

          • 1970-01-01 22 hours ago
            You almost had a great point here. If he began every blog rant with BEGIN PGP SIGNED MESSAGE and included a digital key somewhere secure, somewhere I could go and verify, just as Debian does with updates, I could maybe tolerate the cleartext. But he clearly didn't (pun alert!).
        • MobiusHorizons 19 hours ago
          Please don't get me wrong. I'm glad the world has mostly transitioned over to HTTPS, but what are you actually concerned about with reading a blog post over HTTP? If you had to log in or post form data, or hosted binaries or something I would get it. But what is wrong with reading an article in the clear? And how would SSL prevent that?