nvidia-smi hangs indefinitely after ~66 days

(github.com)

128 points | by tosh 3 hours ago

10 comments

  • foota 1 minute ago
    Wow, someone in the github comments[1] noticed that one of the bug numbers assigned internally for the issue matches, to the day, the number of days the driver would stay up.

    1: https://github.com/NVIDIA/open-gpu-kernel-modules/issues/971...

  • pajko 1 hour ago
    Timestamps should NOT be compared like this. This is exactly why time_before() and time_after() exist.

    https://elixir.bootlin.com/linux/v6.15.7/source/include/linu...
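
    For illustration, here is a minimal standalone sketch of the wraparound-safe comparison those macros implement (the general technique, not the kernel source itself):

        #include <stdint.h>
        #include <stdio.h>

        /* Wraparound-safe "did a come after b?" for a free-running 32-bit tick
         * counter, in the spirit of the kernel's time_after(): the unsigned
         * subtraction wraps modulo 2^32, and the sign of the result stays
         * correct as long as the two timestamps are less than half the
         * counter range apart. */
        static int tick_after(uint32_t a, uint32_t b)
        {
            return (int32_t)(b - a) < 0;
        }

        int main(void)
        {
            uint32_t before_wrap = 0xFFFFFFF0u;  /* just before overflow */
            uint32_t after_wrap  = 0x00000010u;  /* 32 ticks later, post-wrap */

            printf("naive:     %d\n", after_wrap > before_wrap);            /* 0 - wrong */
            printf("wrap-safe: %d\n", tick_after(after_wrap, before_wrap)); /* 1 - right */
            return 0;
        }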

  • wincy 2 hours ago
    Crazy. So if I understand correctly, something with B200s and NVLink is causing issues where, after 66 days and 12 hours of uptime, nvidia-smi and other jobs start failing and timing out; once you restart the cluster, it starts working again.

    They suspect jobs will work if you only use one B200, but one person power-cycled, so they weren't able to test it. Hopefully they won't have to wait another 66 days for further troubleshooting.

    • layla5alive 2 hours ago
      Some 32-bit counter somewhere used when in NVLINK overflows?
      • themafia 1 hour ago
        66 days + 12 hours is 5,745,600,000,000,000 ns. The log2 of this is 52.351...

        JavaScript and some other languages store numbers as IEEE 754 doubles, which have a 52-bit mantissa and can only represent integers exactly up to 2^53; beyond that, precision is silently lost.

        Curious.
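
        A quick check of that boundary with C doubles (the same IEEE 754 format JavaScript uses); note that the 66.5-day value in nanoseconds is still just below 2^53:

            #include <stdio.h>

            int main(void)
            {
                double limit = 9007199254740992.0;  /* 2^53: last point where every integer is exact */
                double ns    = 5745600000000000.0;  /* 66 days + 12 hours in ns, below 2^53 */

                printf("%.0f\n", limit);            /* 9007199254740992 */
                printf("%.0f\n", limit + 1.0);      /* 9007199254740992 - the +1 is lost */
                printf("%.0f\n", ns + 1.0);         /* 5745600000000001 - still exact */
                return 0;
            }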

        • loeg 47 minutes ago
          It's 32 bits of milliseconds, right? Hm, no, that would overflow much sooner (49.7 days).
          • oasisaimlessly 34 minutes ago
            It's a uint32_t of 750 Hz "jiffies", which does overflow at ~66 days.
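
            A back-of-the-envelope check of both figures (a sketch; 750 Hz is the tick rate cited here, not something verified against the driver):

                #include <stdio.h>

                int main(void)
                {
                    double ticks = 4294967296.0;  /* 2^32 */

                    /* 32 bits of milliseconds: the classic 49.7-day wrap. */
                    printf("1 kHz  : %.1f days\n", ticks / 1000.0 / 86400.0);

                    /* 32 bits of 750 Hz ticks: wraps after roughly 66 days. */
                    printf("750 Hz : %.1f days\n", ticks / 750.0 / 86400.0);
                    return 0;
                }
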
      • mook 1 hour ago
        Isn't a 32-bit counter 49 days? Assuming that one was counting milliseconds, at least.

        I only remember that because that's the limit for Windows 95…

        • repiret 1 hour ago
          100ns intervals. My favorite part of that story is how long it took after Windows 95 was released before anybody discovered the bug.
  • yoshicoder 1 hour ago
    I wonder if the process for debugging this is just to search for which power of 2, times a time unit, equals ~66 days.
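
    Something like this brute force would do it (a sketch; the candidate tick periods and counter widths are guesses, not anything taken from the driver):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            /* Guess-and-check: which (tick period, counter width) pairs wrap
             * after roughly 66.5 days? */
            const double target_days = 66.5, tolerance_days = 2.0;
            const double period_s[]  = { 1e-9, 100e-9, 1e-6, 1e-3, 1.0 / 1024.0, 1.0 / 750.0, 1.0 };
            const char  *label[]     = { "1 ns", "100 ns", "1 us", "1 ms", "1/1024 s", "1/750 s", "1 s" };
            const int    bits[]      = { 16, 24, 31, 32, 40, 48, 52, 53, 64 };

            for (int p = 0; p < 7; p++)
                for (int b = 0; b < 9; b++) {
                    double days = ldexp(period_s[p], bits[b]) / 86400.0;  /* period * 2^bits, in days */
                    if (fabs(days - target_days) < tolerance_days)
                        printf("%-8s ticks in %2d bits wrap after %.1f days\n",
                               label[p], bits[b], days);
                }
            return 0;
        }

    (With these candidates, the only hit is 1/750 s ticks in 32 bits, at ~66.3 days.)
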
  • userbinator 1 hour ago
    I think it's an overflow of a scaled counter.

    Also, who else immediately noticed the AI-generated comment?

  • nulone 1 hour ago
    NVLink postRxDetLinkMask errors show up right before the hang. Has anyone captured a bug report or stack trace while nvidia-smi is stuck to see what it's blocking on?
  • blackoil 2 hours ago
    *China-specific code leaked into mainline.
  • zeehio 1 hour ago
    66 days 14 hours and 24 minutes (66.6 days) would have been a far more diabolical hang...
  • grayhatter 2 hours ago
    A pet peeve of mine (along with people brigading on issues/threads, e.g. posting them to unrelated news sites... op....) is woefully incorrect language.

    > at day 66 all our jobs started randomly failing

    If there's a definable pattern, you can call it unpredictable, but you can't call it random.

    • toast0 1 hour ago
      IMHO, what they said means that on day 65 all jobs work; on day 66, jobs work or don't, seemingly at random.

      But what they seem to be indicating is that all jobs fail on day 66. There's no randomness in evidence.

    • paulddraper 2 hours ago
      Unexpectedly is probably what they meant
    • stevenhuang 1 hour ago
      It's from the perspective of not knowing anything about the issue. It would look like jobs failing randomly one day when everything was fine the day before. Not hard to understand.
    • JohnLeitch 1 hour ago
      Seems quite predictable, given that others in the bug report are encountering the same thing.
  • jorl17 1 hour ago
    This is only very tangentially related, but I got flashbacks to a time when we had dozens of edge/IoT Raspberry Pi devices with completely unupgradeable kernels and a bug that would make the whole USB stack shut down after "roughly a week" (7-9 days) of uptime. Once it shut down, the only way to fix it was a full restart, and, at the time, we couldn't really be restarting those devices (not even at night).

    This meant that every single device would seemingly randomly and completely break: touchscreen, keyboard, modems, you name it. Everything broke. And since the modem was part of it, we would lose access to the device, which was very hard to solve because maintenance teams were sometimes hours (& flights!) away.

    It seemed to happen at random, and it was very hard to track down because we were also gearing up for an absolutely massive launch (hundreds of devices, and then, a couple of months later, thousands) and had pretty much every conceivable issue thrown at us: faulty USB hubs, broken modems (which would also kill the USB hub if they pulled too much power), and I'm sure a bunch of other issues I've since forgotten.

    Plus, since the problem took a week to manifest, we couldn't really iterate on fixes quickly - after deploying a "potential fix", we'd have to wait a whole week to see whether it had actually worked. I can vividly remember the joy when I managed to get the issue to reproduce consistently within 2 hours instead of a week. I had no idea _why_, but at least I now had serviceable feedback loops.

    Eventually, after trying to mess with every variable we could, and isolating this specific issue from the other ones, we somehow figured out that the issue was indeed a bug in the kernel, or at least in one of its drivers: https://github.com/raspberrypi/linux/issues/5088 . We had many serial ports and a pattern of opening and closing them which triggered the issue. Upgrading the kernel was impossible due to a specific vendor lock-in, and we had to fix live devices and ship hundreds of them in less than a month.

    In the end, we managed to build several layers on top of this unpatchable ever-growing USB-incapacitating bug: (i) we changed our serial port access patterns to significantly reduce the frequency of crashes; (ii) we adjusted boot parameters to make it much harder to trigger (aka "throw more memory at the memory leak"); (iii) we built a system that proactively detected the issue and triggered a USB reset in a very controlled fashion (this would sometimes kill the network of the device for a while, but we had no choice!); (iv) if, for some reason, all else failed, a watchdog would still reboot the system (but we really _really_ _reaaaally_ didn't want this to happen).
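
    A minimal sketch of the kind of controlled userspace USB reset described in (iii), using Linux's USBDEVFS_RESET ioctl (the device path is an example; this is the generic technique, not necessarily their exact tooling):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/usbdevice_fs.h>

        /* Reset one USB device via its devfs node, e.g. /dev/bus/usb/001/004
         * (bus/device numbers come from lsusb). Typically needs root. */
        int main(int argc, char **argv)
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s /dev/bus/usb/BBB/DDD\n", argv[0]);
                return 1;
            }

            int fd = open(argv[1], O_WRONLY);
            if (fd < 0) {
                perror("open");
                return 1;
            }

            if (ioctl(fd, USBDEVFS_RESET, 0) < 0) {  /* ask the kernel to re-enumerate the device */
                perror("USBDEVFS_RESET");
                close(fd);
                return 1;
            }

            close(fd);
            return 0;
        }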

    In a way, even though these issues suck, it's when we are faced with them that we really grow. We need to grab our whole troubleshooting arsenal, do things that would otherwise feel "wrong" or "inelegant", and push through the issues. Just thinking back to that period, I'm engulfed by a mix of gratitude for how much I learned and an uneasy sense of dread (what if, next time, I can't figure it out?).

    • nomel 47 minutes ago
      Even National Instruments had this type of bug in their NI-VISA driver, which powers a good portion of the world's lab and test equipment. Every 31 days our test equipment would stop working, which happens to coincide with the overflow of one of the Windows timers. It was also one of the fastest bug-fix updates I ever saw after I reported it!