23 comments

  • thewonderidiot 8 hours ago
    Mike Stewart here! I led the restoration of the AGC documented on CuriousMarc's channel and co-administer VirtualAGC. There is a lot to unpack here.

    First: this is indeed a real bug in the AGC software. However, it did not go unnoticed for the whole program. It was discovered during level 3 testing of SATANCHE, a late development branch of the Command Module software COMANCHE. It was assigned anomaly number L-1D-02 and was fixed between Apollo 14 and 15. There are two known surviving copies of the L-1D-02 anomaly report:

    * https://www.ibiblio.org/apollo/Documents/contents_of_luminar...

    * https://www.ibiblio.org/apollo/Documents/contents_of_luminar...

    The fix described in the article is partially complete, but as noted in the anomaly report there's a little bit more to it. Rather than just adding the two instructions to zero LGYRO, they restructured the code a bit and also made it wake up pending jobs. You can compare the relevant sections of the Apollo 14 and Apollo 15 LM software here:

    * Apollo 14: https://github.com/virtualagc/virtualagc/blob/master/Luminar...

    * Apollo 15: https://github.com/virtualagc/virtualagc/blob/master/Luminar...

    The bug would not manifest silently in the way described in the article. For starters, LGYRO is also zeroed in STARTSB2, which is executed via GOPROG2 on any major program change: https://github.com/virtualagc/virtualagc/blob/master/Luminar...

    This means that changing from any program to any other program would immediately resolve the issue. This is almost certainly a large part of why it took them so long to notice. Hitting BADEND while actively pulse-torquing is quite rare, and avoided by normal procedure. The scenario presented in the article can't happen since the act of starting P52 will zero LGYRO.

    Moreover, in the very specific scenarios in which the bug can be triggered and remain, it results in multiple jobs stacking up attempting to torque the gyros. Eventually the computer runs out of space for new jobs -- similar to what happened on 11 -- and a 31202 (the Apollo 12+ equivalent of 1202) is triggered.
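
    If it helps to see the shape of the failure, here's a toy model (Rust for readability only; this is emphatically not how the Executive is structured, and every name in it is invented):

        // Toy model: LGYRO acts as a busy flag, and one exit path
        // forgets to clear it, so later jobs pile up in the job table.
        const MAX_JOBS: usize = 8; // stand-in for the small fixed pool of core sets

        struct Executive {
            jobs: Vec<&'static str>,
            lgyro_busy: bool, // the flag the buggy path fails to clear
        }

        impl Executive {
            fn schedule(&mut self, name: &'static str) {
                if self.jobs.len() == MAX_JOBS {
                    panic!("31202-style alarm: no room for new jobs");
                }
                self.jobs.push(name);
            }

            fn gyro_torque_job(&mut self, hit_badend: bool) {
                if self.lgyro_busy {
                    // Flag still set: the job re-queues itself and waits.
                    self.schedule("gyro-torque (waiting on LGYRO)");
                    return;
                }
                self.lgyro_busy = true;
                if hit_badend {
                    return; // the leak: this path never clears the flag
                }
                self.lgyro_busy = false; // the normal path clears it
            }
        }

    Once hit_badend fires while torquing, every later gyro_torque_job call lands in schedule, and the pool eventually fills.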

    Since the issue was found before the flight of Apollo 14, a further description of how it might occur and what the recovery procedure should be was added to the Apollo 14 Program Notes: https://www.ibiblio.org/apollo/Documents/LUM159_text.pdf#pag...

    Some other notes:

    > Ken Shirriff has analysed it down to individual gates

    I've done the bulk of the gate-level analysis. :)

    > the Virtual AGC project runs the software in emulation, having confirmed the recovered source byte-for-byte against the original core rope dumps.

    We've only been able to do that in very specific circumstances and only for subsections of assorted programs, but never for a full program. Most AGC software either comes from a program listing, from a core rope dump, or from reconstruction using changelogs and known memory bank checksums. We've disassembled all of the rope dumps into source files that assemble back into the same binary, but the comments and labels will be different from what was in the original listing. And to be extra clear: I've never had the opportunity to dump a module containing Apollo 11 software for either vehicle. Our sole source for both programs is a pair of printouts in the MIT Museum's collection.

    > Margaret Hamilton (as “rope mother” for LUMINARY) approved the final flight programs before they were woven into core rope memory.

    Jim Kernan was the rope mother for Luminary at least up through Apollo 11. Margaret was the rope mother for Comanche, the CM software, and was later promoted to lead the software division. Their positions at the time of 11 can be seen on this org chart: https://www.ibiblio.org/apollo/Documents/ApolloOrg-1969-02.p...

    > Their priority scheduling saved the Apollo 11 landing when the 1202 alarms fired during descent, shedding low-priority tasks under load exactly as designed.

    This is a huge topic on its own, but the AGC software was not designed to shed low-priority jobs. Ironically, the lowest priority job during the landing was the landing guidance itself, with high-priority jobs being reserved for things that needed quick response like antenna movements or display updates. If the computer were to shed the lowest-priority jobs, it would shed the landing guidance. This memo contains a list of all jobs active during the landing and their priorities: https://www.ibiblio.org/apollo/Documents/CherryApollo11Exege...

    > For example, the ICD for the rendezvous radar specified that two 800 Hz power supplies would be frequency-locked but said nothing about phase synchronisation. The resulting phase drift made the antenna appear to dither, generating roughly 6,400 spurious interrupts per second per angle and consuming roughly 13% of the computer’s capacity during Apollo 11’s descent. This was the underlying cause of the 1202 alarms.

    The frequency-lock prevents phase drift, so the phase is essentially fixed once the power supplies are up. Ironically, however, the bigger issue is that one reference was 28V while the other was 15V. Initial testing on actual Apollo hardware suggests that at least for Apollo 11, this voltage difference was the key contributor rather than the phase difference: https://www.youtube.com/watch?v=dT33c70EIYk

    • password4321 7 hours ago
      The front page has moved on but this is pure gold, thanks for making the time to share all these details.
    • replwoacause 3 hours ago
      One of the coolest replies I've ever seen on HN for sure. Thanks for taking the time to write this out!
    • jacquesm 6 hours ago
      I was hoping you'd comment here. Thank you. Amazing bits of lore.
  • ChicagoBoy11 18 hours ago
    For anyone who liked this, I highly suggest you take a look at the CuriousMarc YouTube channel, where he chronicles extensive efforts to preserve and understand the Apollo AGC, with a team of really technically competent and passionate collaborators.

    One of the more interesting things they have been working on is a potential re-interpretation of the infamous 1202 alarm. As of this writing, it is popularly described as something related to nonsensical readings from a sensor which could be (and were) safely ignored in the actual moon landing. However, if I remember correctly, some of their investigation revealed that there were actually many conditions under which that error would have been extremely critical and would likely have doomed the astronauts. It is super fascinating.

    • deepsun 18 hours ago
      And that's why it's harder (or easier?) to make the same landing again -- we're taking far fewer chances. Today we know of far more failure modes than they did back then.
      • kevin_thibedeau 15 hours ago
        They sent people up in a tin can with the bare minimum computational power to manage navigation and control sequencing. It was barely safer than taking a barrel over Niagara Falls. We do have much more capable and reliable technology.
        • jvm___ 15 hours ago
          Buzz Aldrin (?) was quoted as recalling holding a pencil inside the capsule as they were out in space and thinking "that wall isn't very thick or strong, I could probably jam a pencil through it pretty easily..."

          Death being a layer of aluminum away changes your mind.

      • wat10000 17 hours ago
        It's a miracle nobody died in flight during the program. Exploding oxygen tank, rockets shaking themselves to pieces during launch, getting hit by lightning on top of a flying skyscraper full of kerosene and liquid oxygen....
        • djmips 15 hours ago
          Gus Grissom, Ed White, and Roger Chaffee died on the Apollo program. I feel it's not polite to ignore that fact even if you add an 'in flight' qualifier.
          • vondur 7 hours ago
            And it's even more interesting given that our rocket program started with former rocket scientists from Nazi Germany who were brought over at the end of WW2 to work on the American rocket/missile program.
        • thinkingtoilet 17 hours ago
          Starting from the first test pilots, a lot of people died for us to get to the point of launching that flight. So while no one died on the flight, lots of people died getting us there. If I recall, in The Right Stuff it's mentioned that those early test pilots had something like a 25% mortality rate.
          • wat10000 16 hours ago
            The early jet age was pretty nuts. Check the Wikipedia page for a random fighter from the era and you'll see figures like, 1,300 built, 50 lost in combat, 1,100 lost in accidents. And that's operational aircraft. Test pilots were in even more danger.
            • Quinner 16 hours ago
              Some were pretty bad, but none were nearly that bad. The B-58 Hustler lost 22% of its airframes, the F7U Cutlass 25%, the F-104 Starfighter in German service lost 33%. And those were outliers.
              • wat10000 14 hours ago
                You're right, those numbers are from the F-8 but include non-total-loss accidents.

                I don't think the numbers you quoted are outliers, though. The F-100 lost ~900 out of 2,300. The F-106 lost ~120/342. That's a pretty big list of planes with a 1/5-1/3 loss rate.

            • jaggederest 15 hours ago
              You should go back even a little further, the USPS air mail service lost 31 of the first 40 pilots.
              • mrguyorama 14 hours ago
                Back in the days where the plan was "So we've built literal signal fires and giant concrete arrows and well, good luck, it won't help"
            • ErroneousBosh 12 hours ago
              Have you ever listened to Robert Calvert's "Captain Lockheed and the Starfighters"?
          • ErroneousBosh 12 hours ago
            Think about the "failure mode" of the aircraft that won World War II, the Supermarine Spitfire.

            There was a fuel tank mounted between the engine and cockpit so if it took enough of a hit to puncture right through (not hard, in practice) the failure mode was that the cockpit was now full of a 350mph jet of burning petrol.

            Still, it did the job.

    • russdill 15 hours ago
      "popularly described" and how it's currently understood are two different things. Because it's hard to explain to lay people, it's popularly described in a number of simplified ways, but it's well understood.
      • garaetjjte 9 hours ago
        Since we are on HN, I think it can be explained here (before it's all consumed by AI slop):

        For complex reasons, available CPU time during the landing was lower than expected (it was stolen by the radar-pointing peripheral). This caused a regularly scheduled job to spawn before the previous instance had finished, which had two effects: job instances were suspended mid-routine by new instances, and the piling up of old instances eventually exhausted resources and caused the kernel to panic and reboot. Rebooting during landing sounds scary, but it was actually fine: such critical tasks were specifically designed to restart automatically from previously saved checkpoint data in memory.

        What was more dangerous was the tasks suspended before the restarts occurred. First, it meant the routine wasn't executing to the end, which in the actual flight caused blanked displays (as updating the display was the last thing the routine did). With any more CPU time stolen, it could have been interrupted even earlier, e.g. before it sent the engine commands.

        Another issue is that under fluctuating load, new instances could actually run to the end, and then a previously suspended instance could be resumed, potentially sending stale data to the displays and engine.

        And finally, while each job instance had its own core set and VAC area properly managed by the kernel (think of a modern kernel switching between task stacks), that particular routine wasn't designed to be reentrant. It used various global variables ("erasables") for its own purposes, which, if it was interrupted in an unlucky place, might have caused very bad behavior.

        How likely all of the above is to occur depends on the exact profile of the fluctuating load caused by the confused radar peripheral. I guess that's why Mike Stewart is trying to replicate these issues with a real CDU.
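
        To make the reentrancy point concrete, here is a little sketch (Rust standing in for AGC assembly; the names are invented):

            // SHARED_SCRATCH plays the role of a shared erasable.
            static mut SHARED_SCRATCH: i32 = 0;

            fn servicer(input: i32) -> i32 {
                unsafe {
                    SHARED_SCRATCH = input * 2;
                    // Suspension point: if a newer instance of this same job
                    // runs here, it overwrites SHARED_SCRATCH before we resume...
                    SHARED_SCRATCH + 1 // ...and we read the newer instance's data
                }
            }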

    • nativeit 16 hours ago
      Related topic on CuriousMarc and co.’s AGC restoration: https://news.ycombinator.com/item?id=47641528
  • buredoranna 18 hours ago
    Still my all time favorite snippet of code.

        TC    BANKCALL    # TEMPORARY, I HOPE HOPE HOPE
        CADR  STOPRATE    # TEMPORARY, I HOPE HOPE HOPE
        TC    DOWNFLAG    # PERMIT X-AXIS OVERRIDE
    
    https://github.com/chrislgarry/Apollo-11/blob/master/Luminar...
    • shagie 9 hours ago
      It's referenced in The Codeless Code - https://thecodelesscode.com/case/234
    • f1shy 15 hours ago
      Cadr here has no relation to Lisp's cadr, right?
      • jasomill 14 hours ago
        Correct.

        CADR is an AGC assembly directive defining a "complete address" including a memory bank, in this case a subroutine to be called by the preceding BANKCALL (TC = transfer control, i.e., store return address and jump to subroutine), which switches to the memory bank specified in the CADR before jumping to the address specified in the CADR.

        For a brief explanation of AGC subroutine calls, see [1].

        CAR and CDR in Lisp come from the original implementation on the IBM 704, where pointers to the two components of a cons cell were stored as the (C)ontents of the (A)ddress and (D)ecrement fields of a (R)egister (memory word).

        (CADR x) is just shorthand for (CAR (CDR x)), i.e., a function that returns the second element of a list (assuming x is a well-formed list).

        [1] https://epizodsspace.airbase.ru/bibl/inostr-yazyki/American_...

    • donkeyboy 17 hours ago
      Can you explain this to me?
      • dylan604 16 hours ago
        I think the point was the comments, more than any of the code, requiring explanation. There's nothing more permanent than a temporary solution.
      • buredoranna 16 hours ago
        Wish I could... but I know of it from a previous HN post, where there is some discussion on its purpose.

        https://news.ycombinator.com/item?id=22367416

    • foxyv 13 hours ago
      I'm having a really bad Mandela effect right now where I remember some XKCD that wrote a poem about this. Maybe I'm thinking of another comic.
  • jwpapi 19 hours ago
    Has someone verified this was an actual bug?

    One of AI's strengths is definitely exploration, e.g. in finding bugs, but it still has a high false-positive rate. Depending on the context, that either matters or it doesn't.

    Also, one has to be aware that there are a lot of bugs that AI won't find but humans would.

    I don’t have the expertise to verify this bug actually happened, but I’m curious.

    • throwaway27448 18 hours ago
      It's not even clear if AI was used to find the bug: they mention modeling the software with an "ai native" language, whatever that means. What is also not clear is how they came to be modeling the gyro software in the Apollo code to begin with.

      But, I do think their explanation of the lock acquisition and the failure scenario is quite clear and compelling.

      • ks2048 17 hours ago
        They have some spec language, and here:

        https://github.com/juxt/Apollo-11/tree/master/specs

        there are many thousands of lines of it.

        Anyway, it seems it would take serious work by a dedicated professional to determine whether this bug is real. And considering this looks like an ad for their business, I would be skeptical.

      • jll29 18 hours ago
        > It's not even clear if AI was used to find the bug: they mention modeling the software with an "ai native" language, whatever that means.

        Could the "AI native language" they used be Apache Drools? The "when" syntax reminded me of it...

        https://kie.apache.org/docs/10.0.x/drools/drools/language-re...

        (Apache Drools is an open source rule language and interpreter to declaratively formulate and execute rule-based specifications; it easily integrates with Java code.)

      • caminante 18 hours ago
        How did you pick out AI native and miss the rest of the SAME sentence?

        > We found this defect by distilling a behavioural specification of the IMU subsystem using Allium, an AI-native behavioural specification language.

        • throwaway27448 17 hours ago
          That does not resolve my confusion, especially when static analysis could reach the same conclusion with that language. It's not clear what role AI played at all.
      • Aurornis 18 hours ago
        > It's not even clear if AI was used to find the bug

        The intro says “We used Claude and Allium”. Allium looks like a tool they’ve built for Claude.

        So the article is about how they used their AI tooling and workflow to find the bug.

        • throwaway27448 16 hours ago
          The article does not explain anything about how they used AI; it just has some relation to the behavioral model a human seems to have written (and an AI does not seem necessary to use it!)
          • MBCook 15 hours ago
            Sure it does.

            They used their AI tool to extract the rules for the Apollo guidance system based on the source code.

            Then they used Claude to check if all paths followed those rules.

      • Qwuke 18 hours ago
        >It's not even clear if AI was used to find the bug

        It's not even clear you read the article

        • throwaway27448 17 hours ago
          Where do you think my confusion came from? All it says is that AI assisted in resolving the gyroscope lock path, not why they decided to model the gyroscope lock path to begin with.

          Please, keep your offensive comments to yourself when a clarifying comment might have sufficed.

        • caminante 18 hours ago
          Even worse, the other child comments are speculating (and didn't RTFA either) when the answer is clear in the article.

          > We found this defect by distilling a behavioural specification of the IMU subsystem using Allium, an AI-native behavioural specification language.

          • wat10000 17 hours ago
            That's the opposite of clear to me.
            • Spinnaker_ 13 hours ago
              Has the article been updated?

              2nd paragraph starts with: "We used Claude and Allium"

              And later on: "With that obligation written down, Claude traced every path that runs after gyros_busy is set to true"

          • chrisjj 17 hours ago
            > distilling

            A.k.a. fabricating. No wonder they chose to use "AI".

  • djmips 15 hours ago
    I think it's interesting that they found what seems to be a real bug (it should be independently verified by experts). However, I find their story-mode dramatization of how it could have happened poorly researched and fully in the realm of fiction: an elbow bumping a switch; the command module astronaut unable to handle the issue, with only a faux nod to the fact that a reset would have cleared up the problem and was part of his training. So it's really just building tension and storytelling to make the whole post more edgy. And yes, this is 100% AI-written prose, which makes it even more distasteful to me.
    • jcalvinowens 12 hours ago
      > An elbow bumping a switch [..] really just building tension and storytelling to make the whole post more edgy.

      A guarded switch, no less.

      But personally I'm trying to be more generous about this sort of thing: it is very very difficult to explain subtle bugs like this to non-technical people. If you don't give them a story for how it can actually happen, they tend to just assume it's not real. But then when you tell a nice story, all us dry aged curmudgeons tut tut about how irreverent and over the top it is :)

      Finding the middle ground between a dry technical analysis and dramatization can be really hard when your audience is the entire internet.

    • retard4 15 hours ago
      [flagged]
  • croemer 15 hours ago
    I've had a look at the (vibe coded) repro linked in the article to see if it holds up: https://github.com/juxt/agc-lgyro-lock-leak-bug/blob/c378438...

    The repro runs on my computer, that's positive.

    However, Phase 5 (deadlock demonstration) is entirely faked. The script just prints what it _thinks_ would happen. It doesn't actually use the emulator to prove that its thinking is right. Classic Claude being lazy (and the vibe coder not verifying).

    I've vibe coded a fix so that the demonstration is actually done properly on the emulator. And also added verification that the 2 line patch actually fixes the bug: https://github.com/juxt/agc-lgyro-lock-leak-bug/pull/1

    • ErroneousBosh 11 hours ago
      > However, Phase 5 (deadlock demonstration) is entirely faked. The script just prints what it _thinks_ would happen.

      I see this a lot in AI slop, which I mostly get exposed to in the form of shitty pull requests.

      You know when you're trying to explain Test-Driven Development to people and you want to explain how you write the simplest thing that passes the test and then improve the test, right? So you say "I want a routine that adds VAT onto a price, so I write a test that says £20+VAT is £24, and the simplest thing that can pass that test is just returning 24". Now you know and I know that the routine and its test will break if you feed it any value except £20, but we've proved we can write a routine and its test, and now we can make it more general.
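
      In code, that first step is literally this (a Rust sketch; the names are mine):

          // The "simplest thing that passes": hard-coded, and only
          // correct for the one case the test exercises.
          fn add_vat(_price_pounds: u32) -> u32 {
              24
          }

          #[test]
          fn twenty_pounds_plus_vat_is_twenty_four() {
              assert_eq!(add_vat(20), 24);
          }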

      Or maybe we don't care and we slap a big TODO: make this actually work on there because we don't need it to work properly now, we've got other things to do first, and every price coming up as £20+VAT is a useful indicator that we still have to make other bits work. It doesn't matter.

      The problem is that AI slop code "generators" will just stop at that point and go "THERE LOOK IT'S DONE AND IT'S PERFECT!" and the people who believe in the usefulness of AI will just ship it.

  • riverforest 18 hours ago
    Software that ran on 4KB of memory and got humans to the moon still has undiscovered bugs in it. That says something about the complexity hiding in even the smallest codebases.
    • whiplash451 18 hours ago
      My guess is that in such low memory regimes, program length is very loosely correlated with bug rate.

      If anything, if you try to cram a ton of complexity into a few kb of memory, the likelihood of introducing bugs becomes very high.

      • pooloo 18 hours ago
        Yet here we are, compounding the issues by adding more and more layers to these systems... The higher-level these systems become, the more security risk we take on.
      • SoftTalker 14 hours ago
        Well you don't have room for a lot of "defensive" code. You write the program to function on expected inputs, and hope that all the "shouldn't happen" scenarios actually don't happen.
    • pvdebbe 5 hours ago
      Also contrast with the busy beaver problem and how much can be done with a small handful of instructions.
    • airstrike 15 hours ago
      ^ This is slop. Typical platitude that really means nothing.
  • chrisjj 18 hours ago
    > The specs were derived from the code itself

    Oh dear. I strongly suggest this author look up "specification" in a dictionary.

    • perching_aix 17 hours ago
      It's (what they're describing is) just reverse engineering. That's what reverse engineering is.
      • chrisjj 16 hours ago
        Fortunately reverse engineering too is in the dictionary - to help anyone mistaking it for spec generation.
        • perching_aix 14 hours ago
          Implying that I made such a mistake, which I did not, unless you're willfully taking me overly literally.

          Nor did they make any mistake when they described how they produced a specification (and indeed, that it is a specification), despite your insinuation otherwise, for a similar reason.

          Maybe instead of pointing towards dictionaries, stop pretending that you lack reading comprehension, and get off of your high horse please.

  • parliament32 15 hours ago
    Both the article and repo[1] are slop.

    [1] In the repo, the "reproduce" is just a bunch of print statements about what would happen, the bug isn't actually triggered: https://github.com/juxt/agc-lgyro-lock-leak-bug/blob/c378438...

  • callamdelaney 11 hours ago
    More likely the LLM misinterpreted something and hallucinated an error. Just yesterday Claude Code hallucinated itself into an infinite loop.
  • bsoles 12 hours ago
    Another CTO "published" AI slop to get attention for their vibe-coded company that will disappear in two years. Tell me something new...
  • wg0 19 hours ago
    Someone please amend the title and add "using claude code" because that's customary nowadays.
    • chrisjj 17 hours ago
      Also add "AI can make mistakes". Thank you.
      • sgt 16 hours ago
        Thank you for your attention to this matter.
  • kmeisthax 15 hours ago
    > Rust’s ownership system makes lock leaks a compile-time error.

    Rust specifically does not forbid deadlocks, including deadlocks caused by resource leaks. There are many ways in safe Rust to deliberately leak memory - either by creating reference count cycles, or the explicit .leak() methods on various memory-allocating structures in std. It's also not entirely useless to do this - if you want an &'static from heap memory, Box.leak() does exactly that.

    Now, that being said, actually writing code to hold a LockGuard forever is difficult, but that's mainly because the Rust type system is incomplete in ways that primarily inconvenience programmers but don't compromise the safety or meaning of programs. The borrow checker runs separately from type checking, so there's no way to represent a type that both owns and holds a lock at the same time. Only stacks and async types, both generated by compiler magic, can own a LockGuard. You would have to spawn a thread and have it hold the lock and loop indefinitely[0].

    [0] Panicking in the thread does not deadlock the lock. Rust's std locks are designed to mark themselves as poisoned if a LockGuard is unwound by a panic, and any attempt to lock them will yield an error instead of deadlocking. You can, of course, clear the poison condition in safe Rust if you are willing to recover from potentially inconsistent data half-written by a panicked thread. Most people just unwrap the lock error, though.
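
    For anyone who wants to see both behaviours concretely, here's a quick sketch in safe Rust (standard library only):

        use std::sync::Mutex;
        use std::thread;

        fn main() {
            // Leaking is safe: Box::leak trades the allocation for an &'static.
            let lock: &'static Mutex<i32> = Box::leak(Box::new(Mutex::new(0)));

            // A panic while holding the guard poisons the lock rather than
            // deadlocking it.
            let _ = thread::spawn(move || {
                let _guard = lock.lock().unwrap();
                panic!("unwind while holding the lock");
            })
            .join();

            // The next lock() returns Err instead of hanging; the poison can be
            // cleared in safe Rust if you accept possibly-inconsistent data.
            match lock.lock() {
                Ok(guard) => println!("not poisoned: {}", *guard),
                Err(poisoned) => println!("poisoned: {}", *poisoned.into_inner()),
            }
        }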

  • esafak 16 hours ago
    An application of their specification language, https://juxt.github.io/allium/

    It seems the difference between this and conventional specification languages is that Allium's specs are in natural language, and enforcement is by LLM. This places it in a middle ground between unstructured plan files and formal specification languages. I can see this being a low-friction way to improve code quality.

  • iJohnDoe 16 hours ago
    Fascinating read. Well done. Everyone involved in the Apollo program was amazing and had many unsung heroes.
  • garaetjjte 9 hours ago
    This article is garbage.

    >The Apollo Guidance Computer (AGC) is one of the most scrutinised codebases in history.

    What? The AGC programs were developed by a relatively small team and have pretty much been left alone since. The architecture is rather quirky when viewed with modern sensibilities, and there aren't many people who are familiar with it. Compare it to widely used software like libcurl or SQLite, or perhaps to Super Mario Bros, which has been extensively analyzed for competitive speedrunning reasons. Surely that dwarfs the amount of knowledge about the Apollo code.

    >2K of erasable RAM and a 1MHz clock. The AGC’s programs were stored in 74KB of core rope

    How about picking a unit and staying with it? The AGC has 2K words of RAM, where each word has 15 bits of usable data (physically it's 16 bits, but one bit is used for parity). The maximum amount of ROM that could be installed is 36K words. (But then they switch to KB, which is not only inconsistent with the previous sentence, the number is also wrong! It's 72 KiB / 73.728 KB or 67.5 KiB / 69.12 KB, depending on whether you include parity.) (A maximum of 64K ROM words could be addressed by the architecture's design, but that isn't available in any real hardware.)
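
    To spell the arithmetic out (36K = 36,864 words):

        36,864 words x 16 bits = 589,824 bits = 73,728 bytes = 72 KiB   (with parity)
        36,864 words x 15 bits = 552,960 bits = 69,120 bytes = 67.5 KiB (data only)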

    And yes, there is a 1.024 MHz clock in the system, which is relevant for peripherals, but you probably want to know how fast it executed instructions. One memory cycle takes 11.71875 μs (85 1/3 kHz), and most instructions take 2 such cycles (one for the operation, a second for fetching the next instruction). Each memory cycle is long enough for a read from ROM, or a read and write to RAM. ROM speed was the limiting factor; by the standards of core memories it wasn't particularly fast. (The AGS backup computer used core for both RAM and ROM and had a memory cycle time of 5 μs.) (In case you are confused, "core memory" and "core rope memory" refer to quite different things!)
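
    And the instruction-rate arithmetic, from the same numbers:

        1.024 MHz / 12 = 85 1/3 kHz  ->  one memory cycle every 11.71875 us
        2 cycles per typical instruction  ->  ~23.4 us  ->  ~42,700 instructions/s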

    If you think I'm nitpicking, try writing an emulator and wondering why you have to sift through all that slop. You could give the correct numbers, you know?

    >“My secret terror for the last six months has been leaving them on the Moon and returning to Earth alone”, Collins later wrote of the rendezvous. A dead gyro system behind the Moon, with Armstrong and Aldrin on the surface waiting for a rendezvous burn that depends on a platform he can no longer align, is exactly that scenario. A hard reset would have cleared it. But the 1202 alarms during the lunar descent had been stressful enough with Mission Control on the line and Steve Bales making a snap abort-or-continue call. Behind the Moon, alone, with a computer that was accepting commands and doing nothing, Collins would have had to make that call by himself.

    You know what an orbit is? That it goes around? That you could just wait for a while and speak with Mission Control? What even is this scenario? Your guidance system failed, and you for some inexplicable reason are considering immediately leaving for Earth right now, leaving your pals behind? (With a manual burn, I guess, since guidance is dead?) You just wait for contact with Houston and tell them what happened. They pore over the program listings and find the bug. They radio you back the appropriate VERB and NOUN commands for poking the right values into memory. The End. And besides, spacecraft can be tracked and their orbits determined from Earth, so even if the PGNCS failed completely, the LM would just get the necessary orbit information from Mission Control. (Also, if guidance fails in either the LM or the CM, either one can take the active role during rendezvous. And the LM has an extra backup system, the previously mentioned AGS.)

    The whole framing of "we found a minor deadlock bug in an AGC program, what a shock!" is bizarre. It's not a small program. If you have any experience with software, of course you know it has bugs! They iterated on the software, releasing new versions for most missions, adding new features and fixing bugs they found. What a concept!

  • totalmarkdown 15 hours ago
    is this bug the reason why the toilet malfunctioned?
    • dmoy 14 hours ago
      I don't think Apollo 11's toilet malfunctioned; it was just not very good. Everything smelled like poop mixed with chemicals, and that was by design.
  • hackerman70000 16 hours ago
    [flagged]
    • msgilligan 16 hours ago
      But this seems like a reasonable approach for reverse-engineering, and it seems the bug they found is real.
    • jcalvinowens 12 hours ago
      The code was inconsistent with itself: that's not circular. Every path dropped the lock except one.
    • nraynaud 15 hours ago
      I took it as the extracted spec was weird and they looked into it.
  • devnotes77 7 hours ago
    [dead]
  • merlin1de 8 hours ago
    [flagged]
  • josephg 20 hours ago
    Super interesting. I wish this article wasn’t written by an LLM though. It feels soulless and plastic.
    • ChrisRR 19 hours ago
      It's not setting off any LLM alarm bells to me. It just reads like any other scientific article, which is very often soulless
      • Jolter 14 hours ago
        It repeats a few points too many times for a professional writer to not catch it.

        I don’t mind that they let an LLM write the text, but they should at least have edited it.

      • bbstats 17 hours ago
        the subheadings are extremely AI IMHO
        • fragmede 15 hours ago
          Isn't that just a normal way to organize a large document?
    • embedding-shape 20 hours ago
      Any specific sections that stick out? Juxt has had really great articles in the past, even before LLMs, and I know for a fact they don't lack the expertise or knowledge to write for themselves if they wanted to. While I haven't completely read this article yet, it'd surprise me if they just let LLMs write their articles today.
      • croemer 19 hours ago
        Here's one tell-tale of many: "No alarm, no program light."

        Another one: "Two instructions are missing: [...] Four bytes."

        One more: "The defensive coding hid the problem, but it didn’t eliminate it."

        • monooso 19 hours ago
          That's just writing. I frequently write like that.

          This insistence that certain stylistic patterns are "tell-tale" signs that an article was written by AI makes no sense, particularly when you consider that whatever stylistic tics an LLM may possess are a result of it being trained on human writing.

          • croemer 19 hours ago
            These are just some of the good examples I found.

            My hunch that this is substantially LLM-generated is based on more than that.

            In my head it's like a Bayesian classifier: you look at all the sentences and judge whether each is more or less likely to be LLM- or human-generated. Then you add prior information, like the fact that the author did the research using Claude, which increases the likelihood that they also used Claude for the writing.

            Maybe your detector just isn't so sensitive (yet) or maybe I'm wrong but I have pretty high confidence at least 10% of sentences were LLM-generated.

            Yes, the stylistic patterns exist in human writing, but RLHF has increased their frequency. Also, LLM writing has a certain monotonicity that human writing often lacks. Which is not surprising: the machine generates more or less the most likely text in an algorithmic manner. Humans don't. They write a few sentences, then get a coffee, sleep, write a few more. That creates more variety than an LLM can.
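
            In symbols, that mental model is just odds-form naive Bayes, with the s_i being sentences:

                odds(LLM | text) = odds(LLM) * prod_i [ P(s_i | LLM) / P(s_i | human) ]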

            Fun exercise: https://en.wikipedia.org/wiki/Wikipedia:AI_or_not_quiz

            • monooso 19 hours ago
              Here's an alternative way of thinking about this...

              Someone probably expended a lot of time and effort planning, thinking about, and writing an interesting article, and then you stroll by and casually accuse them of being a bone idle cheat, with no supporting evidence other than your "sensitive detector" and a bunch of hand-wavy nonsense that adds up to naught.

              • xmcqdpt2 18 hours ago
                To start, this is more or less an advertising piece for their product. It's pretty clear that they want to sell you Allium. And that's fine! They are allowed! But even if that was written by a human, they were compensated for it. They didn't expend lots of effort and thinking, it's their job.

                More importantly, it's an article about using Claude from a company about using Claude. I think on the balance it's very likely that they would use Claude to write their technical blog posts.

                • monooso 18 hours ago
                  > They didn't expend lots of effort and thinking, it's their job.

                  Your job doesn't require you to think or expend effort?

              • kenjackson 18 hours ago
                While I agree with the sentiment, using AI to write the final draft of the article isn’t cheating. People may not like it, but it’s more a stylistic preference.
                • TylerE 11 hours ago
                  Using AI and a human byline is 100% cheating.
                  • josephg 4 hours ago
                    Yeah, I agree. Don't tell me you authored something when Claude did the majority of the writing. Use Claude if you want, but don't pretend you wrote the content when you didn't.

                    I also hate this style of plastic, pre-digested prose. It's soulless and uninteresting. Maybe I've just read too much AI slop. I associate this writing style with low-quality, uninteresting junk.

              • bookofjoe 18 hours ago
                Yet another way the mere possibility of AI/LLM being involved diminishes the value of ALL text.

                If there is constant vigilance on the part of the reader as to how it was created, meaning and value become secondary, a sure path to the death of reading as a joy.

            • NetMageSCW 17 hours ago
              Those aren't good examples - that's just LLMs living rent-free in your head.
          • oscaracso 19 hours ago
            I am reminded of the Simpsons episode in which Principal Skinner tries to pass off hamburgers from a nearby fast-food restaurant as an old family recipe, 'steamed hams', and his guest's probing into the kitchen mishaps is met with increasingly incredible explanations.
          • brookst 18 hours ago
            I’m so glad the witch hunt has moved on to phrasing so I get less grief for my em dashes.
          • gcr 19 hours ago
            See also: “I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me” by Marcus Olang', https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...

            For what it’s worth, Pangram reports that Marcus’ article is 100% LLM-written: https://www.pangram.com/history/640288b9-e16b-4f76-a730-8000...

            • croemer 19 hours ago
              In theory, it wouldn't be too hard to settle the question of whether he used ChatGPT to write it: get Olang' to write a few paragraphs by hand, then have people judge (blindly) whether it's the same style as the article, and which one sounds more like ChatGPT.
              • jmalicki 15 hours ago
                When people judge blindly, they are more likely to think the human is the AI and the AI is the human.

                73% judged GPT 4.5 (edit: I had incorrectly said 4o before) to be the human.

                https://arxiv.org/abs/2503.23674

                Not only are people bad at judging this, but they are directionally wrong.

                • nothinkjustai 14 hours ago
                  There is research showing the contrary that is far more convincing:

                  > Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such “expert” annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization.

                  https://arxiv.org/html/2501.15654v2

              • embedding-shape 19 hours ago
                The times I've written articles, they've gone through multiple rounds of review (by humans) with countless edits each time before being published. I wonder if I'd pass that test in those cases. Initial drafts with my scattered thoughts are usually very different from the published end results, even without multiple reviewers and editors involved.
          • 360MustangScope 19 hours ago
            I hate that I can’t write em dashes freely anymore without people accusing the writing of being AI generated.

            Even though they are perfect for writing down thoughts and notes.

            • d1sxeyes 18 hours ago
              One thing you can try⸺admittedly it's not quite correct⸺is replacing them with a two-em dash. I've never seen an AI use one, and it looks pretty funky.
            • croemer 19 hours ago
              I have nothing against em dashes. As long as your writing is human, experienced readers will be able to tell it's human. Only less experienced ones will use all or nothing rules. Em dashes just increase the likelihood that the text was LLM generated. They aren't proof.
              • brookst 18 hours ago
                That nuance is lost on the majority of anti-AI folks who’ve learned they get positive social reactions by declaring essentially everything to be AI written and condemnable.

                “An em dash… they’re a witch!”… “it’s not just X, it’s Y… they’re a witch!”

                • andersonpico 17 hours ago
                  > anti-AI folks who’ve learned they get positive social reactions by declaring essentially everything to be AI written and condemnable.

                  That's a strawman alright; all the comments complaining about how they can't use their writing style without being ganged up on have positive karma from my angle, so I'm not sure the "positive social reactions" really align with your imagination. Or does it only count when it aligns with your persecution complex?

                  • NetMageSCW 17 hours ago
                    You have the same problem apparently. You think it’s okay to go witch hunting and accuse people with no real evidence.
              • NetMageSCW 17 hours ago
                Evidently there are no experienced readers who post AI accusations.
                • gopher_space 14 hours ago
                  Same weight as "there are no experienced men who'll ask a woman if she's pregnant."
            • NetMageSCW 17 hours ago
              Why do you care what others accuse you of?
          • nothinkjustai 14 hours ago
            No, it’s pretty obviously AI written. Not sure why you’re running so much interference for them…are you affiliated with this company?
          • butlike 18 hours ago
            [dead]
        • tapoxi 19 hours ago
          This is my exact writing style - I'm screwed.
          • croemer 19 hours ago
            I doubt you write like that. Where can I find your writing other than your comments which IMO don't read like the blog post?
        • TruffleLabs 19 hours ago
          This is just writing; terse maybe and maybe not grammatically correct, but people write like that.
          • croemer 19 hours ago
            It's not just terseness, it's the rhythm and "it's not x, it's y".

            In fact, the latter is the opposite of terseness. LLMs love to tell you what things are not way more than people do.

            See https://www.blakestockton.com/dont-write-like-ai-1-101-negat...

            (The irony that I started with "it's not just" isn't lost on me)

            • wk_end 17 hours ago
              > (The irony that I started with "it's not just" isn't lost on me)

              But an LLM wouldn't write "It's not just X, it's the Y and Z". No disrespect to your writing intended, but adding that extra clause adds just the slightest bit of natural slack to the flow of the sentence, whereas everything LLMs generate comes out like marketing copy that's trying to be as punchy and cloying as possible at all times.

        • djmips 15 hours ago
          "Here’s how the bug might have manifested."
    • gcr 19 hours ago
      For what it’s worth, Pangram thinks this article is fully human-written: https://www.pangram.com/history/f5f68ce9-70ac-4c2b-b0c3-0ca8...
      • Aurornis 18 hours ago
        The AI writing detectors are very unreliable. This is important to mention because they can trigger in the opposite direction (reporting human written text as AI generated) which can result in false accusations.

        It’s becoming a problem in schools as teachers start accusing students of cheating based on these detectors or ignore obvious signs of AI use because the detectors don’t trigger on it.

      • xmcqdpt2 19 hours ago
        Then pangram isn't very good, because that article is full of Claude-isms.
        • embedding-shape 19 hours ago
          > because that article is full of Claude-isms

          Not sure how I feel yet about the whole "LLMs learned from human texts, so now the people who wrote those texts are accused of plagiarizing LLMs" thing, but so far it seems backwards and like low-quality criticism.

          • snapcaster 18 hours ago
            Real talk. You're not just making a good point -- you're questioning the dominant paradigm
          • xmcqdpt2 18 hours ago
            I'm sure some human writers would write:

            > The specification forces this question on every path through the IMU mode-switching code. A reviewer examining BADEND would see correct, complete cleanup for every resource BADEND was designed to handle.

            > The specification approaches from the other direction: starting from LGYRO and asking whether any paths fail to clear it.

            > *Tests verify the code as written; a behavioural specification asks what the code is for.*

            However this is a blog post about using Claude for XYZ, from an AI company whose tagline is

            "AI-assisted engineering that unlocks your organization's potential"

            Do you really think they spent the time required to actually write a good article by hand? My guess is that they are unlocking their own organization's potential by having Claude write the posts.

            • embedding-shape 18 hours ago
              > Do you really think they spent the time required to actually write a good article by hand?

              Given that I've been familiar with Juxt since before LLMs were a thing, have used plenty of their Clojure libraries, and have hung out with people from Juxt, yes, I do think they could have spent the time required to both research and write articles like these. Again, I won't claim to know for sure how they wrote this specific article, but I'm familiar enough with Juxt to feel relatively confident they could write it.

              Juxt is more of a consultancy shop than an "AI company"; not sure where you got that from. I guess their landing page isn't 100% clear about what they actually do, but they're at least prominent in the Clojure ecosystem and have been for a decade if not more.

            • NetMageSCW 17 hours ago
              Your guess is worth what you paid for it.
        • DiffTheEnder 19 hours ago
          Is it even possible for a tool to know with high confidence whether something is AI-written? LLMs can be tuned/instructed to write in an infinite number of styles.

          I don't understand how these tools can exist.

          • gcr 19 hours ago
            The WikiEDU project has some thoughts on this. They found Pangram good enough to detect LLM usage while teaching new editors to make their first Wikipedia edits, at least enough to intervene and nudge the student. They didn't use it punitively or expect authoritative results, however. https://wikiedu.org/blog/2026/01/29/generative-ai-and-wikipe...

            They found that Pangram suffers from false positives in non-prose contexts like bibliographies, outlines, formatting, etc. The article does not touch on Pangram’s false negatives.

            I personally think it's an intractable problem, but I do feel Pangram gives some useful signal, albeit not reliably.

        • cameronh90 19 hours ago
          It has Claude-isms, but it doesn't feel very Claude-written to me, at least not entirely.

          What's making it even more difficult to tell now is that people who use AI a lot seem to be actively picking up some of its vocabulary and writing-style quirks.

        • mbo 17 hours ago
          Pangram has a very low false positive rate, but not the best false negative rate: https://www.pangram.com/blog/third-party-pangram-evals
        • NetMageSCW 17 hours ago
          You sound like a flat earther and a moon landing denier combined.
      • croemer 18 hours ago
        Pangram doesn't reliably detect individual LLM-generated phrases or paragraphs among human written text.

        It seems to look at sections of ~300 words. And for one section at least it has low confidence.

        I tested it by getting ChatGPT to add a paragraph to one of my sister comments. Result is "100% human" when in fact it's only 75% human.

        Pangram test result: https://www.pangram.com/history/1ee3ce96-6ae5-4de7-9d91-5846...

        ChatGPT session where it added a paragraph that Pangram misses: https://chatgpt.com/share/69d4faff-1e18-8329-84fa-6c86fc8258...

        • gcr 18 hours ago
          This is useful, thanks! TIL
      • timdiggerm 18 hours ago
        So you're saying Pangram isn't worth much?
    • croemer 9 hours ago
      Incidental finding: another blog post of theirs was written by Claude, and they admit it openly in the last paragraph (not earlier):

         A Note on the Process
         To be clear about what happened here: Claude wrote this article.
      
      https://www.juxt.pro/blog/what-we-learned-from-34-clojure-in...
    • croemer 15 hours ago
      And it turns out at least the part about Rust and locks is plain wrong. What a surprise: https://news.ycombinator.com/reply?id=47676938&goto=item%3Fi...
    • TruffleLabs 19 hours ago
      "Written by an LLM" based on what data or symptom?
    • jandrese 14 hours ago
      AI tends to write like it is getting paid by the word. This article wasn't too egregious but an editor could have improved it.
    • ModernMech 20 hours ago
      I'm starting to develop a physiological response when I recognize AI prose. It's like an overwhelming frustration, as if I'm hearing nails on a chalkboard silently inside my head.
      • voodooEntity 19 hours ago
        I feel ya... and I have to admit, in the past I tried it for one article on my own blog, thinking it might help me express myself. Though when I read that post now, I don't even like it; it's just not my tone.

        So I decided not to use any LLM for blogging again, and even though it takes a lot more time without one (I'm not a very motivated writer), I prefer to release something I made rather than some LLM stuff that I wouldn't read myself.

    • monooso 19 hours ago
      You have no evidence that it was.
    • NiloCK 19 hours ago
      This is the top reply on a substantial percentage of HN posts now and we should discourage it.

      It is:

      - sneering

      - a shallow dismissal (please address the content)

      - curmudgeonly

      - a tangential annoyance

      All things explicitly discouraged in the site guidelines. [1]

      Downvoting is the tool for items that you think don't belong on the front page. We don't need the same comment on every single article.

      [1] - https://news.ycombinator.com/newsguidelines.html

      • timdiggerm 18 hours ago
        It's not a shallow dismissal; it's a dismissal for good reason. It's tangential to the topic, but not to HN overall. It's only curmudgeonly if you assume AI-written posts are the inevitable and good future (aka begging the question). I really don't know how it's "sneering", so I won't address that.
        • NetMageSCW 17 hours ago
          It's a dismissal with no evidence, i.e. a witch hunt. And no one should support that.
        • s08148692 16 hours ago
          The fact that the whole thread has basically devolved into debates over whether it is or isn't an LLM-written article proves well enough that it doesn't really matter one way or another.
        • signatoremo 13 hours ago
          It is a witch hunt with no evidence whatsoever, all based on intuition. It is a distraction from the main topic, a topic that enough people find interesting to keep it on the front page. What was intellectually interesting has now become a borefest of repeated back and forth. That's disrespectful and inconsiderate. Write a new post about why you think AI writing is dangerous. I don't mind that. I'd upvote it.
      • josephg 4 hours ago
        The guidelines you linked says this:

        > Don't post generated comments or AI-edited comments. HN is for conversation between humans.

        The same principle applies to submissions. If you couldn't be bothered to write it, don't ask me to read it. HN is for humans.

      • masklinn 19 hours ago
        > Downvoting is the tool for items that you think don't belong on the front page.

        You can't downvote submissions. That's literally not a feature of the site. You can only flag submissions, if you have more than 31 karma.

        • ezfe 17 hours ago
          And flagging is appropriate when you think content is not authentic
        • NiloCK 18 hours ago
          Twelve year old account and who knows how much lurking before that and I've never noticed this. Good lord.

          Optimistically, I guess I can call myself some sort of live-and-let-live person.

      • bakugo 17 hours ago
        The site guidelines were written pre-AI and stop making sense when you add AI-generated content into the equation.

        Consider that by submitting AI generated content for humans to read, the statement you're making is "I did not consider this worth my time to write, but I believe it's worth your time to read, because your time is worth less than mine". It's an inherently arrogant and unbalanced exchange.

        • NiloCK 15 hours ago
          > The site guidelines were written pre-AI and stop making sense when you add AI-generated content into the equation.

          Note: the guidelines are a living document that contain references to current AI tools.

          > Consider that by submitting AI generated content for humans to read, the statement you're making is "I did not consider this worth my time to write, but I believe it's worth your time to read, because your time is worth less than mine". It's an inherently arrogant and unbalanced exchange.

          This is something worth saying about pure slop content. But the "charge" against the current item is that a reader felt an LLM was involved in the production of interesting content.

          With enough eyeballs, all prose contains LLM tells.

          We don't need to be told every time someone's personal AI detection algorithm flags. It's a cookie-banner comment: no new information for the reader, but a frustratingly predictable obstacle to scroll through.

          • bakugo 11 hours ago
            We wouldn't need any personal AI detection algorithm flags if the authors simply stated up front that their content is AI generated.

            But they won't do that, because deep down they feel shameful about it (as they should).

      • monooso 19 hours ago
        No idea why you're being downvoted. I've done my bit to redress the balance, I hope others do the same.
    • rudhdb773b 19 hours ago
      Not to single out your comment, but it feels like it's gotten to the point where HN could use a rule against complaining about AI generated content.

      It seems like almost every discussion has at least someone complaining about "AI slop" in either the original post or the comments.

      • Aurornis 18 hours ago
        I disagree. I like to read articles and explore Show HN posts, but in the past 6 months I’ve wasted a lot of time following HN links that looked interesting but turned out to be AI slop. Several Show HN posts lately have taken me to repos that were AI generated plagiarisms of other projects, presented on HN as their own original ideas.

        Seeing comments warning about the AI content of a link is helpful to let others know what they’re getting into when they click the link.

        For this article the accusations are not about slop (which will waste your time) but about tell-tale signs of AI tone. The content is interesting, but you know someone has been doing heavy AI polishing, which gives articles a laborious tone and tends to produce a lot of words around a smaller amount of content (in other words, you're reading an AI expansion of someone's smaller prompt, which contained the original info you're interested in).

        Being able to share this information is important when discussing links. I find it much more helpful than the comments that appear criticizing color schemes, font choices, or that the page doesn’t work with JavaScript disabled.

        • croemer 17 hours ago
          > you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in

          This got me thinking: what if LLMs are used to do the opposite? To condense a long prompt into a short article? That takes more work but might make the outcome more enjoyable as it contains more information.

          • Aerolfos 17 hours ago
            > This got me thinking: what if LLMs are used to do the opposite? To condense a long prompt into a short article? That takes more work but might make the outcome more enjoyable as it contains more information.

            You're fighting an uphill battle against the LLM's inherent tendency to produce more and longer text. There's also the regression-to-the-mean problem, so you get less (and more generic) information even though the text is shorter.

            Basically, it doesn't work

      • chrisjj 17 hours ago
        You're suggesting this is the complainant's fault?
        • rudhdb773b 16 hours ago
          Yes. These HN guidelines already basically cover it:

          > Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

          > Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

          • josephg 4 hours ago
            It's not a person's work. It reads like an LLM's work. If you can't be bothered to write an article yourself, it's incredibly arrogant to ask me to read it.

            Speaking of the HN guidelines, they also say this:

            > Don't post generated comments or AI-edited comments. HN is for conversation between humans.

          • chrisjj 16 hours ago
            > Yes. These HN guidlines already basically cover it:

            >> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

            >> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

            They don't. Note "people's" (this isn't a person's work) and "tangential" (this complaint isn't tangential).

        • NetMageSCW 17 hours ago
          Yes, because all of them are now irrational about the possibility of LLM writing something they read.
      • Gigachad 18 hours ago
        HN has gotten to the point where it’s not even worth clicking the link because of course it’s ai slop.

        There is some real content in the haystack, but we almost need some kind of curator to find and display it rather than a vote system where most people vote on the title alone.

      • furyofantares 17 hours ago
        Stop voting up slop articles and I'll stop commenting on it.
    • mpalmer 19 hours ago
      I've seen way, way worse. Either someone LLM-polished something they already wrote, or they did their own manual editing pass.

      The short sentence construction is the most suspicious, but I actually don't see anything glaring. It normally jumps out and hits me in the face.

      • bookofjoe 18 hours ago
        >Hemingway's 4 Fast Rules For Effective Writing

        1. Use Short Sentences

        https://www.wordsthatsing.com.au/post/hemingway-rules

        • mpalmer 17 hours ago
          I didn't say they're dispositive. I said they're suspicious. Most people don't write effectively.
          • NetMageSCW 17 hours ago
            So LLMs write effectively and when people do you accuse them of using an LLM?
            • mpalmer 16 hours ago
              No, they don't. They use short sentences in weird, stilted ways.
              • bookofjoe 11 hours ago
                But you have the ability to detect those "weird, stilted ways." Impressive.
    • iJohnDoe 11 hours ago
      I did not get any “written by LLM vibes”. I enjoyed it and it pulled me in to keep reading.

      Who gives a crap if it was written by an LLM. Read it or don’t read it. Your choice.

    If it conveys the idea and you learn something new, then it's mission accomplished.

    • retard3 20 hours ago
      [flagged]
    • retard2 20 hours ago
      [flagged]
      • vrighter 20 hours ago
        it's actually the second one I read that fits that description.
  • yodon 19 hours ago
    This is so insightfully and powerfully written I had literal chills running down my spine by the end.

    What a horrible world we live in where the author of great writing like this has to sit and be accused of "being AI slop" simply because they use grammar and rhetoric well.

    • dotancohen 19 hours ago
      I was completely riveted the whole read. The description of Collins' dilemma is the first time I've seen an actual real world scenario described that might cause him to return to Earth alone.

      If an LLM wrote that, then I no longer oppose LLM art.

      • breakingcups 18 hours ago
        I thought that was the least likeable part of the article. They speculated wildly, somehow making the leap that a trained astronaut would not resort to a computer reset if the problems persisted, to weave the narrative that this bug was super-duper serious indeed. They didn't need that, and it weakened the presentation.
  • MeteorMarc 18 hours ago
    Are there any consequences for the Artemis 2 mission (ironic)?