LLMs can be exhausting

(tomjohnell.com)

55 points | by tjohnell 3 hours ago

12 comments

  • cglan 1 hour ago
    I find LLMs so much more exhausting than manual coding. It’s interesting. I think with modern LLMs you bump pretty fast into how much a single human can feasibly keep track of.

    I assume that until LLMs are 100% better than humans in all cases, as long as I have to be in the loop there will be a pretty hard upper bound on what I can do, and it seems like we’ve roughly hit that limit.

    Funny enough, I get this feeling with a lot of modern technology. iPhones, all the modern messaging apps, etc. make it much too easy to fragment your attention across a million different things. It’s draining. Much more draining than the old days.

    • hombre_fatal 1 hour ago
      I think the upper limit is your ability to decide what to build among infinite possibilities. How should it work, what should it be like to use it, what makes the most sense, etc.

      The code part is trivial and in some ways a waste of time compared to time spent making decisions about what to build. And sometimes even a procrastination to avoid thinking about what to build, like how people polish their game engine (easy) to avoid putting in the work to plan a fun game (hard).

      The more clarity you have about what you’re building, the larger the blocks of work you can delegate or outsource.

      So I think one overwhelming part of LLMs is that you don’t get the downtime of working on implementation since that’s now trivial; you are stuck doing the hard part of steering and planning. But that’s also a good thing.

      • SchemaLoad 1 hour ago
        I've found writing the code massively helps your understanding of the problem and what you actually need or want. Most times I go into a task with a certain idea of how it should work, and then reevaluate having started. An LLM will just do what you ask without question, leaving you with none of the learnings you would have gained by doing it yourself. The LLM certainly didn't learn or remember anything from it.
        • jeremyjh 49 minutes ago
          In some cases, yes. But I’ve been doing this a while now, and there is a lot of code that has to be written that I will not learn anything from. Now I have the choice not to write it.
      • galaxyLogic 38 minutes ago
        Right. When you're coding with an LLM, it's not you asking the LLM questions; it's the LLM asking you questions: what to build, how exactly it should work, whether it should do this or that under which conditions. Because the LLM does the coding, it's you who has to do more thinking. :-)

        And when you make the decisions, it is you who is responsible for them. Whereas if you just do the coding yourself, the decisions about the code are left largely to you and nobody much sees them, only how they affect the outcome. Now the LLM is in that role, responsible only for what the code does, not how it does it.

      • grey-area 19 minutes ago
        Maintenance is the hard part, not writing new code or steering and planning.
      • clickety_clack 1 hour ago
        I’d love to see what you’ve built. Can you share?
    • raincole 32 minutes ago
      If you care about code quality, of course it is exhausting. It's supposed to be. Now there is more code whose quality you have to assure in the same length of time.
    • senectus1 41 minutes ago
      I imagine code reviewing is a very different sort of skill than coding. When you vibe code (assuming you're reading the code that is written for you) you become a code reviewer... I suspect you're learning a new skill.
      • pessimizer 15 minutes ago
        The way I've tried to deal with it is by forcing the LLM to write code that is clear, well-factored, and easy to review, i.e. continually forcing it to do the opposite of what it wants to do. I've had good outcomes, but they're hard-won.

        The result is code that I myself approved of. I can't imagine a time when I wouldn't read all of it; when you just let them go, the results are so awful. If you're letting them go and reviewing at the end, like a post-programming review phase, I don't even know if that's a skill that can be mastered while the LLMs are still this bad. Can you really master Where's Waldo? Everything's a mess, and you're just looking for the part of the mess that has the bug?

        I'm not reviewing after I ask it to write some entire thing. I'm getting it to accomplish a minimal function, then layering features on top. If I don't understand where something is happening, or I see it's happening in too many places, I have to read the code in order to tell it how to refactor the code. I might have to write stubs in order to show it what I want to happen. The reading happens as the programming is happening.

  • rednafi 53 minutes ago
    I have always enjoyed the feeling of aporia during coding. Learning to embrace the confusion and the eventual frustration is part of the job. So I don’t mind running in a loop alongside an agent.

    But I absolutely loathe reviewing these generated PRs - more so when I know the submitter themselves has barely looked at the code. Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day. Reviewing this junk has become exhausting.

    I don’t want to read your code if you haven’t bothered to read it yourself. My stance is that reviewing this junk is far more exhausting than writing code ever was. Coding is actually the fun part.

    • shiandow 3 minutes ago
      The one thing I don't quite get is how running a loop alongside an agent is any different from reviewing those PRs.
    • anonzzzies 29 minutes ago
      I always wonder where HNers work or have worked. We do ERP and troubleshooting on legacy systems for medium to large corps; PRs by humans were always pretty random and barely looked at either, even though a human wrote them (copy/pasted from SO and changed somewhat). If you ask what the code does, they cannot tell you. This is not an exception; it is the norm as far as I can see outside HN. People who talk a lot, don't understand anything, and write code that is almost alien. LLMs, for us, are a huge step up. There is a 40-deep nested if with a loop to prevent it from failing on a missing case in a critical Shell (the company) ERP system. LLMs would not do that. It is a nightmare, but keeping things like that running makes us a lot of money.
      • sarchertech 3 minutes ago
        I currently work at one of the biggest tech companies. I’ve been doing this for over 20 years, and I’ve worked at scrappy startups, unicorns, and medium size companies.

        I’ve certainly seen my share of what I call slot-machine development, where a developer just throws things at the wall until something mostly works. And plenty of cut-and-paste development.

        But it’s far from the majority. It’s usually the same few developers at a company doing it, while the people who know what they’re doing furiously work to keep things from falling apart.

        If the majority of devs were doing this, nothing would work. My worry is that AI lets the bad devs produce this kind of work on a massive scale that overwhelms the good devs’ ability to fight back or to even comprehend the system.

      • nightpool 2 minutes ago
        I would hope that most people who are technically competent enough to be on HN are technically competent enough to quit orgs with coding standards that bad. Or they're masochists who have taken on the challenge of working to fix them.
  • 193572 38 minutes ago
    It looks like Stockholm syndrome or a traditional abusive relationship 100 years ago where the woman tries to figure out how to best prompt her husband to do something.

    You know you can leave abusive relationships. Ditch the clanker and free your mind.

  • simonw 1 hour ago
    I wonder if it's more or less tiring to work with LLMs in YOLO/--dangerously-skip-permissions mode.

    I mostly use YOLO mode which means I'm not constantly watching them and approving things they want to do... but also means I'm much more likely to have 2-3 agent sessions running in parallel, resulting in constant switching which is very mentally taxing.
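
    The parallel-sessions setup described above can be sketched as a small launcher script. This is a hedged sketch: `--dangerously-skip-permissions` is Claude Code's actual unattended-mode flag, but the tmux session name (`agents`), the window names, the session count, and the DRY_RUN guard are all invented here for illustration, and actually launching assumes a tmux session called `agents` already exists.

    ```shell
    #!/bin/sh
    # Sketch: spin up N unattended agent sessions in detached tmux windows.
    # With DRY_RUN=1 the script only prints the commands it would run.
    N=3
    DRY_RUN=1

    i=1
    while [ "$i" -le "$N" ]; do
      # Each window runs Claude Code in YOLO mode; window names are arbitrary.
      CMD="tmux new-window -d -t agents -n agent$i 'claude --dangerously-skip-permissions'"
      if [ "$DRY_RUN" -eq 1 ]; then
        echo "$CMD"
      else
        eval "$CMD"
      fi
      i=$((i + 1))
    done
    ```

    The script only starts the sessions; the mentally taxing part described above, i.e. switching between windows to check on each agent, is still on you.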

  • jeremyjh 42 minutes ago
    Most people reading this have probably had the experience of wasting hours debugging when exhausted, only to find it was a silly issue you’ve seen multiple times, or maybe you solve it in a few minutes the next morning.

    Working with an agent coding all day can be exhilarating but also exhausting - maybe it’s because consequential decisions are packed more tightly together. And yes cognition still matters for now.

  • anthonySs 1 hour ago
    LLMs aren’t exhausting; it’s the hype and all the people around them

    same thing happened with crypto - the underlying technology is cool but the community is what makes it so hated

  • chalupa-supreme 1 hour ago
    I wanna say that it is indeed a “skill issue” when it comes to debugging and getting the agent in your editor of choice to move forward. Sometimes it takes an instruction to step back and evaluate the current state, and other times it’s about establishing the test cases.

    I think the exhausting part is probably more tied to the evaluation of the work the agent is doing; understanding its thought process and catching the hang-up can be tedious in the current state of AI reasoning.

  • veryfancy 1 hour ago
    In agent mode, IMO, the sweet spot is 2-3 concurrent tasks/sessions. You don’t want to sit waiting for it, but you don’t want to context-switch across more than a couple of contexts yourself.
    • SchemaLoad 1 hour ago
      That sounds exhausting, having to non-stop prompt and review without a second to stop and think.
      • colecut 50 minutes ago
        There is nothing dictating how long you stop and think for.
        • bombdailer 32 minutes ago
          Until that becomes the metric measured in performance reviews.
  • razorbeamz 28 minutes ago
    LLMs do not actually make anything better for anyone. You have to constantly correct them. It's like having a junior coder under your wing that never learns from its mistakes. I can't imagine anyone actually feeling productive using one to work.
    • jatora 20 minutes ago
      You need to learn to use the tool better, clearly, if you have such an unhinged take as this.
      • Sirental 11 minutes ago
        No, to be fair, I do see what he's saying. I see a major difference between the more expensive models and the cheaper ones. The cheaper (usually default) ones make mistakes all the damn time. You can be as clear as day with them and they simply don't have the context window or specs to make accurate, well-reasoned decisions, and it is a bit like having a terrible junior working alongside you, fresh out of university.
      • voxl 8 minutes ago
        It's not unhinged at all, it's a lack of imagination on both of your parts.
      • razorbeamz 14 minutes ago
        The only people who use LLMs "as a tool" are those who are incapable of doing it without using it at all.
  • siliconc0w 51 minutes ago
    I mostly do 2-3 agents YOLOing, with a “fresh eyes” self-review afterward
  • somewhereoutth 30 minutes ago
    Of course. Any scenario where you are expected to deliver results using non-deterministic tooling is going to be painful and exhausting. Imagine driving a car that might veer one way or the other of its own accord, with controls that frequently changed how they worked. At the end of any decently sized journey you would be an emotional wreck - perhaps even an actual wreck.
  • dinkumthinkum 53 minutes ago
    Does anyone else see this as dystopian? Someone is unironically writing about how exhausted they are, up at night thinking about how to be a better good-boy at prompting the LLM, and reminding us that we shouldn’t cope by blaming the AI or its supposed limitations (context size, etc.). This is not a dig at the author. It just seems crazy that this is an unironic post. It’s like we are gleefully running to the “Laughterhouse”, each reminding our smiling fellow passengers not to be annoyed at the driver if he isn’t getting us there fast enough, without realizing it’s the Slaughterhouse (yes, I am stealing the reference).

    Another way you can read this is as a new cult member chiding himself whenever he has an intrusive thought that Dear Leader may not be perfect after all.

    • Apocryphon 43 minutes ago
      I mean, how often do we feel the same thing about the compiler?
      • jplusequalt 1 minute ago
        I don't feel this? When my code breaks, I'm more likely to get frustrated with myself.