LLMs Are Not Fun

(orib.dev)

173 points | by todsacerdoti 1 hour ago

49 comments

  • jwaldrip 1 hour ago
    Typing is not fun. It robs me of my craft of holding my pencil and feeling it press against the paper with my hand... LLMs are merely a tool to achieve a similar end result. The different aspects of software development are an art. But even with LLMs, I critique and care about the code just as much as if I were writing it line by line myself. I have had more FUN being able to get all of my ideas on paper with LLMs than I have had over years of banging my head against a keyboard going down the rabbit hole on production bugs.
    • marcofloriano 1 hour ago
      It's not about typing, it's about writing. You don't type, you write. That's the paradigm. You can write with a pen or you can type on a keyboard. Different ways, same goal. You write.

      LLMs code for you. They write for you.

      • glial 1 hour ago
        Yesterday I had a semi-coherent idea for an essay. I told it to an LLM and asked for a list of authors and writings where similar thoughts have been expressed - and it provided a fantastic bibliography. To me, this is extremely fun. And reading similar works to help articulate an idea is absolutely part of writing.

        "LLMs" are like "screens" or "recording technology". They are not good or bad by themselves - they facilitate or inhibit certain behaviors and outcomes. They are good for some things, and they ruin some things. We, as their users, need to be deliberate and thoughtful about where we use them. Unfortunately, it's difficult to gain wisdom like this a priori.

        • encyclopedism 1 hour ago
          As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
          • autoexec 1 hour ago
            Sadly all the AI is owned by companies that want to do all your art and writing so that they can keep you as a slave doing their laundry and dishes. Maybe we'll eventually see powerful LLMs running locally so that you don't have to beg some cloud service for permission to use it in the ways you want, but at this point most people will be priced out of the hardware they'd need to run it anyway.

            However you feel about LLMs or AI right now, there are a lot of people with way more money and power than you have who are primarily interested in further enriching and empowering themselves and that means bad news for you. They're already looking into how to best leverage the technology against you, and the last thing they care about is what you want.

          • xnx 1 hour ago
            You don't have to use them.
            • encyclopedism 1 hour ago
              You're wrong in saying so. Many companies are quite literally mandating their use; do a quick search on HN.
            • wahnfrieden 1 hour ago
              Only if you are already wealthy or fine with finding a new job

              If I were still employed, I would also not want my employer to tolerate peers of mine rejecting the use of agents in their work out of personal preference. If colleagues were allowed to produce less work for equal compensation, I would want to be allowed to take compensated time off work by getting my own work done in faster ways - but that never flies with salaried positions, and getting work done faster is greeted with more work to do sooner. So it would be demoralizing to work alongside and be required to collaborate with folks who are allowed to take the slow and scenic route if it pleases them.

              In other words, expect your peers to lobby against your right to deny agent use, as much as your employer.

              If what you really want is more autonomy and ownership over your work, rejecting tool modernity won't get you that. It requires organizing. We learned this lesson already from how the Luddite movement and Jacobin reaction played out.

              • tjr 27 minutes ago
                Why limit this to AI? There have been lots of programming tools which have not been universally adopted, despite offering productivity gains.

                For example, it seems reasonable that using a good programming editor like Emacs or vi would offer a 2x (or more) productivity boost over using Notepad or Nano. So why hasn't Nano been banned, forbidden from professional use?

              • catlifeonmars 21 minutes ago
                You’re assuming implicitly that the tool use in question always results in greater productivity. That’s not true across the board for coding agents. Let me put this another way: 99% of the time, the bottleneck is not writing code.
              • encyclopedism 34 minutes ago
                Very well put
          • dbtc 1 hour ago
            When I do dishes by hand I think all kinds of interesting thoughts.

            Anyway, we've had machines that do our dishes and laundry for a long while now.

          • bugglebeetle 1 hour ago
            As a former artist, I can tell you that you will never have good or sufficient ideas for your art or writing if you don’t do your laundry and dishes.

            A good proxy for understanding this reality is that wealthy people who pay others to do all of these things for them have almost uniformly terrible ideas. This is even true for artists themselves. Have you ever noticed how the albums all tend to get worse the more successful the musicians become?

            It’s mundanity and tedium that forces your mind to reach out for more creative things and when you subtract that completely from your life, you’re generally left with self-indulgence instead of hunger.

        • blks 49 minutes ago
          So finding information was fun for you. Would it also be fun if said LLM wrote your essay for you based on your semi-coherent idea?
      • flatline 1 hour ago
        I write what I want the LLM to do. Generating a satisfactory prompt is sometimes as much work as writing the code myself - it just separates the ideation from the implementation. LLMs are the realization of the decades-long search for natural language programming, dating at least as far back as COBOL. I personally think they are great - not 100% of the time, just as a tool.
      • xnx 1 hour ago
        > LLMs code for you. They write for you.

        A director is the most important person to the creation of a film. The director delegates most work (cameras, sets, acting, costumes, makeup, lighting, etc.), but can dive in and take low-level/direct control of any part if they choose.

      • jstummbillig 1 hour ago
        To get the LLM to code for me, I need to write.
      • ffsm8 1 hour ago
        have you actually done some projects with e.g. claude code? completely greenfield, entirely up to yourself?

        because ime, you're completely wrong.

        I mean, I get where you're coming from if you imagine it like the literal vibe coding this started as, but that's just a party trick and falls off quickly as the project gets more complex.

        to be clear, simple features in an existing project can often be done simply - with a single prompt making changes across multiple files - but that only works under _some circumstances_, and bigger features / more in-depth architecture work is still necessary to get the project to work according to your ideas

        And that part needs you to tell the llm how it should do it - because otherwise you're rolling the dice on whether it's gonna be a clusterfuck after the next 5 changes

      • daliusd 1 hour ago
        So does autocomplete. Why not treat LLM as next autocomplete iteration?
        • marcofloriano 1 hour ago
          Because they are not. Autocomplete only completes the thing you already thought. You solve the problem, the machine writes. Mechanical.

          LLMs define paths and ideas, choose routes, analyze, and so on. They don't just autocomplete. They create the entire poem.

          • daliusd 47 minutes ago
            Sometimes. Usually the LLM does exactly what I ask it. It's not like there are a million ways - usually 4-10.
        • b40d-48b2-979e 1 hour ago
          LLMs are generative and do not have a fixed output in the way past autocompletes have. I know when I accept "intellisense" or whatever editor tools are provided to me, it's using a known set of completions that are valid. LLMs often hallucinate and you have to double-check everything they output.
          • yunwal 1 hour ago
            I don't know what autocomplete you're using but mine often suggests outright invalid words given the context. I work around this by simply not accepting them
            • b40d-48b2-979e 1 hour ago
              The high failure rate of LLM-based autocompletes has had me avoid those kinds of features altogether, as they waste my time and break my focus to double-check someone else's work. I was efficient before they were forced into every facet of our lives three years ago, and I'll be just as efficient now.
        • autoexec 1 hour ago
          Who'd want an autocomplete that randomly invents words and spellings while presenting them as real? It's annoying enough when autocomplete screws up every other ducking message I send by choosing actual words inappropriately. I don't need one that produces convincing looking word salad by shoving in lies too.
          • daliusd 26 minutes ago
            I wonder why people have such completely different experiences with LLMs
        • NegativeLatency 1 hour ago
          You could build one like that, but most implementations I've seen cross the line for me.

          Hard to define but feels similar to the "I know it when I see it" or "if it walks like a duck and quacks like a duck" definitions.

        • JohnFen 41 minutes ago
          Autocomplete annoys me, derails my train of thought, and slows me down. I'm happy that nobody forces me to use it. Likewise, I would greatly resent being forced to use LLMs.
        • witte 1 hour ago
          [dead]
    • maxweisel 1 hour ago
      100% this. I've had more fun using Claude Code because I get to spend more of my time doing the fun parts (design, architecture, problem solving, etc.) and less of it typing, fixing small compilation errors, or looking up API docs to figure out that query parameters use camelCase instead of underscores.
      • monster_truck 1 hour ago
        You don't have to do any of that if you simply don't make mistakes in the first place FYI
        • bugglebeetle 41 minutes ago
          Attitudes like this one are why people prefer working with AI to code lol.
        • stackghost 1 hour ago
          This is why I exclusively write C89 when handling untrusted user input. I simply never make mistakes and so I don't need to worry about off-by-ones or overflows or memory safety or use after frees.

          Garbage collection and managed types are for idiots who don't know what the hell they're doing; I'm leet af. You don't need to worry about accidentally writing heartbleed if you simply don't make mistakes in the first place.

      • autoexec 1 hour ago
        I'd rather spend my time designing and writing code than spending it debugging and reformatting whatever an LLM cobbled together from stack overflow and github. 'Design, architecture, problem solving, etc' all takes a backseat when the LLM barfs out all the code and you have to either spend your time convincing it to output what you could have written yourself anyway or play QA fixing its slop all day long.
        • maxweisel 55 minutes ago
          Back when I would ask ChatGPT to write code, I would have agreed with you, but using Claude Code's planning mode is a night-and-day difference. You write out a list of specs, Claude writes up a plan (which, for writing backend APIs, has always been just about perfect for me if my spec is solid), and then Claude executes that plan almost to perfection, with small nudges along the way.

          If you're doing anything UI-based, it hasn't performed well for me, but for certain areas of software development, it's been an absolute dream.

    • ori_b 1 hour ago
      I never spent much of my coding time on typing. My most productive coding is done in my head, usually a mile or so into a walk.
      • linsomniac 1 hour ago
        >usually a mile or so into a walk

        My place for that is in the shower.

        I had one of those shower epiphanies a couple mornings ago... And I fed it into a couple LLMs while I was playing a video game (taking some time over the holidays to do that), and by the afternoon I had that idea as working code: ~4500 LOC with that many more in tests.

        People keep saying "I want LLMs to take out the laundry so I can do art, not me doing the laundry while LLMs do art." This is an example of LLMs doing the coding so I can rekindle a joy of gaming, which feels like it's leaning in the right direction.

      • 1718627440 1 hour ago
        Or on the toilet.
    • observationist 1 hour ago
      Radical change in the available technology is going to require radical shifts in perspective. People don't like change, especially if it involves degrading their craft. If they pivot and find the joy in the new process, they'll be happy, but people far more often prefer to be "right" and miserable.

      I have some sympathy for them, but AI is here to stay, and it's getting better, faster, and there's no stopping it. Adapt and embrace change and find joy in the process where you can, or you're just going to be "right" and miserable.

      The sad truth is that nobody is entitled to a perpetual advantage in the skills they've developed and sacrificed for. Expertise and craft and specialized knowledge can become irrelevant in a heartbeat, so your meaning and joy and purpose should be in higher principles.

      AI is going to eat everything - there will be no domain in which it is better for humans to perform work than it will be to have AI do it. I'd even argue that for any given task, we're pretty much already there. Pick any single task that humans do and train a multibillion dollar state of the art AI on that task, and the AI is going to be better than any human for that specific task. Most tasks aren't worth the billions of dollars, but when the cost drops down to a few hundred dollars, or pennies? When the labs figure out the generalization of problem categories such that the entire frontier of model capabilities exceeds that of all humans, no matter how competent or intelligent?

      AI will be better, cheaper, and faster in any and every metric of any task any human is capable of performing. We need to figure out a better measure of human worth than the work they perform, and it has to happen fast, or things will get really grim. For individuals, that means figuring out your principles and perspective, decoupling from "job" as meaning and purpose in life, and doing your best to surf the wave.

    • tmcw 1 hour ago
      Unironically this: isn't writing on paper more fun than typing? Isn't painting with real paint and canvas more satisfying than with a stylus and an iPad? Isn't it more fun to make a home-cooked meal for your family than ordering out? Who stomps into the holiday celebration and tells mom that it'd be a lot more efficient to just get catering?

      Isn't there something good about being embodied and understanding a medium of expression rather than attempting to translate ideas directly into results as quickly as possible?

      • ianbutler 1 hour ago
        To you maybe, to someone else maybe not. It's really hard to pin down a universal framing for existence.

        My family eats out at a nice steak restaurant every Christmas no one wants to cook. None of us like to cook.

        • tmcw 51 minutes ago
          Yes, exactly: I'm not saying everyone loves to paint or cook or whatever, but that a lot of people do, and it's weird and bad for the response to this kind of article, in which someone shares that they are losing something they enjoyed, to be some form of "well, not everyone enjoys that."
    • 6r17 1 hour ago
      I was about to write something really emotional and clearly lacking any kind of self-reflection; then I read you again, and I admit there is a lot in this that is true.

      I feel like there may be something inherently wrong in the interface more than in the actual expression of the tool. I'm pretty sure we are in some painful era where LLMs, quite frankly, help a ton with an absurd amount of stuff - and I say "stuff" because it really is about "everything".

      But it also generates a lot of frustration; I'm not convinced by the conversational status quo, for example, and I could easily see something inspired directly by what you said about drawing. There is something here about the experience - and it's really difficult to work on, because it's inherently personal and may require actually spending time and accumulating frustration to finally be able to express it through something else.

      Ok time to work lmao

    • EA-3167 1 hour ago
      Speaking as someone who despises writing freehand, and loves typing... what? I understand what you're trying to say, but you lost me very quickly I'm afraid. Whatever tool I use to write I'm still making every choice along the way, and that's true if I'm dictating, using a stylus to press into a clay tablet, or any other medium. An LLM is writing for me based on prompts, it's more analogous to hiring a very stupid person to write for you, and has very little to do with pens or keyboards.
  • maybewhenthesun 1 hour ago
    I wholeheartedly agree. I'm not saying LLMs are 'bad'. I'm not saying they are not useful. But to me personally they take out the fun parts from my profession.

    My role changes from coming up with solutions to babysitting a robotic intern. Not 100%, of course. And of course an agent can be useful as 'intellisense on steroids', or as an assistant who 'ripgreps' for me. There are advantages for sure, but for me they don't outweigh the disadvantages. LLMs take the heart out of what made me like programming: building stuff yourself with your near-infinite lego box of parts and coming up with ideas yourself.

    I'm only half convinced the LLMs will become as important to coding as they seem, and I'm hoping a sane balance will emerge at the other end of the hype. But if it goes where OpenAI etc. want it to go, I think I'll have to re-school to become an electrician or something...

    • dnautics 1 hour ago
      > LLMs take the heart out of what made me like programming: building stuff yourself with your near infinite lego box of parts and coming up with ideas yourself.

      i feel like that's all i'm doing with llms. just in the last hour i realized that i wanted an indexed string intern pool instead of passing string literals. the LLM refactored everything and then i didn't have to worry about that lego piece anymore.

  • manugo4 1 hour ago
    That's bait. I've never had as much fun as a developer as I do now, being able to develop side projects in a matter of days.
    • encyclopedism 1 hour ago
      The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do. Things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.

      For those who have swallowed the AI panacea hook, line and sinker - those who say it's made them more productive, or that they no longer have to do the boring bits and can focus on the interesting parts of coding - I say follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be that for you right now it is a comfortable position financially or socially, but your future self just a few short months from now may be dramatically impacted.

      As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".

      I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder, the legal secretary whose dream job is being automated away, a dream dreamt from a young age, the journalist whose value has been substituted by a white text box connected to an AI model.

      I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and he can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait to see where the dust settles.

      • benlivengood 1 hour ago
        In the long run I think it's pretty unhealthy to make one's career a large part of one's identity. What happens during burnout or retirement or being laid off if a huge portion of one's self depends on career work?

        Economically it's been a mistake to let wealth get stratified so unequally; we should have and need to reintroduce high progressive tax rates on income and potentially implement wealth taxes to reduce the necessity of guessing a high-paying career over 5 years in advance. That simply won't be possible to do accurately with coming automation. But it is possible to grow social safety nets and decrease wealth disparity so that pursuing any marginally productive career is sufficient.

        Practically, once automation begins producing more value than 25% or so of human workers we'll have to transition to a collective ownership model and either pay dividends directly out of widget production, grant futures on the same with subsidized transport, or UBI. I tend to prefer a distribution-of-production model because it eliminates a lot of the rent-seeking risk of UBI; your landlord is not going to want 2X the number of burgers and couches you get distributed as they'd happily double rent in dollars.

        Once full automation hits (if it ever does; I can see augmented humans still producing up to 50% of GDP indefinitely [so far as anyone can predict anything past human-level intelligence] especially in healthcare/wellness) it's obvious that some kind of direct goods distribution is the only reasonable outcome; markets will still exist on top of this but they'll basically be optional participation for people who want to do that.

        • encyclopedism 1 hour ago
          I agree with much of what you say.

          Career being the core of one's identity is so ingrained in society. Think about how schooling is directed towards producing what 'industry' needs. Education for education's sake isn't a thing. Capitalism sees to this and ensures so many avenues are closed to people.

          Perhaps this will change but I fear it will be a painful transition to other modes of thinking and forming society.

          Another problem is hoarding. Wealth inequality is one thing but the unadulterated hoarding by the very wealthy means that wealth is unable to circulate as freely as it ought to be. This burdens a society.

          • b40d-48b2-979e 25 minutes ago
            > Education for education's sake isn't a thing.

            It is, but only for select members of society. Off the top of my head: those with benefits programs to go after that opportunity, like 100% disabled veterans, or the wealthy and their families.
    • nicce 1 hour ago
      For a prototype, yes - but something production-ready requires almost the same amount of effort as it used to, if you care about good design and code quality.
      • phito 1 hour ago
        It really doesn't. I just ditched my WordPress/WooCommerce webshop for a custom one that I made in 3 days with Claude, in C# Blazor. It is better in every single way than my old webshop, and I have control over every aspect of it. It's totally production-ready.

        The code is as good or even better than I would have written. I gave Claude the right guidelines and made sure it stayed in line. There are a bunch of playwright tests ensuring things don't break over time, and proving that things actually work.

        I didn't have to mess with any of the HTML/css which is usually what makes me give up my personal projects. The result is really, really good, and I say that as someone who's been passionate about programming for about 15 years.

        3 days for a complete webshop with Stripe integration, shipping labels and tracking automation, SMTP emails, admin dashboard, invoicing, CI/CD, and all the custom features that I used to dream of.

        Sure, it's not a crazy innovative project, but it brings me a ton of value and liberates me from those overengineered, "generic", bulky CMSes. I don't have to pay $50 for a stupid plugin (that wouldn't really fit my needs anyway) anymore.

        The future is both really exciting and scary.

        • gherkinnn 1 hour ago
          I wish. I have all the rules and skill files and constraints in place and yet Claude 4.5 Sonnet continues to do strange things beyond a medium scale.

          But it does save me time in many other aspects, so I can't complain.

          • phito 1 hour ago
            I find that restricting it to very small modules that are clearly separated works well. It does sometimes do weird things, but I'm there to correct it with my experience.

            I just wish I could have competent enough local LLMs and not rely on a company.

            • wahnfrieden 1 hour ago
              The ones approaching competency cost tens of thousands in hardware to run. Even if competitive local models existed would you spend that to run them? (And then have to upgrade every handful of years.)
              • phito 56 minutes ago
                Nope, I wouldn't. I wish for competent local LLMs that don't require a supercomputer at home to run. One can dream!
          • wahnfrieden 1 hour ago
            Use Opus only, or use GPT 5.2 Codex High (with 5.2 Pro as oracle and for spec work)
      • qudat 1 hour ago
        You can be as specific as you want with an LLM, you can literally tell it to do “clean code” or use a DI framework or whatever and it’ll do it. Is it still work? Yes. But once you start using them you’ll realize how much code you actually write is safely in the realm of boilerplate and the core aspect of software dev is architecture which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look, I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow up questions.

        Here’s a bunch of examples: moving code around, abstracting common functionality into a function and then updating all call sites, moving files around, pattern matching off an already existing pattern in your code. Sometimes it can be fun and zen, or you’ll notice another optimization along the way … but most of the time it’s boring work an agent can do 10x faster than you.

        • encyclopedism 38 minutes ago
          > the core aspect of software dev is architecture which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look, I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow up questions.

          This right here, in your very own comment, is the crux. Unless you're rich or run your own business, your employer (and many other employers) is right now counting down the days till they can think of YOU as boilerplate they want to farm out to an LLM. At the very least, where they currently employ 10 they are salivating about reducing it to 2.

          This means painful change for a great many people. Appeals by analogy to historical changes like motorised vehicles etc. miss the QUALITATIVE change occurring this time.

          Many HN users may point to Jevons paradox; I would like to point out that it may very well work up until the point that it doesn't. After all, a chicken has always seen the farmer as a benevolent provider of food, shelter and safety - that is, until of course THAT day when he decides he doesn't.

      • pizzafeelsright 1 hour ago
        Perhaps for the inexperienced or timid. Code quality is "it compiles" and design is "it performs to spec". Does properly formatted code matter when you no longer have to read it?
        • deergomoo 43 minutes ago
          Formatted? I guess not really, because it’s trivially easy to reformat it. But how it’s structured, the data structures and algorithms it uses, the way it models the problem space, the way it handles failures? That all matters, because ultimately the computer still has to run the code.

          It may be more extreme than what you are suggesting here, but there are definitely people out there who think that code quality no longer matters. I find that viewpoint maddening. I was already of the opinion that the average quality of software is appalling, even before we start talking about generated code. Probably 99% of all CPU cycles today are wasted relative to how fast software could be.

          Of course there are trade-offs: we can’t and shouldn’t all be shipping only hand-optimised machine code. But the degree to which we waste these incredible resources is slightly nauseating.

          Just because something doesn’t have to be better, it doesn’t mean we shouldn’t strive to make it so.

        • nicce 1 hour ago
          > Does properly formatted code matter when you no longer have to read it?

          That is exactly the moment when you cannot say anything about the code and cannot fix a single line by yourself.

          • phito 59 minutes ago
            I don't agree. I looked at most of the code the AI wrote in my project, and I have a good idea of how it is architected because I actively planned it. If I have a bug in my orders, I know I have to go to the orders service. Beyond that, it's not much harder than reading the code my coworkers write at my day job.
            • nicce 34 minutes ago
              Parent comment implied that they don’t plan to read the code at all in the long term.
      • lloydatkinson 1 hour ago
        My overwhelming experience is that the sort of developers unironically using the phrase "vibe coding" are not interested in or care about good design and code quality.
        • pritambarhate 1 hour ago
          What is good design and code quality?

          If I can keep adding new features without introducing big regressions, that is good design and good code quality. (Of course there will come a time when it will not be possible and it will need a rewrite - same as with software created by top-paid developers from the best universities.)

          As long as LLM-written code keeps new bugs to the same level as hand-written code, I think LLMs writing code is much superior, just because of the speed with which it allows us to implement features.

          We write software to solve (mostly) business efficiency problems. The businesses which will solve those problems faster than their competitors will win.

        • bonesss 1 hour ago
          In light of OpenAI confessing to shareholders that there’s no there there (being shocked by and then adopting Anthropic’s MCP, being shocked by and then adopting Anthropic’s Skills, opening up a hosted dev platform to milk my awesome LLM business ideas, and now revealing that inline ads a la Google are their best idea so far for, you know, making money…), I was thinking about those LLM project statistics. Something like 5-10% of projects are seeing a nice productivity bump.

          A normal distribution says some minority of IT projects are tragi-bad… I’ve worked with dudes who would copy and paste three different JavaScript frameworks onto the same page, as long as it worked…

          AirFryers are great household tabletop appliances that help people cook extraordinary dishes their ovens normally wouldn’t faster and easier than ever before. A true revolution. A proper chef can use one to craft amazing food. They’re small and economical, awesome for students.

          Chefs just call it “convection cooking” though. It’s been around for a minute. Chefs also know to go hot (when and how), and can use an actual deep fryer if and when they want.

          The frozen food bags here have AirFryer instructions now. The Michelin star chefs are still focusing on shit you could buy books about 50 years ago…

        • xnx 1 hour ago
          I care about product quality. If "good design" and "code quality" can't be perceived in the product they don't matter.

          I have no idea what the code quality is like in any of the software I use, but I can tell you all about how well they work, how easy to use they are, and how fast they run.

        • encyclopedism 1 hour ago
          Coding is merely a means to an end and not the end itself. Capitalism sees to it that a great many things are this way. Unfortunately only the results matter and not much else. I'm personally very sorry things are this way. What I can change I know not.
        • NitpickLawyer 1 hour ago
          Not sure it's the gotcha you want it to be. What you said is true by definition. That is, vibe coding is defined as not caring about code. Not to be confused with LLM-assisted coding.
    • flockonus 1 hour ago
      I certainly don't feel like the author, but it's someone else's perspective, not "bait".
      • skybrian 1 hour ago
        Maybe "scissor statement" would be more apt, at least for the headline.
    • causal 1 hour ago
      I can sympathize with what the author is saying but I agree that "LLMs are not fun" is a pretty coarse statement that invites disagreement.
    • ravenstine 1 hour ago
      I'm not sure I'm having more fun, at least not yet, since for me the availability of LLMs takes away some of the pleasure of needing to use only my intellect to get something working. On the other hand, yes, it is nice to be able to have Copilot work away on a thing for my side project while I'm still focused on my day job. The tradeoff is definitely worth it, though I'm undecided on whether I am legitimately enjoying the entire process more than I used to.
      • verdverm 1 hour ago
        You don't have to use LLMs the whole time. For example, I've gotten a lot done with AI and had the time to spend over the holidays on a long time side project... organically coding the big fun thing

        Replacing Dockerfiles and Compose with CUE and Dagger

    • cortesoft 23 minutes ago
      It is clearly an emotional question. My comment on here saying I enjoyed programming with an LLM has received a bunch of downvotes, even though I don't think the comment was derogatory towards anyone who feels differently.

      People seem to have a visceral reaction towards AI, where it angers them enough that even the idea that people might like it upsets them.

    • marcofloriano 1 hour ago
      The point of the OP is not the fun. It's the craft. He's losing his craft!
    • ZpJuUuNaQ5 1 hour ago
      >That's bait.

      For you, maybe. In my experience, the constant need to babysit LLMs to avoid the generation of verbose, unmaintainable slop is exhausting, and I'd rather do everything myself. Even with meticulously detailed instructions, it feels like a slot machine: sometimes you get lucky and the generated code is somewhat usable. Of course, it also depends on the complexity and scope of the project and/or the tasks you are automating.

    • dboreham 1 hour ago
      I don't do side projects, but the LLM has completely changed the calculus about whether some piece of programming is worthwhile doing at all. I've been enjoying myself automating all sorts of admin/ops stuff that hitherto got done manually because there was never a clear 1/2 day of time to sit down and write the script. Claude does it while I'm deleting email or making coffee.
  • travisgriggs 27 minutes ago
    Largely agree. Thoreau said that for every thousand hacking at the branches of evil, there is one striking at the root.

    Web programming is not fun. Years ago, a colleague who had pivoted in the early years said "Web rots your brain" (we had done some cool work together in real time optical food sorting).

    I know it (web programming) gives a lot of people meaning, purpose, and a paycheck, letting them become specialists in an arcane art that is otherwise unplumbable by others. First it was just programming in general. But it's bifurcated into back end, front end, db, distributed, devops, meta, api, etc. The number of programmers I meet nowadays who are at startups that eventually "pivot" to making tools for the tool wielders is impressive (e.g. "we tried to make something for the general public, but that didn't stick, but on the way we learned how to make a certain kind of pickaxe and are really hoping we can get some institutional set of axe wielders at a big digging corporation to buy into what we're offering"). Instead of "Software is eating the world", the real story these days may be "Software is eating itself".

    Mired in a mountain of complexity we've created through years of "throw it at the wall and ship what sticks", we're now doubling down on "stochastic programming". We're literally, mathematically, embracing "this probab[i]l[it]y works". The usefulness/appeal of LLMs is an indictment and a symptom. Not a cause.

    • tomwphillips 19 minutes ago
      I like this analysis.

      I'm constantly surprised by developers who like LLMs because "it's great for boilerplate". Why on earth were you wasting your time writing boilerplate before? These people are supposed to be programmers. Write code to generate the boilerplate, or abstract it away.

      I suppose the path of least resistance is to ignore the complexity, let the LLM deal with it, instead of stepping back and questioning why the complexity is even there.
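      A generator for that kind of boilerplate can be tiny. A hypothetical sketch, where the entity names, verbs, and the `store` object the stubs call into are all invented for illustration:

```python
# Hypothetical sketch: generating CRUD handler stubs from a small spec
# instead of typing (or LLM-generating) them by hand. The entity names,
# verbs, and the `store` object the stubs reference are invented.

TEMPLATE = '''def {verb}_{entity}(payload):
    """Auto-generated {verb} handler for {entity}."""
    return store.{verb}("{entity}", payload)
'''

def generate_handlers(entities, verbs=("create", "read", "update", "delete")):
    """Return one source string containing a stub per (verb, entity) pair."""
    return "\n".join(
        TEMPLATE.format(verb=verb, entity=entity)
        for entity in entities
        for verb in verbs
    )

if __name__ == "__main__":
    # Two entities x four verbs -> eight generated handlers.
    print(generate_handlers(["order", "invoice"]))
```

      Whether the output is checked in or generated at build time, the template stays the single place the pattern lives, which is the "abstract it away" the comment above is asking for.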

  • kylecazar 1 hour ago
    Programming can also not be fun. Maybe if you only use models for the tedious bits, a balance will be struck.

    But if you are in a work situation where LLMs are forced upon you in very high doses, then yes, I understand the feeling.

  • chrisfosterelli 1 hour ago
    I was recently talking to a colleague I went to school with and they said the same thing, but for a different reason. We both did grad studies with a focus on ML, and at the time ML as a field seemed to be moving so fast. There was a lot of excitement around AI again, finally, after the 'AI winter'. It was easy to participate in bringing something new to the field, and there were so many unique and interesting models coming out every day. There was genuine discussion about a viable path to AGI.

    Now, basically every new "AI" feature feels like a hack on top of yet another LLM. And sure the LLMs seem to keep getting marginally better, but the only people with the resources to actually work on new ones anymore are large corporate labs that hide their results behind corporate facades and give us mere mortals an API at best. The days of coding a unique ML algorithm for a domain specific problem are pretty much gone -- the only thing people pay attention to is shoving your domain specific problem into an LLM-shaped box. Even the original "AI godfathers" seem mostly disinterested in LLMs these days, and most people in ML seem dubious that simply scaling up LLMs more and more will be a likely path to AGI.

    It seems like there's more excitement around AI for the average person, which is probably a good thing I suppose, but for a lot of people that were into the field they're not really that fun anymore.

    In terms of programming, I think they can be pretty fun for side projects. The sort of thing you wouldn't have had time to do otherwise. For the sort of thing you know you need to do anyway and need to do well, I notice that senior engineers spend more time babysitting them than benefitting from them. LLMs are good at the mechanics of code and struggle with the architecture / design / big picture. Seniors don't really think much about the mechanics of code, it's almost second nature, so they don't seem to benefit as much there. Juniors seem to get a lot more benefit because the mechanics of the code can be a struggle for them.

    • porker 1 hour ago
      > Now, basically every new "AI" feature feels like a hack on top of yet another LLM.

      LLM user here with no experience of ML besides fine-tuning existing models for image classification.

      What are the exciting AI fields outside of LLMs? Are there pending breakthroughs that could change the field? Does it look like LLMs are a local maximum, with other approaches winning through, even just for other areas?

      Personally I'm looking forward to someone solving 3D model generation as I suck at CAD but would 3D print stuff if I didn't have to draw it. And better image segmentation/classification models. There's gotta be other stuff that LLMs aren't the answer to?

    • jiggawatts 21 minutes ago
      It’s now moving faster than ever. Huge strides have been made in interpretability, multi modality, and especially the theoretical understanding of how training interacts with high dimensional spaces. E.g.: https://transformer-circuits.pub/2022/toy_model/index.html
  • etaioinshrdlu 1 hour ago
    Like others here, I disagree completely. I find them very fun, almost too fun, like intellectual crack. The craziest ideas are now within reach.
    • theteapot 1 hour ago
      Cool, I get to call my Youtube, TikTok addiction my "intellectual crack" now. Only fair.
  • lukaslalinsky 27 minutes ago
    The most fun aspect of programming for me is designing something unique, whether it's an algorithm to some niche problem, or a simple API over a complex system. LLMs help me express my ideas in terms of specific code, sometimes it's just a prototype, sometimes the final product. I don't enjoy the coding part, I enjoy the thinking and designing part.
  • mgaunard 1 hour ago
    LLMs are great if you don't care about every little detail being correct nor having control of how everything works so that you can change it whenever the situation warrants it.

    Turns out that a lot of code is fine with this. Some parts of the industry still have more stringent standards however.

    • verdverm 1 hour ago
      > if you don't care about every little detail being correct nor having control of how everything works

      This is the same situation we were in decades ago, before AI, and we are still in it

      AI changes nothing about this statement; humans do not write perfect code

    • catigula 1 hour ago
      Nope. It's still faster to just prompt Claude and read all of the output, I'm sorry.
      • greggoB 40 minutes ago
        That's a very broad statement, explicitly covering all types of code and all kinds of coders. Are you really confident enough to make such an assertion?
        • catigula 25 minutes ago
          Yes. It's not just me: I'm a professional staff engineer, great.

          Andrej Karpathy is one of the best engineers in the country, George Hotz is one of the best engineers in the country, etc.

  • gallerdude 1 hour ago
    Today I showed Claude Code how to control my lights, and I'm having a blast.
    • jaggederest 1 hour ago
      Claude Code is absurdly good at setting up and configuring Home Assistant.
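      For reference, the "tool" an agent needs for this can be a thin wrapper over Home Assistant's REST API, which exposes services as POST /api/services/<domain>/<service> authenticated with a long-lived access token. A hedged sketch, where the host, token, and entity_id are placeholders rather than real values:

```python
# Minimal sketch of calling Home Assistant's REST service API
# (POST /api/services/light/turn_on with a Bearer token). The host,
# token, and entity_id below are placeholders, not real values.
import json
import urllib.request

def build_light_request(host, token, entity_id, on=True):
    """Construct the HTTP request for light.turn_on / light.turn_off."""
    service = "turn_on" if on else "turn_off"
    url = f"http://{host}:8123/api/services/light/{service}"
    return urllib.request.Request(
        url,
        data=json.dumps({"entity_id": entity_id}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually flip a light (requires a running Home Assistant instance):
# urllib.request.urlopen(build_light_request("homeassistant.local", TOKEN, "light.office"))
```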
  • notepad0x90 1 hour ago
    Can't say I agree strongly, but I get OOP's frustration.

    It's just a tool, misuse of the tool can very much not be fun. When it's forced on you, most things tend to not be fun.

    But I am having lots of fun with LLMs, their application as well as assisting me with coding. What used to be a frustrating scour of the internet for solutions and examples is now a question away to an LLM. The not-so-fun things I used to dread, I'm letting the LLM tackle it. Most of the code I write could probably be written by an LLM, but I am choosing to write the code specifically because it is fun, and because maintaining LLM generated code is not so fun.

    I think this is a case of people taking extremes. Extremes are usually not a good thing. Don't over use or over depend on LLMs, but use them with moderation, letting them do things they shine at. Don't create solutions that are looking for problems (with any tool, not just LLMs). Don't fall for the deceptive traps of nostalgia, or be stuck in "back in my day".

    It goes both ways too, don't tell someone used to vim and nano to start using cursor.sh!

    I like driving sports cars in the desert, very fun. I hate driving anything in traffic. It's all about context.

    The internet can be very toxic at times, and very user-hostile at others. It can also be great. I like HN for example, as I'm sure many of you do. I don't like visiting gizmodo or some ad-trodden site, or toxic sub-reddits. LLMs are similar, there are and will be terrible LLM usages (truly, think of slaughterbots and LLMs being used by autonomous attack drones), but also fun and great usages.

  • porcoda 58 minutes ago
    I do wish when people say happy vs sad with LLMs for code they’d qualify it with what kind of code they’re talking about. I can totally see a web dev being super happy grinding out JS code and someone doing scientific computing being less happy even though they’re using the same tools. Without understanding what people are using it for, what their expectations are with respect to correctness, completeness, and performance, these discussions just turn into the same back and forth of people arguing that the other person is wrong and talking past each other. I think people on this site forget the diverse contexts where people use computers, the different backgrounds we all have, and our different expectations for what we work on.
  • bvan 1 hour ago
    Agree. Just like writing with pen and paper facilitates the thought process, so does coding. Typing out code facilitates logical thought and forces you to mind the details. Not to mention the inherent learning process.

    Hand-holding an LLM cheats me of all these things, along with the uneasy feeling there is unexplored ordnance in there somewhere which will eventually go boom.

    To each his or her own.

  • legitster 1 hour ago
    As someone who loves analogue things, you could basically repeat this for every pursuit humans make. The things they care about are cheapened by convenience - but then they will mock people who still care about manual transmissions or mechanical watches or what have you.

    I think LLMs are fun. They don't get rid of the problem solving or troubleshooting or decision making. If anything, for me they completely resparked the hacker ethos in me. I got my start as an idiot "script kiddie", so I am used to building bodged-together things with code I only liminally understand.

    There are so many new things I am trying and getting done that I feel like I am only limited by my creativity and my tolerance for risk.

  • amortka 1 hour ago
    There’s a third axis here besides “process vs result”: feedback loop latency. Hand-coding keeps the loop tight (think → type → run → learn), which is where a lot of the craft/joy lives. LLMs can either compress that loop (generate boilerplate/tests, unblock yak-shaves) or stretch it into “read 200 LOC of plausible code, then debug the one wrong assumption,” which feels like doing code review for an intern who doesn’t learn. The sweet spot for me has been using them to increase iteration speed while keeping myself on the hook for invariants (tests, types, small diffs), otherwise you’re just trading typing for auditing.
  • Insanity 1 hour ago
    I strongly agree with the author to be honest. I can see the various perspectives in the comments here. In my view, some people care about shipping products (i.e, seeing their idea come to life). Some people enjoy solving the problems more than the shipping.

    I'm in the second camp, and I think the author is as well. For those of us, LLMs are kind of boring.

  • ThierryBuilds 59 minutes ago
    Balancing developer satisfaction with raw productivity is a critical trade-off. While the 'joy of coding' maintains long-term engagement, LLMs provide a necessary lift in throughput. I prefer a surgical approach: disabling LLMs for core logic to avoid 'auto-pilot' bias, while utilizing them for the high-friction work of documentation and unit testing.
  • thunkle 1 hour ago
    Doesn't matter what we want or how we feel. Product, the C-suite, and customers just want software as fast and as cheap as possible. They don't care about the code and the craft. If that's the case, then we have to use AI if we want to stay marketable.

    I wonder if customers even appreciate the organic artisanal labels that some sites are putting up e.g. https://play.date/games/diora/

  • kwar13 1 hour ago
    I don't quite agree. LLMs have been fun for me in the sense that they have enabled me to explore topics I wouldn't otherwise be familiar with and that would take too much effort to actually explore. For instance, adding a "restore last session" feature (restoring tabs) to my daily tabbed file explorer, Nemo. I wouldn't have cared enough to fork the repo and add this for myself if I didn't have an LLM to guide me through the parts I would need to edit. It's too large a code base for me to go digging through.
  • mrbonner 1 hour ago
    I wonder if LLMs and Gen AI represent a shift similar to the invention of the tractor a century ago. Initially, both technologies struggled to find their most effective applications, despite grand promises of transformative productivity. It took tractors several decades to become truly mainstream. Perhaps the same pattern will unfold with LLMs and AI. The difference this time is that companies are investing enormous amounts of capital in preparation for that inevitable moment of widespread adoption.
  • brightbed 1 hour ago
    I feel the opposite. Building my first cli dev tool with Claude brought back the joy of software for me, joy that had been eroded by the grind of the software industry. Claude helped me solve real problems that I didn’t otherwise have time to solve (so much typing) and I really enjoyed having this tool I had been dreaming of come to life.
  • turzmo 1 hour ago
    Strongly agree. I've given up a job handcrafting chairs to be a foreman at the chair factory. Yes, more chairs are produced. That's not what I care about.

    Coding for me was always about the understanding and craftsmanship. The associated output and pay came as an adult, but that was never the point.

  • alexpadula 1 hour ago
    Is this supposed to be rage bait? All jokes aside, I've been programming for 16+ years and I've been absolutely obsessed with it since I was a child. Programming today is amazing; I would not want to go back to VS6 in modern times, I'd feel like I was going backwards. Move with the times. Your pride is strong!
  • Bukhmanizer 1 hour ago
    As a long time complainer about AI, I disagree, LLMs are very fun. They’re maybe the coolest technology that has come out in the last 20 years for me.

    What’s not fun is the corpratization of AI. Being forced to use it even if it doesn’t make sense. Every project having to shove AI into it to get buy-in.

  • vtemian 1 hour ago
    > For me, the joy of programming is understanding a problem in full depth, so that when considering a change, I can follow the ripples through the connected components of the system.

    100%. The fun is in understanding, creating, explaining. It's not in typing, boilerplating, fixing missing imports, API mismatches, etc.

  • niorad 1 hour ago
    100% this! The main fun in development for me is typing and getting something to run, and seeing the feature finally work after figuring out how to get to it. The finished product is almost irrelevant to that. LLMs steal that feeling of achievement, like rushing through a book of sudoku with a solver.
  • qoez 1 hour ago
    Ultimately capitalism doesn't care whether a job is fun. The vast majority aren't, I've realized. It's an odd coincidence that coding in a flow state is hugely enjoyable, but at this rate that amazing job will be a momentary bit of history where profit-making and fun had a non-zero intersection.
  • wwarner 1 hour ago
    I reluctantly agree. It’s like ebikes — yes it’s great that I don’t have to pedal up hill, but on the other hand the cyclists that did it the hard way deserved the praise and glory for their achievement while weak and distracted ebikers definitely do not.
  • toenail 1 hour ago
    I mostly use them for stuff I would never get done otherwise, or for prototyping. I have lots of fun.
  • kalterdev 1 hour ago
    LLMs enable me to extend the limited reach of my own thoughts. Sometimes they help me connect the dots faster. The only condition: I do the main job. The final choice is always mine. I never let an LLM be a black-box independent agent.
  • sho_hn 1 hour ago
    I find posts like this very brave and courageous, and it makes me feel a lot of respect for the author and their personal integrity.

    There's currently an enormous pressure on developers to pay lip service to loving AI tools. Expressing a differing opinion easily gets someone piled on for being outdated or not understanding things, from people who sometimes mainly do it to virtue-signal and perform their own branding exercise.

    Open self-expression takes guts, and is hard to substitute for with AI assistance.

    • wayy 1 hour ago
      It seemed to me the author was simply sharing his own lived experience, which happens to run contrary to the popular hype around LLMs. It may seem courageous to some, but I can see a world where the author didn't think twice about writing down his thoughts in 15 minutes and publishing on his own personal site. Perhaps it comes naturally to people who have been around this industry longer.
    • encyclopedism 1 hour ago
      Hear, hear. I totally agree with both the author's sentiments and your comments.
    • lloydatkinson 1 hour ago
      I agree; writing anything bad about LLMs is more or less antithetical to the current hype. At least last time, during the crypto/NFT slop trend, HN was not on board with it.
      • oconnor663 1 hour ago
        Ehhhhhh hating on AI is also extremely popular on social media.
      • baq 1 hour ago
        I’m not writing good things about LLMs because I’m busy thinking about what to build next, since things get built faster than I can come up with ideas.

        It’s amazing and scary. I was wondering what takeoff would look like, and now I’m living it, for better or worse.

    • Alex2037 1 hour ago
      bro, parroting "AI bad" on any social media full of the terminally online folx gets you free updoots.
    • devhouse 1 hour ago
      [dead]
  • minimaxir 1 hour ago
    How people derive utility from programming varies from person to person and I suspect is the root cause of most AI generation pipeline debates, creative and code-wise. There are two camps that are surprisingly mutually exclusive:

    a) People who gain value from the process of creating content.

    b) People who gain value from the end result itself.

    I personally am more of a (b): I did my time learning how to create things with code, but when I create things such as open-source software that people depend on, my personal satisfaction from the process of developing is less relevant. Getting frustrated with code configuration and writing boilerplate code is not personally gratifying.

    Recently, I have been experimenting more with Claude Code and 4.5 Opus and have had substantially more fun creating utterly bizarre projects that I suspect would have involved more frustration than fun to implement the normal way. It does still require brainpower to QA, identify problems, and identify potential fixes: it's not all vibes. The code quality, despite intuition, has none of the issues or bad code smells expected of LLM-generated code, and with my approach it actually runs substantially more performantly. (I'll do a full writeup at some point.)

  • devhouse 1 hour ago
    There's a version of this that's about control vs. collaboration. Compilers do what you say. Teammates grow with you. LLMs do neither. They're confident strangers who (sometimes) get it right. That's a new category of relationship, and maybe we don't have the emotional toolkit for it yet?
  • alansaber 1 hour ago
    Imo a lot of the "this isn't fun" comes from minmaxing
  • shitter 1 hour ago
    I share the author's perspective that LLMs are not fun for programming. I don't use them to generate code, save for small snippets to demonstrate some concept or do something rote that I wouldn't enjoy writing myself.

    However - and maybe I'm just an easily entertained simpleton - I find them really fun for exploring those random, not trivially Google-able questions that pop into my head on a daily basis, technical and otherwise. Most of my chats with ChatGPT begin with questions of this form. I keep my critical thinking cap on during the dialogue and always verify the output if it's to be used for anything serious, but I'd be lying if I said I didn't enjoy the process.

  • Tarucho 37 minutes ago
    I don't understand. Most answers say they want to program but that they don't want to type, compile, debug, add files to the project, refactor, etc. Well, that's programming.

    Asking a prompt to do something is asking a prompt to do something.

    In my case, I fear the day will come when I can no longer program and have to give orders to a prompt.

  • vtemian 48 minutes ago
    why was this post removed? it was #1
  • bugglebeetle 1 hour ago
    > For me, the joy of programming is understanding a problem in full depth, so that when considering a change, I can follow the ripples through the connected components of the system.

    >The joy of management is seeing my colleagues learn and excel, carving their own paths as they grow. Watching them rise to new challenges. As they grow, I learn from their growth; mentoring benefits the mentor alongside the mentee.

    I fail to grasp how using LLMs precludes either of these things. If anything, doing so allows me to more quickly navigate and understand codebases. I can immediately ask questions or check my assumptions against anything I encounter.

    Likewise, I don’t find myself doing less mentorship, but focusing that on higher-level guidance. It’s great that, for example, I can tell a junior to use Claude to explore X,Y, or Z design pattern and they can get their own questions answered beyond the limited scope of my time. I remember seniors being dicks to me in my early career because they were overworked or thought my questions were beneath them. Now, no one really has to encounter stuff like that if they don’t want to.

    I’m not even the most AI-pilled person I know or on my team, but it just seems so staggeringly obvious how much of a force multiplier this stuff has become over the last 3-6 months.

    • encyclopedism 1 hour ago
      As I've commented already...

      The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans WANT to do. Things that give them meaning, many of which are tied to earning money and producing value by doing just that thing. Software/coding is one of these activities. One can code for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.

      If that is what you've been doing, a love for coding, I can well empathise with how the world is changing underneath your feet.

  • johnfn 1 hour ago
    I find this interesting for two reasons.

    1. LLMs inspire and clearly strike a certain type of chord in people that other technology does not. For instance, can you imagine a post called "Rust Is Not Fun" at the top of HN? Or even replace "Rust" with technology that has some fans and some haters, like "PHP Is Not Fun". Can you ever imagine that finding equivalent traction? Why would you even write a post called "Skateboarding Is Not Fun"? I just did a search across HN and the only other thing I saw being called not fun that actually got traction was... Twitter.

    2. The post makes two points about why LLMs are wrong, and I (as someone who gets a lot of mileage out of LLMs) pretty strongly disagree with both.

    a) You can't get better at using LLMs ("Nurturing the personal growth of an LLM is an obvious waste of time")

    This seems almost objectively false to me? I have gotten substantially better at prompting and using LLMs after about 6 months of daily use. I think at a higher level of abstraction with LLMs than I would when working directly with code, and this is a different type of thinking that requires effort and practice to develop. An LLM doesn't solve all problems you could ever have, it just allows you to think at a higher level of abstraction.

    Previously I might convert one 300 LoC file from JS to TS (or whatever) and call it a day. Now, in the same amount of time, I might do 10. But obviously just asking the LLM to commit that doesn't cut it, because I might have broken something, so I need to think of some way to get the LLM to verify that I haven't broken anything. Maybe I'd first get it to build some unit tests to my specifications, or a couple of linter rules, or something else tailored to the problem at hand. This is the "thinking at a higher level of abstraction", and I tend to find it an interesting puzzle. It's a very different type of thinking than the "how am I going to refactor this single file", but it ends up being pretty enjoyable.

    b) "For me, the joy of programming is understanding a problem in full depth"

    I suspect that this is why a lot of people are frustrated with LLMs. I empathize here more than with part a). I mean, sure, I like digging into complex systems - it's fun and rewarding. But... and I feel this will be controversial, but I feel like this isn't really the most meaningful part of coding. Isn't the "complex problem solving" mostly a side-effect of the thing which is really the important thing (and the thing I like much more), which is delivering software that people enjoy and use? And while LLMs do a good bit of heavy lifting on the first, I don't find that they are capable at all of solving the second - which means that coding is just as fun for me as ever. In fact, probably more so, since I feel like I can iterate and build ideas faster for users.

    • sjsdaiuasgdia 50 minutes ago
      > But... and I feel this will be controversial, but I feel like this isn't really the most meaningful part of coding. Isn't the "complex problem solving" mostly a side-effect of the thing which is really the important thing (and the thing I like much more)

      Different people can find enjoyment and meaning in different parts of work, even if they do the same kind of work.

      You enjoy delivering products to people. Other people might feel more enjoyment from the problem solving and understanding the system end to end.

      It's not controversial for people to have different preferences. What might be controversial is saying that your preference is more correct / somehow "better" than someone else's preference.

  • mythrwy 1 hour ago
    Maybe not fun, but effective is even better.
  • petesergeant 1 hour ago
    > For me, the joy of programming is understanding a problem in full depth, so that when considering a change, I can follow the ripples through the connected components of the system … using LLMs undercuts [that]

    If you’re letting the LLM do things you aren’t spending the time to understand in depth, you are shirking your professional responsibilities

  • LogicFailsMe 1 hour ago
    If you understand their limitations, they are quite helpful and fun already. If you expect what the tech bros who can't code anything(tm) say they are, not so much. But I do expect them to improve because the market opportunity for getting anywhere close to the grandiose hype is huge. What isn't fun is the clueless C-suite force feeding them down the chain in hopes of a Hail Mary Pass to profits.

    Edit: I know, I know, blink 3 times to signal SOS. I clearly only wrote the above under duress and threats from my managers. There's simply nothing fun about interacting with an entity that would be the stuff of science fiction just 5 years ago, no sir!

  • itsthecourier 1 hour ago
    it's fun to become a necromancer

    I have become a general and a master of a multitude of skeleton agents. My attention has turned to the realm of effectively managing the unreproducible results of running the same incantations.

    Like the sailor in the waters of a coastline he has roamed plenty of times, the currents are there, yet the waves are new every day.

    Whatever limitation is removed, I should approach the market and test my creations swiftly and enrich myself, before the first legion of lich kings appears. They will be better masters than I would ever be.

  • EugeneOZ 1 hour ago
    > Using LLMs undercuts both

    Absolutely disagree. I use LLMs to speed up the process and ONLY accept code that I would write myself.

    • butler14 1 hour ago
      exactly

      end of the day, guys like the author, for better or worse, are going to be replaced by the next generation of developers who don't care for the 'aesthetics' in the same way

  • macinjosh 1 hour ago
    I see post after post like this on different parts of the web. What is always clear to me is these authors feel threatened, having put too much of their personal identity and self-esteem into knowing the "secret tongue" of the machines. They are generally introverted and write software for themselves and people like them, not for normal folks.

    People like this have a great deal to personally lose from LLMs. It makes them substantially less "special". Or so they think, but it is actually not true at all.

    I think some of them resent having to level up again to stay relevant. Like when video games add more levels to a game you thought you already beat. Fair enough, but such is life and natural competition.

    When they come at LLMs with this attitude (gritting their teeth while prompting) it is no wonder they are grossly offended and disgusted by its outputs.

    I've been tempted at times to hold these attitudes myself, but my approach for now is to see how much I can learn about this tool and use it for as much as I can while tokens are subsidized. Either it all pops with the bubble or I have gained new, marketable skills. And no, your hand-coding skills don't just evaporate. In fact, I now have a newfound love of hand coding as a hobby, since that part of my brain is no longer used up by the end of the day with coding tasks for work.

  • Areibman 1 hour ago
    I've found criticism like this comes from people who feel as if LLMs pose a threat to their intelligence.
  • cortesoft 1 hour ago
    I have been programming for over 30 years now, and I have been re-energized by using LLMs for programming. I am having SO MUCH FUN building things with AI.

    For me, the fun part of programming is having the freedom to get my computer to do whatever I want. If I can't find a tool to do something, I can write it myself! That has been a magical feeling since I first discovered it all those years ago.

    LLMs give me the ability to do even more things I want, faster. I can conceptualize what I want to create, I can specify the details as much as I want, and then use an LLM to make it happen.

    It is truly magical. I feel like I am programming in Star Trek, with the computer as an ally instead of a receptacle for my code.