After two years of vibecoding, I'm back to writing by hand

(atmoio.substack.com)

558 points | by mobitar 7 hours ago

81 comments

  • recursivedoubts 6 hours ago
    AI is incredibly dangerous because it can do the simple things very well, which prevents new programmers from learning the simple things ("Oh, I'll just have AI generate it") which then prevents them from learning the middlin' and harder and meta things at a visceral level.

    I'm a CS teacher, so this is where I see a huge danger right now and I'm explicit with my students about it: you HAVE to write the code. You CAN'T let the machines write the code. Yes, they can write the code: you are a student, the code isn't hard yet. But you HAVE to write the code.

    • orev 6 hours ago
      It’s like weightlifting: sure you can use a forklift to do it, but if the goal is to build up your own strength, using the forklift isn’t going to get you there.

      This is the ultimate problem with AI in academia. We all inherently know that “no pain no gain” is true for physical tasks, but the same is true for learning. Struggling through the new concepts is essentially the point of it, not just the end result.

      Of course this becomes a different thing outside of learning, where delivering results is more important in a workplace context. But even then you still need someone who does the high level thinking.

      • frankc 6 hours ago
        I think this is a pretty solid analogy but I look at the metaphor this way - people used to get strong naturally because they had to do physical labor. Because we invented things like the forklift, we had to invent things like weightlifting to get strong instead. You can still get strong, you just need to be more deliberate about it. It doesn't mean you shouldn't also use a forklift, which is its own distinct skill you also need to learn.

        It's not a perfect analogy though because in this case it's more like automated driving - you should still learn to drive because the autodriver isn't perfect and you need to be ready to take the wheel, but that means deliberate, separate practice at learning to drive.

        • WorldMaker 4 hours ago
          > people used to get strong naturally because they had to do physical labor

          I think that's a bit of a myth. The Greeks and Romans had weightlifting and boxing gyms, but no forklifts. Many of the most renowned Romans in the original form of the Olympics and in Boxing were Roman Senators with the wealth and free time to lift weights and box and wrestle. One of the things we know about the famous philosopher Plato is that "Plato" was essentially a wrestling nickname (meaning "Broad") from his first career (somewhat like Dwayne "The Rock" Johnson, which adds a fun twist to reading the Socratic Dialogues or thinking about relationships as "platonic").

          Arguably the "meritocratic ideal" of the Gladiator arena was that even "blue collar" Romans could compete and maybe survive. But even the stories that survive of that, few did.

          There may be a lesson in that myth, too: the people who succeed in some sports often aren't the ones doing physical labor because they must (for a job); they are the ones intentionally practicing it in the ways needed to do well in sports.

          • port11 1 hour ago
            I can’t attest to the entire past, but my ancestors on both sides were farmers or construction workers. They were fit. Heck, my dad has a beer gut at 65 but still has arm muscles that’ll put me, someone who lifts weights once a week, to shame. I did construction for a summer and everyone there was in good shape.

            They don’t go to the gym, they don’t have the energy; the job shapes you. More or less the same for the farmers in the family.

            Perhaps this was less so in the industrial era because of poor nutrition (source: Bill Bryson, hopefully well researched). Hunter gatherer cultures that we still study today have tremendous fitness (Daniel Lieberman).

          • thaumasiotes 1 hour ago
            > I think that's a bit of a myth.

            Why do you think that? It's definitely true. You can observe it today if you want to visit a country where peasants are still common.

            From Bret Devereaux's recent series on Greek hoplites:

            > Now traditionally, the zeugitai were regarded as the ‘hoplite class’ and that is sometimes supposed to be the source of their name

            > but what van Wees is working out is that although the zeugitai are supposed to be the core of the citizen polity (the thetes have limited political participation) there simply cannot be that many of them because the minimum farm necessary to produce 200 medimnoi of grain is going to be around 7.5 ha or roughly 18 acres which is – by peasant standards – an enormous farm, well into ‘rich peasant’ territory.

            > Of course with such large farms there can’t be all that many zeugitai and indeed there don’t seem to have been. In van Wees’ model, the zeugitai-and-up classes never supply even half of the number of hoplites we see Athens deploy

            > Instead, under most conditions the majority of hoplites are thetes, pulled from the wealthiest stratum of that class (van Wees figures these fellows probably have farms in the range of ~3 ha or so, so c. 7.5 acres). Those thetes make up the majority of hoplites on the field but do not enjoy the political privileges of the ‘hoplite class.’

            > And pushing against the ‘polis-of-rentier-elites’ model, we often also find Greek sources remarking that these fellows, “wiry and sunburnt” (Plato Republic 556cd, trans. van Wees), make the best soldiers because they’re more physically fit and more inured to hardship – because unlike the wealthy hoplites they actually have to work.

            ( https://acoup.blog/2026/01/09/collections-hoplite-wars-part-... )

            ---

            > Many of the most renowned Romans in the original form of the Olympics and in Boxing were Roman Senators

            In the original form of the Olympics, a Roman senator would have been ineligible to compete, since the Olympics was open only to Greeks.

        • thesz 5 hours ago
          Weightlifting and weight training were invented long before forklifts. Even levers were not properly understood back then.

          My favorite historic example of typical modern hypertrophy-specific training is the training of Milo of Croton [1]. By legend, his father gifted him a calf and asked daily "what is your calf, how does it do? bring it here to look at him", which Milo did. As the calf's weight grew, so did Milo's strength.

          This is the application of external resistance (the calf) and progressive overload (the growing calf) at work.

          [1] https://en.wikipedia.org/wiki/Milo_of_Croton

          Milo lived before Archimedes.

          • aaronbrethorst 4 hours ago
            Dad needs to respect that we need rest days.
            • thesz 1 hour ago
              Bulgarian Method does not have rest days: https://www.mashelite.com/the-bulgarian-method-is-worth-a-lo...

              Alexander Zass (Iron Samson) also trained each day: https://en.wikipedia.org/wiki/Alexander_Zass

              "He was taken as a prisoner of war four times, but managed to escape each time. As a prisoner, he pushed and pulled his cell bars as part of strength training, which was cited as an example of the effectiveness of isometrics. At least one of his escapes involved him 'breaking chains and bending bars'."

              Rest days are overrated. ;)

              • hxugufjfjf 10 minutes ago
                They are until you get injured, burned out, or both, and stop training altogether.
          • gadflyinyoureye 1 hour ago
            I looked up the weight of cows from that era. Only about 400 lbs. Seems doable.
          • epiccoleman 1 hour ago
            > what is your calf, how does it do?

            ... it's a calf, dad, just like yesterday

          • chairmansteve 3 hours ago
            Milo might have had slaves, the forklifts of his time....
        • hennell 4 hours ago
          > if the goal is to build up your own strength

          I think you missed this line. If the goal is just to move weights or lift the most - forklift away. If you want to learn to use a forklift, drive on and best of luck. But if you're trying to get stronger, the forklift will not help that goal.

          Like many educational tests, the outcome is not the point - doing the work to get there is. If you're asked to code fizz buzz, it's not because the teacher needs you to solve fizz buzz for them; it's because you will learn things while you make it. AI, copying Stack Overflow, using someone's code from last year - it all solves the problem while missing the purpose of the exercise. You're not learning - and learning, presumably, is your goal.
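
          To make that concrete, the entire exercise is a dozen lines. A minimal sketch in C:

              #include <stdio.h>

              int main(void) {
                  for (int i = 1; i <= 100; i++) {
                      if (i % 15 == 0)          /* divisible by both 3 and 5 - check this first */
                          puts("FizzBuzz");
                      else if (i % 3 == 0)
                          puts("Fizz");
                      else if (i % 5 == 0)
                          puts("Buzz");
                      else
                          printf("%d\n", i);
                  }
                  return 0;
              }

          The teacher can write that in their sleep. The value is the student discovering for themselves why the 15 case has to come before the 3 and 5 cases.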

      • jrm4 5 hours ago
        I like this analogy along with the idea that "it's not an autonomous robot, it's a mech suit."

        Here's the thing -- I don't care about "getting stronger." I want to make things, and now I can make bigger things WAY faster because I have a mech suit.

        edit: and to stretch the analogy, I don't believe much is lost "intellectually" by my use of a mech suit, as long as I observe carefully. Me doing things by hand is probably overrated.

        • orev 4 hours ago
          The point of going to school is to learn all the details of what goes into making things, so when you actually make a thing, you understand how it’s supposed to come together, including important details like correct design that can support the goal, etc. That’s the “getting stronger” part that you can’t skip if you expect to be successful. Only after you’ve done the work and understand the details can you be successful using the power tools to make things.
          • charcircuit 1 hour ago
            The point of school for me was to get a degree. 99% of the time at school was useless. The internet was a much better learning resource. Even more so now that AI exists.
            • josephg 28 minutes ago
              I graduated about 15 years ago. In that time, I’ve formed the opposite opinion. My degree - the piece of paper - has been mostly useless. But the ways of thinking I learned at university have been invaluable. That and the friends I made along the way.

              I’ve worked with plenty of self taught programmers over the years. Lots of smart people. But there’s always blind spots in how they approach problems. Many fixate on tools and approaches without really seeing how those tools fit into a wider ecosystem. Some just have no idea how to make software reliable.

              I’m sure this stuff can be learned. But there is a certain kind of deep, slow understanding you just don’t get from watching back-to-back 15 minute YouTube videos on a topic.

        • bccdee 29 minutes ago
          > Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it? — The Elements of Programming Style, 2nd edition, chapter 2

          If you weren't even "clever enough" to write the program yourself (or, more precisely, if you never cultivated a sufficiently deep knowledge of the tools & domain you were working with), how do you expect to fix it when things go wrong? Chatbots can do a lot, but they're ultimately just bots, and they get stuck & give up in ways that professionals cannot afford to. You do still need to develop domain knowledge and "get stronger" to keep pace with your product.

          Big codebases decay and become difficult to work with very easily. In the hands-off vibe-coded projects I've seen, that rate of decay was extremely accelerated. I think it will prove easy for people to get over their skis with coding agents in the long run.

        • storystarling 14 minutes ago
          The mech suit works well until you need to maintain stateful systems. I've found that while initial output is faster, the AI tends to introduce subtle concurrency bugs between Redis and Postgres that are a nightmare to debug later. You get the speed up front but end up paying for it with a fragile architecture.
        • ljm 26 minutes ago
          If all I know is the mech suit, I’ll struggle with tasks that I can’t use it for. Maybe even get stuck completely. Now it’s a skill issue because I never got my 10k hours in and I don’t even know what to observe or how to explain the outcome I want.

          In true HN fashion of trading analogies, it’s like starting out full powered in a game and then having it all taken away after the tutorial. You get full powered again at the end but not after being challenged along the way.

          This makes the mech suit attractive to newcomers and non-programmers, but only because they see product in massively simplified terms. Because they don’t know what they don’t know.

        • wrs 3 hours ago
          OK, it’s a mech suit. The question under discussion is, do you need to learn to walk first, before you climb into it? My life experience has shown me you can’t learn things by “observing”, only by doing.
        • quinnjh 5 hours ago
          This analogy works pretty well. Too much time doing everything in it and your muscles will atrophy. Some edge cases will be better if you jump out and use your hands.
          • WorldMaker 4 hours ago
            There's also plenty of mech tales where the mech pilots need to spend as much time out of the suits making sure their muscles (and/or mental health) are in good strength precisely because the mechs are a "force multiplier" and are only as strong as their pilot. That's a somewhat common thread in such worlds.
            • ekidd 3 hours ago
              Yes. Also, it's a fairly common trope that if you want to pilot a mech suit, you need to be someone like Tony Stark. He's a tinkerer and an expert. What he does is not a commodity. And when he loses his suit and access to his money? His big plot arc is that he is Iron Man. He built it in a cave out of a box of scraps, etc.

              There are other fictional variants: the giant mech with the enormous support team, or Heinlein's "mobile infantry." And virtually every variation on the Heinlein trope has a scene of drop commandos doing extensive pre-drop checks on their armor.

              The actual reality is that it isn't too hard for a competent engineer to pair with Claude Code, if they're willing to read the diffs. But if you try to increase the ratio of agents to humans, dealing with their current limitations quickly starts to feel like you need to be Tony Stark.

              • hedgedoops2 3 hours ago
                For me the idea of "people piloting mech suits" brings up lost kids, like Shinji from NGE.
              • bitwize 3 hours ago
                You don't need to be Tony Stark. But, "if you're nothing without the suit then you don't deserve it."
          • specproc 4 hours ago
            I like the electric bike as a metaphor. You can go further faster, but you quickly find yourself miles from home and out of juice, and you ain't in shape enough to get that heavy bugger back.
            • fragmede 3 hours ago
              As long as we're beating the metaphor... so don't do that? Make sure you charge the battery and that it has enough range to get you home, and bring the charger with you. Or in the LLM's case, make sure it's not generating a ball of mud (code). Refactor often, into discrete classes and distinct areas of functionality, so that you're never miles from home and out of juice.
              • SpaceNoodled 3 hours ago
                At that rate, I would already be there if I had just walked.
        • xnx 2 hours ago
          > "it's not an autonomous robot, it's a mech suit."

          Or "An [electric] bicycle for the mind." Steve Jobs/simonw

        • bitwize 3 hours ago
          No, it's not a mech suit. A mech suit doesn't fire its canister rifle at friendly units and then say "You're absolutely right! I should have done an IFF before attacking that unit." (And if it did the engineer responsible should be drawn and quartered.) Mech-suit programming AI would look like something that reads your brainwaves and transduces them into text, letting you think your code into the machine. I'd totally use that if I had it.
        • PKop 4 hours ago
          > I want to make things

          You need to be strong to do so. Things of any quality or value at least.

      • treetalker 5 hours ago
        Misusing a forklift might injure the driver and a few others; but it is unlikely to bring down an entire electric grid, expose millions to fraud and theft, put innocent people in prison, or jeopardize the institutions of government.

        There is more than one kind of leverage at play here.

        • pjc50 5 hours ago
          > Misusing a forklift might injure the driver and a few others; but it is unlikely to bring down an entire electric grid

          That's the job of the backhoe.

          (this is a joke about how diggers have caused quite a lot of local internet outages by hitting cables, sometimes supposedly "redundant" cables that were routed in the same conduit. Hitting power infrastructure is rare but does happen)

          • _whiteCaps_ 4 hours ago
            At my last job we had the power taken out by a backhoe. It was loaded onto a trailer and either the operator forgot to lower the bucket, or the driver drove away before he had time to lower it.

            Regardless of whose fault it was, the end result was the bucket snagged the power lines going into the datacentre and caused an outage.

        • yetihehe 4 hours ago
          > but it is unlikely to bring down an entire electric grid

          Unless you happen to drive a forklift in a power plant.

          > expose millions to fraud and theft

          You can if you drive a forklift in a bank.

          > put innocent people in prison

          You can use a forklift to put several innocent people in prison in one trip; they have pretty high capacity.

          > jeopardize the institutions of government.

          It's pretty easy with a forklift, just try driving through the main gate.

          > There is more than one kind of leverage at play here.

          Forklifts typically have several axes of travel.

          • mystifyingpoi 1 hour ago
            The level of pettiness in this comment is through the roof. I love it.
      • arghwhat 6 hours ago
        I do appreciate the visual of driving a forklift into the gym.

        The activity would train something, but it sure wouldn't be your ability to lift.

        • Lerc 3 hours ago
          A version of this does happen with regard to fitness.

          There are enthusiasts who will spend an absolute fortune to get a bike that is a few grams lighter and then use it to ride up hills for the exercise.

          Presumably a much cheaper bike would mean you could use a smaller hill for the same effect.

          • arghwhat 2 hours ago
            From an exercise standpoint, sure, but with sports there is more to it than just maximizing exercise.

            If you practice judo you're definitely exercising but the goal is defeating your opponent. When biking or running you're definitely exercising but the goal is going faster or further.

            From an exercise optimization perspective you should be sitting on a spinner with a customized profile, or maybe doing some entirely different motion.

            If sitting on a carbon fiber bike, shaving half a second off your multi-hour time, is what brings you joy and motivation, then I say screw any further justification. You do you. Just be mindful of others, as the path you ride isn't your property.

      • boilerupnc 4 hours ago
        I feel like the aviation pilot angst captured by "automation dependency" and the fears around skills loss is another great analogy. [0]

        [0] https://eazypilot.com/blog/automation-dependency-blessing-or...

      • hyperpape 5 hours ago
        How seriously do you mean the analogy?

        I think forklifts probably carry more weight over longer distances than people do (though I could be wrong, 8 billion humans carrying small weights might add up).

        Certainly forklifts have more weight * distance when you restrict to objects that are over 100 pounds, and that seems like a reasonable restriction.

        • burkaman 5 hours ago
          I think it's a good analogy. A forklift is a useful tool and objectively better than humans for some tasks, but if you've never developed your muscles because you use the forklift every time you go to the gym, then when you need to carry a couch up the stairs you'll find that you can't do it and the forklift can't either.

          So the idea is that you should learn to do things by hand first, and then use the powerful tools once you're knowledgeable enough to know when they make sense. If you start out with the powerful tools, then you'll never learn enough to take over when they fail.

          • bluGill 5 hours ago
            A forklift can do things no human can. I've used a forklift for things that no group of humans could - you can't physically get enough humans around that size object to lift it. (of course levers would change this)
          • AlexandrB 2 hours ago
            Yeah, it's a great analogy. Pushing it even further: a forklift is superhuman, but only in specific environments that are designed for it. As soon as you're off of pavement a forklift can't do much. As soon as an object doesn't have somewhere to stick the forks you need to get a bunch of other equipment to get the forklift to lift it.
        • _flux 5 hours ago
          You're making the analogy work: the point of weightlifting as a sport or exercise is not to actually move the weights, but to condition your body such that it can move the weights.

          Indeed, usually after doing weightlifting, you return the weights to the place where you originally took them from, so I suppose that means you did no work at all in the first place...

          • TeMPOraL 5 hours ago
            That's true of exercise in general. It's bullshit make-work we do to stay fit, because we've decoupled individual survival from hard physical labor, so it doesn't happen "by itself" anymore. A blessing and a curse.
      • ModernMech 46 minutes ago
        I've been showing my students this video of a robot lifting weights to illustrate why they shouldn't use AI to do their homework. It's obvious to them the robot lifting weights won't make them stronger.

        https://www.youtube.com/watch?v=Be7WBGMo3Iw

      • _heimdall 5 hours ago
        The real challenge will be that people almost always pick the easier path.

        We have a decent sized piece of land and raise some animals. People think we're crazy for not having a tractor, but at the end of the day I would rather do it the hard way and stay in shape while also keeping a bit of a cap on how much I can change or tear up around here.

      • wklm 4 hours ago
        I like the weightlifting parable!
      • stackedinserter 5 hours ago
        Unlike weightlifting, the main goal of our jobs is not to lift heavy things, but to develop a product that adds value to its users.

        Unfortunately, many devs don't understand it.

        • burkaman 5 hours ago
          Yes but the goal of school is to lift heavy things, basically. You're trying to do things that are difficult (for you) but don't produce anything useful for anyone else. That's how you gain the ability to do useful things.
          • bluGill 5 hours ago
            Even after school, you need to lift weights once in a while or you lose your ability.

            I wouldn't want to write raw bytes like Mel did though. Eventually some things are not worth getting good at.

            • stackedinserter 5 hours ago
              Let's just accept that this weightlifting metaphor is leaky, like any other, and leads us to absurdities like forklift operators needing to lift dumbbells to stay relevant in their jobs.
              • bluGill 4 hours ago
                Forklift operators need to do something to exercise. They sit in the seat all day. At least as a programmer I have a standing desk. This isn't relevant to the job though.
          • stackedinserter 5 hours ago
            I kinda get the point, but why is that? The goal of school is to teach something that's applicable in industry or academia.

            Forklift operators don't lift things in their training. Even CS students start with pretty high level of abstraction, very few start from x86 asm instructions.

            We need to make them implement ALUs out of logic gates and wires if we want them to lift heavy things.

            • lostdog 4 hours ago
              We begin teaching math by having students solve problems that are trivial for a calculator.

              Though I also wonder what advanced CS classes should look like. If the agent can code nearly anything, what project would challenge student+agent and teach the student how to accomplish CS fundamentals with modern tools?

              • burkaman 3 hours ago
                In one of my college classes, after you submitted your project you'd have a short meeting with a TA and/or the professor to talk through your solution. For a smaller advanced class I think this kind of thing is feasible and can help prevent blind copy/pasting. If you wrote your code with an LLM but you're still able to have a knowledgeable conversation about it, then great, that's what you're going to do in the real world too. If you can't answer any questions about it and it seems like you don't understand your own code, then you don't get a good grade even if it works.

                As an added bonus, being able to discuss your code with another engineer that wasn't involved in writing it is an important skill that might not otherwise be trained in college.

    • daxfohl 7 minutes ago
      Not only that, it's also about constitution. I'm finding this with myself. After vibe coding for a month or so, I let my subscription expire. Now when I look at the code it's like "ugh, you mean now I have to think about this with my own brain???"

      Even while vibe-coding, I often found myself getting annoyed just having to explain things. The amount of patience I have for anything that doesn't "just work" the first time has drifted toward zero. If I can't get AI to do the right thing after three tries, "welp, I guess this project isn't getting finished!"

      It's not just laziness, it's like AI eats away at your pride of ownership. You start a project all hyped about making it great, but after a few cycles of AI doing the work, it's easy to get sucked into, "whatever, just make it work".

    • goostavos 4 hours ago
      I had my first interview last week where I finally saw this in the wild. It was a student applying for an internship. It was the strangest interview. They had excellent textbook knowledge. They could tell you the space and time complexities of any data structure, but they couldn't explain anything about code they'd written or how it worked. After many painful and confusing minutes of trying to get them to explain, like, literally anything about how this thing on their resume worked, they finally shrugged and said that "GenAI did most of it."

      It was a bizarre disconnect having someone be both highly educated and yet crippled by not doing.

      • stahorn 3 hours ago
        Sounds a little bit like the stories from Feynman, e.g.: https://enlightenedidiot.net/random/feynman-on-brazilian-edu...

        The students had memorized everything, but understood nothing. Add in access to generative AI, and you have the situation that you had with your interview.

        It's a good reminder that what we really do, as programmers or software engineers or what you wanna call it, is understanding how computers and computations work.

      • drob518 4 hours ago
        Lots of theory but no practice.
        • sally_glance 4 hours ago
          More like using a calculator but not being able to explain how to do the calculation by hand. A probabilistic calculator which is sometimes wrong at that. The "lots of theory but no practice" has always been true for a majority of graduates in my experience.
          • drob518 28 minutes ago
            Surely, new grads are light on experience (particularly relevant experience), but they have student projects and whatnot that they should be able to explain, particularly for coding. Hardware projects are rarer simply because parts cost money and schools have limited budgets, but software has far fewer demands.
      • yomismoaqui 3 hours ago
        This is the kind of interaction that makes me think that there are only 2 possible futures:

        Star Trek or Idiocracy.

        • steve_adams_86 3 hours ago
          Hmmm, I think we're more likely to face an Idiocracy outcome. We need more Geordi La Forges out there, but we've got a lot of Fritos out here vibe coding the next Carl's Jr. locating app instead
        • antonvs 3 hours ago
          Star Trek illustrated the issue nicely in the scene where Scotty, who we should remember is an engineer, tries to talk to a computer mouse in the 20th century: https://www.youtube.com/watch?v=hShY6xZWVGE
      • vonneumannstan 3 hours ago
        This is exactly the end state of hiring via Leetcode.
    • jillesvangurp 6 hours ago
      What you as a teacher teach might have to adapt a bit. Teaching how code works is more important than teaching how to code. Most academic computer scientists aren't necessarily very skilled as programmers in any case. At least, I learned most of that after I stopped being an academic myself (Ph.D. and all). This is OK. Learning to program is more of a side effect of studying computer science than it is a core goal (this is not always clearly understood).

      A good analogy here is programming in assembler. Manually crafting programs at the machine code level was very common when I got my first computer in the 1980s, especially for games. By the late 90s that had mostly disappeared. Roller Coaster Tycoon was one of the last games with huge commercial success that was coded like that. C/C++ took over, and these days most game studios license an engine and then do a lot of work with languages like C# or Lua.

      I never did any meaningful amount of assembler programming. It was mostly no longer a relevant skill by the time I studied computer science (94-99). I built an interpreter for an imaginary CPU at some point using a functional programming language in my second year. Our compiler course was taught by people like Erik Meijer (who later worked on things like F# at MS), who just saw that as a great excuse to teach people functional programming instead. In hindsight, that was actually a good skill to have as functional programming interest heated up a lot about 10 years later.

      The point of this analogy: compilers are important tools. It's more important to understand how they work than it is to be able to build one in assembler. You'll probably never do that. Most people never work on compilers. Nor do they build their own operating systems, databases, etc. But it helps to understand how they work. The point of teaching how compilers work is understanding how programming languages are created and what their limitations are.

      • throw10920 4 hours ago
        > Teaching how code works is more important than teaching how to code.

        People learn by doing. There's a reason that "do the textbook problems" is somewhat of a meme in the math and science fields - because that's the way that you learn those things.

        I've met someone who said that when he gets a textbook, he starts by only doing the problems, skipping the chapter content entirely. Only when he has significant trouble with the problems (i.e. he's stuck on a single one for several hours) does he read the chapter text.

        He's one of the smartest people I know.

        This is because you learn by doing the problems. In the software field, that means coding.

        Telling yourself that you could code up a solution is very different than actually being able to write the code.

        And writing the code is how you build fluency and understanding as to how computers actually work.

        > I never did any meaningful amount of assembler programming. It was mostly no longer a relevant skill by the time I studied computer science (94-99). I built an interpreter for an imaginary CPU at some point using a functional programming language in my second year.

        Same thing for assembly. Note that you built an interpreter for an imaginary CPU - not a real one, as that would have been a much harder challenge given that you didn't do any meaningful amount of assembly program and didn't understand low-level computer hardware very well.

        Obviously, this isn't to say that information about how a system works can't be learned without practice - just that that's substantially harder and takes much more time (probably 3-10x), and I can guarantee you that those doing vibecoding are not putting in that extra time.

      • techblueberry 5 hours ago
        > The point of this analogy: compilers are important tools. It's more important to understand how they work than it is to be able to build one in assembler. You'll probably never do that. Most people never work on compilers. Nor do they build their own operating systems, databases, etc. But it helps to understand how they work. The point of teaching how compilers work is understanding how programming languages are created and what their limitations are.

        I don't know that it's all these things at once, but most people I know who are good have done a bunch of spikes / side projects that go a level lower than they have to. Intense curiosity is good, and to the point you're making, most people don't really learn this stuff just by reading or doing flash cards. If you want to really learn how a compiler works, you probably do have to write a compiler. Not a full-on production-ready compiler, but hands on keyboard, typing and interacting with and troubleshooting code.

        Or maybe to put it another way: it's probably the "easiest" way, even though it's the "hardest" way. Or maybe it's the only way. Everything I know how to do well, I know from practice and repetition.

      • DHPersonal 6 hours ago
        I only learn when I do things, not when I hear how they work. I think the teacher has the right idea.
        • H1Supreme 3 hours ago
          A million percent! I was so bad at Math in school. Which I primarily blame on the arbitrary way in which we were taught it. It wasn't until I was able to apply it to solving actual problems that it clicked.
        • moritzruth 5 hours ago
          Yes, I do too, but the point they were trying to make is that "learning how to write code" is not the point of CS education, but only a side effect.
          • thfuran 5 hours ago
            A huge portion of the students in CS pursue the degree precisely to learn to write code, and the CS itself is more of a side effect.
            • Attrecomet 4 hours ago
              Which is a pretty big failure of somewhere in the education pipeline -- don't expect a science program to do what a trade is there for! (to be clear, I'm not trying to say the students are wrong in choosing CS in order to get a good coding job, but somewhere, expectations and reality are misaligned here. Perhaps with companies trying to outsource their training to universities while complaining that the training isn't spot-on for what they need?)
      • vidarh 4 hours ago
        > A good analogy here is programming in assembler. Manually crafting programs at the machine code level was very common when I got my first computer in the 1980s. Especially for games. By the late 90s that had mostly disappeared.

        Indeed, a lot of us looked with suspicion and disdain at people that used those primitive compilers that generated awful, slow code. I once spent ages hand-optimizing a component that had been written in C, and took great pleasure in the fact I could delete about every other line of disassembly...

        When I wrote my first compiler a couple of years later, it was in assembler at first, and supported inline assembler so I could gradually convert to bootstrap it that way.

        Because I couldn't imagine writing it in C, given the awful code the C compilers I had available generated (and how slow they were)...

        These days most programmers don't know assembler, and increasingly don't know languages as low level as C either.

        And the world didn't fall apart.

        People will complain that it is necessary for them to know the languages that will slowly be eaten away by LLMs, just like my generation argued it was absolutely necessary to know assembler if you wanted to be able to develop anything of substance.

        I agree with you people should understand how things work, though, even if they don't know it well enough to build it from scratch.

        • user____name 2 hours ago
          > These days most programmers don't know assembler, and increasingly don't know languages as low level as C either. And the world didn't fall apart.

          Maybe the world didn't fall apart, but user interactions on a desktop pc feel slower than ever. So perhaps they should.

      • QuadmasterXLII 5 hours ago
        When I did a CS major, there was a semester of C, a semester of assembly, a semester of building a Verilog CPU, etc. I'd be shocked if an optimal CS education involved vibecoding these courses to any significant degree.
      • jandrewrogers 3 hours ago
        While I may not write assembler, there is still significant value in being able to read assembler, e.g. on godbolt.
    • leros 22 minutes ago
      I see junior devs hyping vibe coding and senior devs mostly using AI as an assistant. I fall in the latter camp myself.

      I've hired and trained tons of junior devs out of university. They become 20x productive after a year of experience. I think vibe coding is getting new devs to 5x productivity, which seems amazing, but then they get stuck there because they're not learning. So after year one, they're a 5x developer, not a 20x developer like they should be.

      I have some young friends who are 1-3 years into software careers, and I'm surprised by how little they know.

      • saturnite 13 minutes ago
        If I find myself writing code in a way that has me saying to myself "there has to be a better way," there usually is. That's when I could present AI with that little bit of what I want to write. What I've found to be important is to describe what I want in natural language. That's when AI might introduce me to a better way of doing things. At that point, I stop and learn all that I can about what the AI showed me. I look it up in books and trusted online tutorials to make sure it is the proper way to do it.
    • danmaz74 4 hours ago
      When learning basic math, you shouldn't use a calculator, because otherwise you aren't really understanding how it works. Later, when learning advanced math, you can use calculators, because you're focusing on a different abstraction level. I see the two situations as very similar.
    • Isamu 4 hours ago
      Same with essay assignments, you exercise different neural pathways by doing it yourself.

      Recently in comments people were claiming that working with LLMs has sharpened their ability to organize thoughts, and that could be a real effect that would be interesting to study. It could be that watching an LLM organize a topic could provide a useful example of how to approach organizing your own thoughts.

      But until you do it unassisted you haven’t learned how to do it.

      • nonethewiser 4 hours ago
        The natural solution is right there in front of us but we hate to admit it because it still involves LLMs and changes on the teaching side. Just raise the bar until they struggle.
    • sltr 3 hours ago
      LLMs are not bicycles for the mind. They are more like E-bikes. More assist makes you go faster, but provides less exercise.

      https://www.slater.dev/2025/08/llms-are-not-bicycles-for-the...

    • pmarreck 4 hours ago
      I haven't done long division in decades, am probably unable to do it anymore, and yet it has never held me back in any tangible fashion (and won't unless computers and calculators stop existing)
      • Archer6621 4 hours ago
        That makes sense. Some skills just have more utility than others. There are skills that are universally relevant (e.g. general problem solving), and then there are skills that are only relevant in a specific time period or a specific context.

        With how rapidly the world has been changing lately, it has become difficult to estimate which of those more specific skills will remain relevant for how long.

      • tudelo 4 hours ago
        I am rather positive that if you were sat down in a room and couldn't leave unless you did some mildly complicated long division, you would succeed. Just because it isn't a natural thing anymore and you have not done the drills in decades doesn't mean the knowledge is completely lost.
        • pmarreck 3 hours ago
          If you are concerned that embedding "from first principles" reasoning in widely available LLMs may create future generations that cannot do it themselves, then I share your concern. I also think it may be overrated. Plenty of people "do division" without quite understanding how it all works (unfortunately).

          And plenty of people will still come along who love to code despite AIs excelling at it. In fact, calling out the AI on bad design or errors seems to be the new "code golf".

    • WalterBright 5 hours ago
      I remember reading about a metal shop class, where the instructor started out by giving each student a block of metal, and a file. The student had to file an end wrench out of the block. Upon successful completion, then the student would move on to learning about the machine tools.

      The idea was to develop a feel for cutting metal, and to better understand what the machine tools were doing.

      --

      My wood shop teacher taught me how to use a hand plane. I could shave off wood with it that was so thin it was transparent. I could then join two boards together with a barely perceptible crack between them. The jointer couldn't do it that well.

      • ungreased0675 12 minutes ago
        This concept can be taken to ridiculous extremes, where learning the actual useful skill takes too long for most participants to get to. For example, the shop class teacher taking his students out into the wilderness to prospect for ore, then building their own smelter, then making their own alloy, then forging billet, etc.
      • WalterBright 4 hours ago
        Also, in college, I'd follow the derivation that the prof did on the chalkboard, and think I understood it. Then, doing the homework, I'd realize I didn't understand it at all. Doing the homework myself was where the real learning occurred.
      • darknavi 5 hours ago
        In middle school (I think) we spent a few days in math class hand-calculating trigonometry values (cosine, sine, etc.). Only after we did that did our teacher tell us that the mandated calculators we had all been using for the last few months have a magic button that will "solve" for the values for you. It definitely made me appreciate the calculator more!
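
        (For the curious: one way to grind those values out by hand is the series expansion. A purely illustrative sketch in C below - real calculators use fancier methods, but the flavor is the same.)

            /* cos(x) via its Taylor series: each term is derived from the previous one */
            double cos_series(double x) {
                double term = 1.0, sum = 1.0;
                for (int n = 1; n < 20; n++) {
                    term *= -x * x / ((2.0 * n - 1) * (2.0 * n));  /* next term from the last */
                    sum += term;
                }
                return sum;
            }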
    • GoatInGrey 2 hours ago
      "Why think when AI do trick?" is an extremely alluring hole to jump headfirst into. Life is stressful, we're short on time, and we have obligations screaming in our ear like a crying baby. It seems appropriate to slip the ring of power onto your finger to deal with the immediate situation. Once you've put it on once, there is less mental friction to putting it on the next time. Over time, gently, overuse leads to the wearer cognitively deteriorating into a Gollum.
      • ash_091 2 hours ago
        > "Why think when AI do trick?"

        > grug once again catch grug slowly reaching for club, but grug stay calm

    • criddell 5 hours ago
      They don't always do the simple things well, which is even more frustrating.

      I do Windows development and GDI stuff still confuses me. I'm talking about memory DCs, compatible DCs, DIBs, DDBs, DIBSECTIONs, BitBlt, SetDIBits, etc. AIs also suck at this stuff. I'll ask for help with a relatively straightforward task, and it almost always produces code; then, when you ask it to defend the choices it made, it finds problems, apologizes, and goes in circles. One AI (I forget which) actually told me I should refer to Petzold's Programming Windows book because it was unable to help me further.
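
      For the curious, the basic memory-DC dance looks roughly like this. It's only a sketch of a double-buffered WM_PAINT handler, error handling omitted, where hwnd, ps, w, and h are assumed to be the window handle, a PAINTSTRUCT, and the client-area size:

          HDC hdc = BeginPaint(hwnd, &ps);
          HDC memDC = CreateCompatibleDC(hdc);                /* memory DC matching the screen */
          HBITMAP bmp = CreateCompatibleBitmap(hdc, w, h);    /* a DDB sized to the client area */
          HBITMAP oldBmp = (HBITMAP)SelectObject(memDC, bmp); /* a DC draws into its selected bitmap */

          /* ... draw the whole frame into memDC here ... */

          BitBlt(hdc, 0, 0, w, h, memDC, 0, 0, SRCCOPY);      /* copy the finished frame to the window */

          SelectObject(memDC, oldBmp);                        /* deselect before deleting */
          DeleteObject(bmp);
          DeleteDC(memDC);
          EndPaint(hwnd, &ps);

      Every step has a failure mode (drawing before a bitmap is selected, deleting a bitmap while it's still selected, mixing up DIB and DDB formats), which may be why the AIs tie themselves in knots over it.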

      • stavros 4 hours ago
        I'd prefer it to tell me it can't help me rather than write random code that I then have to spend time debugging.
    • JimmaDaRustla 36 minutes ago
      It doesn't PREVENT them from learning anything - said properly, it lets developers become lazy and miss important learning opportunities. That's not AI's fault.
    • nso 5 hours ago
      I agree 100%. But as someone with 25 years of development experience, holy crap it's nice not having to do the boring parts as much anymore.
    • nonethewiser 4 hours ago
      But what has changed? Students never had a natural reason to learn how to write fizz buzz. It's been done before and it's not even useful. There has always been an arbitrary nature to these exercises.

      I actually fear more for the middle-of-career dev who has shunned AI as worthless. It's easier than ever for juniors to learn and be productive.

    • ray_v 4 hours ago
      Lots of interesting ways to spin this. I was in a computer science course in the late 90s and we were not allowed to use the C++ standard library because it made you a "lazy programmer" according to the instructor. I'm not sure I agree with that, but the way I look at it is that computer science is all about abstraction, and it seems to me that AI, generative pair programming, vibe coding, or whatever you want to call it is just another level of abstraction. What is probably more important is to learn what are and are not good programming and project structures, and to use AI to abstract the boilerplate, scaffolding, etc. so that you can avoid foot guns early in your development cycle.
      • GoatInGrey 2 hours ago
        The counterargument here is that there is a distinction between an arbitrary line in the sand ("the C++ stdlib is bad") and using a text-generating machine to perform work for you, beginning to end. You are correct that, as a responsibly used tool, LLMs offer exceptional utility and value. But keep in sight the laziness of humans who focus on the immediate end result over the long-term consequences.

        It's the difference between the employee who copy-pastes all of their email bodies from ChatGPT versus the one who writes a full draft themselves and then asks an LLM for constructive feedback. One develops skills while the other atrophies.

        • amunozo 1 hour ago
          That's why it's so important to teach how to use them properly instead of demonizing them. Let's be realistic: they are not going to disappear, and students and workers are not going to stop using them.
      • bluGill 4 hours ago
        When in school, the point is often to learn how to write complex code by writing things the standard library already does.

        Though also, in the 90's the standard library was new and often had bugs.

    • robmccoll 5 hours ago
      Yes! You are best served by learning what a tool is doing for you by doing it yourself or carefully studying what it uses and obfuscates from you before using the tool. You don't need to construct an entire functioning processor in an HDL, but understanding the basics of digital logic and computer architecture matters if you're EE/CompE. You don't have to write an OS in asm, but understanding assembly and how it gets translated into binary and understanding the basics of resource management, IPC, file systems, etc. is essential if you will ever work in something lower level. If you're a CS major, algorithms and data structures are essential. If you're just learning front end development on your own or in a boot camp, you need to learn HTML and the DOM, events, how CSS works, and some of the core concepts of JS, not just React. You'll be better for it when the tools fail you or a new tool comes along.
    • byronic 2 hours ago
      I was so lucky to land in a CS class where we were writing C++ by hand. I don't think that exists anymore, but it is where I would go in terms of teaching CS from first principles.
    • acessoproibido 4 hours ago
      I'm not so sure. I spent A LOT of time writing sorting algo code by hand in university. I spent so much time writing assembly code by hand. So much more time writing instructions for MIPS by hand. (To be fair, I did study EE, not CS.)

      I learned more about programming in a weekend badly copying hack modules for Minecraft than I learned in 5+ years in university.

      All that stuff I did by hand back then, I haven't used a single time since.

      • brightball 4 hours ago
        I would interpret his take a little bit differently.

        You write sorting algorithms in college to understand how they work, and why some are faster than others, because it teaches you a mental model for data traversal strategies. In the real world, you will use pre-written versions of those algorithms in any language, but you understand them well enough to know what to select in a given situation based on the type of data. This especially comes into play when creating indexes for databases.
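
        To make that concrete, the canonical classroom example is something like insertion sort - a sketch in C, the kind of thing you write once by hand so that a library sort is never magic afterwards:

            void insertion_sort(int *a, int n) {
                for (int i = 1; i < n; i++) {
                    int key = a[i];              /* next unsorted element */
                    int j = i - 1;
                    while (j >= 0 && a[j] > key) {
                        a[j + 1] = a[j];         /* shift larger elements one slot right */
                        j--;
                    }
                    a[j + 1] = key;              /* drop it into place */
                }
            }

        Tracing those shifts by hand is what tells you why it's quadratic, why it shines on nearly-sorted data, and what an ordered index is buying you.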

        What I take the OP's statement to mean concerns the "meta" items, revolving more around learning abstractions. Write certain patterns by hand enough times and you will see the overlap and the opportunity to refactor, or to create an abstraction that can be used more effectively in your codebase.

        If you vibe code all of that stuff, you don't feel the repetition as much. You don't work through the abstractions and object relationships yourself, so you don't see the opportunity to understand why and how it could be improved.

      • phailhaus 4 hours ago
        You didn't write sorting code or assembly code because you were going to need to write it on the job. It gave you a grounding in how data structures and computers work at a fundamental level. That intuition is what makes picking up Minecraft hack mods much easier.
        • acessoproibido 4 hours ago
          That's the koolaid, but seriously, I don't really believe it anymore.

          I only had to do this legwork during university to prove that I could be allowed to try and write code for a living. The grounding, as you call it, is not required for that at all, since I'm a dozen levels of abstraction removed from it. It might be useful if I were a researcher or worked on optimizing complex cutting-edge stuff, but 99% of what I do is CRUD apps and REST APIs. That stuff can safely be done by anyone, no need for a degree. Tbf I'm from Germany, so in other places they might allow you to do this job without a degree.

    • mellosouls 4 hours ago
      Sure (knowing the underlying ideas and having proficiency in their application matters) - but producing software by conducting(?) LLMs is rapidly becoming a wide, deep, must-have skill, and the lack of it will be a weakness in any student entering the workplace.
    • FloorEgg 4 hours ago
      AI does have an incredibly powerful influence on learning. It can absolutely be used to a student's detriment, but it can also be just as powerful a learning tool. It all comes down to keeping the student in the zone of proximal development.

      If AI is used by the student to get the task done as fast as possible the student will miss out on all the learning (too easy).

      If no AI is used at all, students can get stuck for long periods of time, either due to mismatches between the instructional design and the specific learning context (a missing prereq) or due to mistakes in the instructional design.

      AI has the potential to keep all learners within an ideal difficulty for optimal rate of learning so that students learn faster. We just shouldn't be using AI tools for productivity in the learning context, and we need more AI tools designed for optimizing learning ramps.

    • andrewflnr 4 hours ago
      Similarly, it's always been the case that copy-pasting code out of a tutorial doesn't teach you as much as manually typing it out, even if you don't change it. That part of the problem isn't even new.
    • victorbjorklund 2 hours ago
      Yea, I doubt I could learn to program if I started today.
    • bstar77 1 hour ago
      Completely disagree. It’s like telling typists that they need to write by hand to truly understand their craft. Syntax is just a way of communicating a concept to the machine. We now have a new (and admittedly imperfect) way of doing that. New skills are going to be required. Computer science is going to have to adapt.
    • dfxm12 5 hours ago
      As a teacher, do you have any techniques to make sure students learn to write the code?
      • GoatInGrey 2 hours ago
        In-person analog checkpoints seem to be the most effective method. Think internet-disabled PCs managed by the school, written exams, oral exams, and so forth.

        Making students fix LLM-generated code until they're at their wits' end is a fun idea. Though it likely carries too high of an opportunity cost education-wise.

      • WalterBright 4 hours ago
        If I was a prof, I would make it clear to the students that they won't learn to program if they use AI to do it for them. For the students who wanted to learn, great! For those who just wanted to slide through with AI, I wouldn't care about them.
    • Quothling 6 hours ago
      I'm an external examiner for CS students in Denmark and I disagree with you. What we need in the industry is software engineers who can think for themselves, can interact with the business and understand its needs, and who know how computers work. What we get are mass-produced coders who have been taught some outdated way of designing and building software that we need to hammer out of them. I don't particularly care if people can write code like they work at the assembly line. I care that they can identify bottlenecks and solve them. That they can deliver business value quickly. That they will know when to do abstractions (which is almost never). Hell, I'd even like developers who will know when code quality doesn't matter, because shitty code will cost $2 a year but every hour they spend on it is $100-200.

      Your curriculum may be different, but around here it's frankly the same stuff I was taught 30 years ago. Except most of the actual computer science parts are gone, replaced with even more OOP, design pattern bullshit.

      That being said, I have no idea how you'd actually go about teaching students CS these days, considering a lot of them will probably use ChatGPT or Claude regardless of what you do. That is what I see in the grade statistics around here. For the first 9 years I was a well-calibrated grader, but these past 1.5ish years it's usually either top marks or bottom marks with nothing in between. Which puts me outside where I should be, but it matches the statistical calibration for everyone here. I obviously only see the product of CS educations, but even though I'm old, I can imagine how many corners I would have cut myself if I'd had LLMs available back then. Not to mention all the distractions the internet has brought.

      • lucianbr 6 hours ago
        > I don't particularly care if people can write code like they work at the assembly line. I care [...] That they can deliver business value quickly.

        In my experience, people who talk about business value expect people to code like they work at the assembly line. Churn out features, no disturbances, no worrying about code quality, abstractions, bla bla.

        To me, your comment reads as contradictory. You want initiative, and you also don't want initiative. I presume you want it when it's good and don't want it when it's bad, and if possible people should be clairvoyant and see the future so they can tell which is which.

        • pitched 5 hours ago
          I think we very often confuse engineers with scientists in this field. Think of the old joke: “anyone can build a bridge that stands, but it takes an engineer to build one that barely stands”. Business value and the goal of engineering is to make a bridge that is fast to build, cheap to make, and stays standing exactly as long as it needs to. This is very different from the goals of science, which are to test the absolute limits of known performance.

          What I read from GP is that they’re looking for engineering innovation, not new science. I don’t see it as contradictory at all.

        • vidarh 4 hours ago
          You should worry about code quality, but you should also worry about the return on investment.

          That includes understanding risk management and knowing what the risks and costs are of failures vs. the costs of delivering higher quality.

          Engineering is about making the right tradeoffs given the constraints set, not about building the best possible product separate from the constraints.

          Sometimes those constraints require extreme quality, because they include things like "this should never, ever fail", but most of the time they do not.

        • Quothling 3 hours ago
          Some of our code is of high quality. Other code can be of any quality, as it'll never need to be altered in its lifecycle. If we have 20,000 financial reports which need to be uploaded once, and then it'll never happen again, it really doesn't matter how terrible the code is as long as it only uses vetted external dependencies. The only reason you'd even use developer time on that task is because it's less error-prone than having student interns do it manually... I mean, I wish I could tell you it was to save them from a terrible task, but it'll solely be because of money.

          If it's firmware for a solar inverter in Poland, then quality matters.

        • stackedinserter 5 hours ago
          > people who talk about business value expect people to code like they work at the assembly line. Churn out features, no disturbances, no worrying about code quality, abstractions, bla bla.

          That's a typical misconception that "I'm an artist, let me rewrite it in Rust" people often have. Code quality has a direct money equivalent; you just need to be able to justify it to the people that pay your salary.

      • halfmatthalfcat 6 hours ago
        Let them use AI and then fall on their faces during exam time - simple as that. If you can't recall the theory, paradigm, methodology, whatever by memory then you have not "mastered" the content and thus, should fail the class.
      • bambax 4 hours ago
        > That being said. I have no idea how you'd actually go about teaching students CS these days, considering a lot of them will probably use ChatGPT or Claude regardless of what you do.

        My son is in a CS school in France. They have finals with pen and paper, with no computer whatsoever during the exam; if they can't do that they fail. And these aren't multiple choice questions, but actual code that they have to write.

        • vidarh 4 hours ago
          I had to do that too, in Norway. Writing C++ code with pen and paper and being told even trivial syntax errors like missing semicolons would be penalised was not fun.

          This was 30 years ago, though - no idea what it is like now. It didn't feel very meaningful even then.

          But there's a vast chasm between that and letting people use AI in an exam setting. Some middle ground would be nice.

        • WalterBright 4 hours ago
          I wrote code in a spiral notebook because the mainframe was not available to me at home.
          • nosianu 3 hours ago
            (U880 - the GDR's Z80 8-bit CPU clone)

            I wrote assembler on pages of paper. Then I used tables, and a calculator for the two's-complement relative negative jumps, to manually translate it into hex code. Then I had software to type in such hex dumps and save them to audio cassette, from which I could then load them for execution.
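
            For the curious, the arithmetic for those relative jumps goes roughly like this (a sketch in modern Python, with made-up addresses; the Z80's JR instruction stores a signed displacement byte measured from the address just after the two-byte instruction):

              def jr_displacement(jr_addr, target):
                  # the CPU has already advanced past the 2-byte JR when it jumps
                  offset = target - (jr_addr + 2)
                  assert -128 <= offset <= 127, "target out of JR range"
                  return offset & 0xFF  # two's-complement encoding of the signed byte

              # backward jump: JR at 0x8010 to 0x8000 encodes as 0xEE (-18)
              print(hex(jr_displacement(0x8010, 0x8000)))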

            I did not have an assembler for my computer. I had a disassembler, though - manually typed in from a computer magazine hex dump and saved on an audio cassette. With the disassembler I could check whether I had translated everything correctly into hex, including the relative jumps.

            The planning required to write programs on sheets of paper was very helpful. I felt I got a lot dumber once I had a PC and actual programmer software (e.g. Borland C++). I found I was sitting in front of an empty code file without a plan more often than not, and wrote code moment to moment, immediately compiling and test running.

            The AI coding may actually not be so bad if it encourages people to start with high-level planning instead of jumping into the IDE right away.

            • lucianbr 2 hours ago
              Real programmers just use a magnetized needle to flip bits on the HDD platter.
      • ativzzz 5 hours ago
        > That they will know when to do abstractions

        The only way to learn when abstractions are needed is to write code, hit a dead end, then try and abstract it. Over and over. With time, you will be able to start seeing these before you write code.

        AI does not do abstractions well. From my experience, it completely fails to abstract anything unless you tell it to. Even when similar abstractions are already present. If you never learn when an abstraction is needed, how can you guide an AI to do the same well?

      • candiddevmike 6 hours ago
        > I'm an external examiner for CS students

        > Hell, I'd even like developers who will know when the code quality doesn't matter because shitty code will cost $2 a year but every hour they spend on it is $100-200.

        > Except most of the actual computer science parts are gone, replaced with even more OOP, design pattern bullshit.

        Maybe you should consider a different career; you sound pretty burnt out. These are terrible takes, especially for someone who is supposed to be fostering the next generation of developers.

        • Quothling 3 hours ago
          I don't foster the next generations. I hire them. External examiners are people in the industry who are used as examiners to try and match educations with the needs of the industry.
        • pitched 5 hours ago
          It can take some people a few years to get over OOP, in the same way that some kids still believe in Santa a bit longer. Keep at it though and you’ll make it there eventually too.
      • kaydub 4 hours ago
        Ah, see, you're outside of the US.

        In the US education has been bastardized into "job training"

        Good workers don't really need to think in this paradigm.

      • SoftTalker 5 hours ago
        What is an "external examiner?"
        • Quothling 3 hours ago
          External examiners are people in the industry who are used as examiners to try and match educations with the needs of the industry.
        • halfmatthalfcat 5 hours ago
          A proctor?
  • GolDDranks 4 hours ago
    I feel like I'm taking crazy pills. The article starts with:

    > you give it a simple task. You’re impressed. So you give it a large task. You’re even more impressed.

    That has _never_ been the story for me. I've tried, and I've gotten some good pointers and hints on where to go and what to try, a result of LLMs' extensive if shallow reading, but in the sense of concrete problem solving or code/script writing, I'm _always_ disappointed. I've never gotten a satisfactory code/script result from them without a tremendous amount of pushback: "do this part again with ...", do that, don't do that.

    Maybe I'm just a crank with too many preferences. But I hardly think so. The minimum requirement should be for the code to work. It often doesn't. Feedback helps, right. But if you've got a problem where a simple, contained feedback loop isn't easy to build, the only source of feedback is yourself. And that's when you are exposed to the stupidity of current AI models.

    • b33j0r 3 hours ago
      I usually do most of the engineering and it works great for writing the code. I’ll say:

      > There should be a TaskManager that stores Task objects in a sorted set, with the deadline as the sort key. There should be methods to add a task and pop the current top task. The TaskManager owns the memory when the Task is in the sorted set, and the caller to pop should own it after it is popped. To enforce this, the caller to pop must pass in an allocator and will receive a copy of the Task. The Task will be freed from the sorted set after the pop.

      > The payload of the Task should be an object carrying a pointer to a context and a pointer to a function that takes this context as an argument.

      > Update the tests and make sure they pass before completing. The test scenarios should relate to the use-case domain of this project, which is home automation (see the readme and nearby tests).
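
      For reference, the shape being asked for here is roughly this (a Python sketch of just the API; the allocator/ownership language in the prompt assumes a manual-memory language, so in this sketch a pop simply hands the Task to the caller):

          import heapq
          from dataclasses import dataclass, field
          from typing import Any, Callable

          @dataclass(order=True)
          class Task:
              deadline: float                                     # the sort key
              context: Any = field(compare=False)                 # "pointer to a context"
              func: Callable[[Any], None] = field(compare=False)  # "function that takes this context"

          class TaskManager:
              def __init__(self) -> None:
                  self._tasks: list[Task] = []  # heap standing in for the sorted set

              def add(self, task: Task) -> None:
                  heapq.heappush(self._tasks, task)

              def pop(self) -> Task:
                  return heapq.heappop(self._tasks)  # the caller owns the Task from here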

      • gedy 3 hours ago
        What you’re describing makes sense, but that type of prompting is not what people are hyping
        • ljm 3 hours ago
          The more accurate prompt would be “You are a mind reader. Create me a plan to create a task manager, define the requirements, deploy it, and tell me when it’s done.”

          And then you just rm -rf and repeat until something half works.

        • Leherenn 2 hours ago
          I haven't tried it, but someone at work suggested using voice input for this because it's so much easier to add details and constraints. I can certainly believe it, but I hate voice interfaces, especially if I'm in an open space setting.

          You don't even have to be as organised as in the example, LLMs are pretty good at making something out of ramblings.

      • apercu 3 hours ago
        This is similar to how I prompt, except I start with a text file and design the solution, then paste it into an LLM after I have read it a few times. Otherwise, if I type directly into the LLM and make a mistake, it tends to come back and haunt me later.
    • threethirtytwo 2 hours ago
      I think it’s usage patterns. It is you in a sense.

      You can’t deny the fact that someone like Ryan dhal creator of nodejs declared that he no longer writes code is objectively contrary to your own experience. Something is different.

      I think you and other deniers try one prompt and then they see the issues and stop.

      Programming with AI is like tutoring a child. You teach the child, tell it where it made mistakes and you keep iterating and monitoring the child until it makes what you want. The first output is almost always not what you want. It is the feedback loop between you and the AI that cohesively creates something better than each individual aspect of the human-AI partnership.

      • GorbachevyChase 2 hours ago
        My personal suspicion is that the detractors value process and implementation details much more highly than results. That would not surprise me if you come from a business that is paid for its labor inputs and is focused on keeping a large team billable for as long as possible. But I think hackers and garage coders see the value of “vibing” as they are more likely to be the type of people who just want results and view all effort as margin erosion rather than the goal unto itself.

        The only thing I would change about what you said is, I don’t see it as a child that needs tutoring. It feels like I’m outsourcing development to an offshore consultancy where we have no common understanding, except the literal meaning of words. I find that there are very, very many problems that are suited well enough to this arrangement.

      • CivBase 2 hours ago
        > Programming with AI is like tutoring a child. You teach the child, tell it where it made mistakes and you keep iterating and monitoring the child until it makes what you want.

        Who are you people who spend so much time writing code that this is a significant productivity boost?

        I'm imagining doing this with an actual child and how long it would take for me to get a real return on investment at my job. Nevermind that the limited amount of time I get to spend writing code is probably the highlight of my job and I'd be effectively replacing that with more code reviews.

        • dimitri-vs 1 hour ago
          A better way to put it is with this example: I put my symptoms into ChatGPT and it gives some generic info with a massive "not-medical-advice" boilerplate and refuses to give specific recommendations. My wife (an NP) puts in anonymous medical questions and gets highly specific, med-terminology-heavy guidance.

          That's all to say, the learning curve with LLMs is learning how to say things a specific way to reliably get an outcome.

        • threethirtytwo 1 hour ago
          it's not just writing code.

          And maybe child is too simplistic of an analogy. It's more like working with a savant.

          The type of thing you can tell AI to do is like this: You tell it to code a website... it does it, but you don't like the pattern.

          Say, "use functional programming", "use camel-case" don't use this pattern, don't use that. And then it does it. You can leave it in the agent file and those instructions become burned into it forever.

        • shimman 1 hour ago
          These people are just the same charlatans and scammers you saw in the web3 sphere. Invoking Ryan Dahl as some sort of authority figure and not a tragic figure that sold his soul to VC companies is even more pathetic.
          • threethirtytwo 1 hour ago
            Don't appreciate this comment. Calling me a charlatan is rude. He's not authority, but he has more credibility than you and most people on HN.

            There is obvious division of ideas here. But calling one side stupid or referring to them as charlatans is outright wrong and biased.

            • shimman 46 minutes ago
              No one called YOU a charlatan, get thicker skin because you are going to run into more and more people that absolutely hate these tools.

              There is a reason why they struggle selling them and executives are force feeding them to their workers.

              Charlatan is the perfect term for those that stand to make money selling half baked goods and forcing more mass misery upon society.

    • giancarlostoro 17 minutes ago
      The secret sauce for me is Beads. Once Beads is set up, you make the tasks and refine them, and by the end each task is a very detailed prompt. I have Claude ask me clarifying questions, do research for best practices, etc.

      Because of Beads I can have Claude do a code review for serious bugs and issues and sure enough it finds some interesting things I overlooked.

      I have also seen my peers in the reverse engineering field make breakthroughs emulating runtimes where none, or only limited ones, previously existed, all from the ground up mind you.

      I think the key is thinking of yourself as an architect / mentor for a capable and promising Junior developer.

    • jasondigitized 3 hours ago
      I feel like I am taking crazy pills. I am getting code that works from Opus 4.5. It seems like people are living in two separate worlds.
      • ruszki 2 hours ago
        Working code doesn’t mean the same thing to everyone. My coworker just started vibe coding. Her code works… on happy paths. It absolutely doesn’t work when any kind of error happens. It’s also absolutely impossible to refactor in any way. She thinks her code works.

        The same coworker was asked to update a service to Spring Boot 4. She made a blog post about it. She used an LLM for it. So far every point I read was a lie, and her workarounds make the tests, for example, unnecessarily hard to read.

        So yeah, “it works”, until it doesn’t, and when it hits you, you end up doing more work in total, because there are more obscure bugs, and fixing those is more difficult because of the terrible readability.

      • WarmWash 2 hours ago
        I can't help but think of my earliest days of coding, 20ish years ago, when I would post my code online looking for help on a small thing and be told that my code was garbage and didn't work at all, even when it actually did.

        There are many ways to skin a cat, and in programming the happens-in-a-digital-space aspect seemingly removes all boundaries, leading to fractal ways to "skin a cat".

        A lot of programmers are hardheaded and "know" the right way to do something. These are the same guys who criticized every other senior dev as being a bad/weak coder long before LLMs were around.

      • crystal_revenge 2 hours ago
        Parent's profile shows that they are an experienced software engineer in multiple areas of software development.

        Your own profile says you are a PM whose software skills amount to "Script kiddie at best but love hacking things together."

        It seems like the "separate worlds" you are describing are really the impressions of a seasoned engineer vs. an amateur reviewing the same code base. It shouldn't be even a little surprising that the code looks much better to you than it does to a more experienced developer.

        At least in my experience, learning to quickly read a code base is one of the later skills a software engineer develops. Generally only very experienced engineers can dive into an open source code base to answer questions about how the library works and is used (typically, most engineers need documentation to aid them in this process).

        I mean, I've dabbled in home plumbing quite a bit, but if AI instructed me to repair my pipes and I thought it "looked great!" but an experienced plumber's response was "ugh, this doesn't look good to me, lots of issues here" I wouldn't argue there are "two separate worlds".

        • ModernMech 33 minutes ago
          > It shouldn't be even a little surprising that your impression of the result is that the code is much better looking than the impression of a more experienced developer.

          This really is it: AI produces bad to mediocre code. To someone who produces terrible code mediocre is an upgrade, but to someone who produces good to excellent code, mediocre is a downgrade.

      • HarHarVeryFunny 1 hour ago
        That is such a vague claim that there is no contradiction.

        Getting code to do exactly what, based on using and prompting Opus in what way?

        Of course it works well for some things.

      • GoatInGrey 2 hours ago
        That's a significant rub with LLMs, particularly hosted ones: the variability. Add in quantization, speculative decoding, and dynamic adjustment of temperature, nucleus sampling, attention head count, & skipped layers at runtime, and you can get wildly different behaviors with even the same prompt and context sent to the same model endpoint a couple hours apart.
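
        To illustrate just two of those knobs, here's a toy sketch (invented logits) of how temperature and top-p reshape the distribution the endpoint actually samples from:

            import numpy as np

            def sample(logits, temperature=1.0, top_p=0.9, rng=np.random.default_rng()):
                probs = np.exp(np.array(logits) / temperature)  # low temperature sharpens, high flattens
                probs /= probs.sum()
                order = np.argsort(probs)[::-1]                 # tokens from most to least likely
                cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
                keep = order[:cutoff]                           # the "nucleus"
                return rng.choice(keep, p=probs[keep] / probs[keep].sum())

            # same logits, different knob settings -> different candidate sets
            print(sample([4.0, 3.0, 1.0, 0.5], temperature=0.7, top_p=0.8))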

        That's all before you even get to all of the other quirks with LLMs.

      • zeroCalories 3 hours ago
        It depends heavily on the scope and type of problem. If you're putting together a standard isolated TypeScript app from scratch it can do wonders, but many large systems are spread between multiple services, use abstractions unique to the project, and are generally dealing with far stricter requirements. I couldn't depend on Claude to do some of the stuff I'd really want, like refactor the shared code between six massive files without breaking tests. The space I can still have it work productively in is still fairly limited.
    • ActorNightly 2 hours ago
      It's really becoming a good litmus test of someone's coding ability whether they think LLMs can do well on complex tasks.

      For example, someone may ask an LLM to write a simple HTTP web server, and it can do that fine, and they consider that complex, when in reality it's really not.

      • threethirtytwo 2 hours ago
        It’s not. There are tons of great programmers, big names in the industry, who now exclusively vibe code. Many of these names are obviously intelligent and great programmers.

        This is an extremely false statement.

        • HarHarVeryFunny 1 hour ago
          People use "vibe coding" to mean different things - some mean the original Karpathy "look ma, no hands!", feel the vibez, thing, and some just (confusingly) use "vibe coding" to refer to any use of AI to write code, including treating it as a tool to write small well-defined parts that you have specified, as opposed to treating it as a magic genie.

          There also seem to be people hearing big names like Karpathy and Linus Torvalds say they are vibe coding on their hobby projects, meaning who knows what, and misunderstanding this as being an endorsement of "magic genie" creation of professional quality software.

          Results of course also vary according to how well what you are asking the AI to do matches what it was trained on. Despite sometimes feeling like it, it is not a magic genie - it is a predictor that is essentially trying to best match your input prompt (maybe a program specification) to pieces of what it was trained on. If there is no good match, then it'll have a go anyway, and this is where things tend to fall apart.

          • dudeinhawaii 1 hour ago
            Funny, the last interview I watched with Karpathy, he highlighted the way the AI/LLM was unable to think in a way that aligned with his codebase. He described vibe-coding a transition from Python to Rust but specifically called out that he hand-coded all of the Python code due to weaknesses in LLMs' ability to handle performant code. I'm pretty sure this was the last Dwarkesh interview with "LLMs as ghosts".
            • HarHarVeryFunny 1 hour ago
              Right, and he also very recently said that he felt essentially left behind by AI coding advances, thinking that his productivity could be 10x if he knew how to use it better.

              It seems clear that Karpathy himself is well aware of the difference between "vibe coding" as he defined it (which he explicitly said was for playing with on hobby projects), and more controlled productive use of AI for coding, which has either eluded him, or maybe his expectations are too high and (although it would be surprising) he has not realized the difference between the types of application where people are finding it useful, and use cases like his own that do not play to its strength.

          • threethirtytwo 1 hour ago
            Karpathy is biased. I wouldn't use his name, as he's behind the whole vibe coding movement.

            You have to pick people with nothing to gain. https://x.com/rough__sea/status/2013280952370573666

            • HarHarVeryFunny 58 minutes ago
              I don't think he meant to start a movement - it was more of a throw-away tweet that people took way too seriously, although maybe with his bully pulpit he should have realized that would happen.
    • jjice 3 hours ago
      I've found that the thing that made it really click for me was having reusable rules (each agent accepts these differently) that tell it the patterns and structure you want.

      I have ones that describe what kinds of functions get unit vs integration tests, how to structure them, and the general kinds of test cases to check for (they love writing way too many tests IME). It has reduced the back and forth I have with the LLM telling it to correct something.

      Usually the first time it does something I don't like, I have it correct it. Once it's in a satisfactory state, I tell it to write a Cursor rule describing the situation BRIEFLY (it gets way too verbose by default) and how to structure things.

      That has made writing LLM code so much more enjoyable for me.

    • causalscience 9 minutes ago
      You're not crazy, I'm also always disappointed.

      My theory is that the people who are impressed are trying to build CRUD apps or something like that.

    • dev_l1x_be 4 hours ago
      Well one way of solving this is to keep giving it simple tasks.
      • GoatInGrey 2 hours ago
        The other side of this coin are the non-developer stakeholders who Dunning-Kruger themselves into firm conclusions on technical subjects with LLMs. "Well I can code this up in an hour, two max. Why is it taking you ten hours?". I've (anecdotally) even had project sponsors approach me with an LLM's judgement on their working relationship with me as if it were gospel like "It said that we aren't on the same page. We need to get aligned." It gets weird.

        These cases are common enough that it's more systemic than isolated.

      • hmaxwell 3 hours ago
        Exactly 100%

        I read these comments and articles and feel like I am completely disconnected from most people here. Why not use GenAI the way it actually works best: like autocomplete on steroids. You stay the architect, and you have it write code function by function. Don't show up in Claude Code or Codex asking it to "please write me GTA 6 with no mistakes or you go to jail, please."

        It feels like a lot of people are using GenAI wrong.

        • latexr 2 hours ago
          > It feels like a lot of people are using GenAI wrong.

          That argument doesn’t fly when the sellers of the technology literally sing at you “there’s no wrong way to prompt”.

          https://youtu.be/9bBfYX8X5aU?t=48

    • nozzlegear 2 hours ago
      You're not taking crazy pills, this is my exact experience too. I've been using my wife's eCommerce shop (a headless Medusa instance, which has pretty good docs and even their own documentation LLM) as a 100% vibe-coded project using Claude Code, and it has been one comedy of errors after another. I can't tell you how many times I've had it go through the loop of Cart + Payment Collection link is broken -> Redeploy -> Webhook is broken (can't find payment collection) -> Redeploy -> Cart + Payment Collection link is broken -> Repeat. And it never seems to remember the reasons it had done something previously – despite it being plastered 8000 times across the CLAUDE.md file – so it bumbles into the same fuckups over and over again.

      A complete exercise in frustration that has turned me off of all agentic code bullshit. The only reason I still have Claude Code installed is because I like the `/multi-commit` skill I made.

    • SCdF 3 hours ago
      I am getting workable code with Claude on a 10kloc Typescript project. I ask it to make plans then execute them step by step. I have yet to try something larger, or something more obscure.
      • brabel 3 hours ago
        Most agents do that by default now.
        • GoatInGrey 2 hours ago
          I feel like there is a nuance here. I use GitHub Copilot and Claude Code, and unless I tell it to not do anything, or explicitly enable a plan mode, the LLM will usually jump straight to file edits. This happens even if I prompt it with something as simple as "Remind me how loop variable scoping works in this language?".
      • jasondigitized 3 hours ago
        This. I feel like folks are living in two separate worlds. You need to narrow the aperture and take the LLM through discrete steps. Are people just saying it doesn't work because they are pointing it at 1m loc monoliths and trying to oneshot a giant epic?
        • nh23423fefe 2 hours ago
          AI was useless for me on a refactor of a 20k loc repo, even after I gave examples of the migrations I wanted in commits.

          It would correctly modify a single method. I would ask it to repeat for the next one and it would fail.

          The code that our contractors are submitting is trash and very high loc. When you inspect it you can see that unit tests are testing nothing of value.

             when(mock.method(foo)).thenReturn(bar)
             assert(bar == bar)
          
          stuff like that

          it's all fake coverage, for fake tests, for fake OKRs

          what are people actually getting done? I've sat next to our top evangelist for 30 minutes pair programming and he just fought the tool, saying something was wrong with the db, while showing off some UI I don't care about.

          like that seems to be the real issue to me. i never bother wasting time with UI and just write a tool to get something done. but people seem impressed that AI did some shitty data binding to a data model that can't do anything, but it's pretty.

          it feels weird being an avowed singularitarian but adamant that these tools suck now.

        • echelon 2 hours ago
          I'm using Claude in a giant Rust monorepo. It's really good at implementing HTTP handlers and threaded workers when I point it at prior examples.
    • __grob 2 hours ago
      It still amazes me that so many people can see LLMs writing code as anything less than a miracle in computing...
    • echohack5 3 hours ago
      I have found AI great in a lot of scenarios, but if I have a specific workflow, then the answer is specific and the AI will get it wrong 100% of the time. You have a great point here.

      A trivial example is your happy path git workflow. I want:

      - pull main

      - make new branch in user/feature format

      - Commit, always sign with my ssh key

      - push

      - open pr

      but it always will

      - not sign commits

      - not pull main

      - not know to rebase if changes are in flight

      - make a million unnecessary commits

      - not squash when making a million unnecessary commits

      - have no guardrails when pushing to main (oops!)

      - add too many comments

      - commit message too long

      - spam the pr comment with hallucinated test plans

      - incorrectly attribute itself as coauthor in some guerrilla marketing effort (fixable with config, but whyyyyyy -- also this isn't just annoying, it breaks compliance in a lot of places and fundamentally misunderstands the whole point of authorship, which is copyright --- and AIs can't own copyright)

      - not make DCO compliant commits ...

      Commit spam is particularly bad for bisect bug hunting and ref performance issues at scale. Sure I can enforce Squash and Merge on my repo but why am I relying on that if the AI is so smart?

      All of these things are fixed with aliases / magit / cli usage, using the thing the way we have always done it.

      • ikrenji 3 hours ago
        Is commit history that useful? I never wanted to look up anything in it that couldn't be solved with git log | grep xyz...
      • furyofantares 3 hours ago
        > why am I relying on that if the AI is so smart?

        Because it's not? I use these things very extensively to great effect, and the idea that you'd think of it as "smart" is alien to me, and seems like it would hurt your ability to get much out of them.

        Like, they're superhuman at breadth and speed and some other properties, but they don't make good decisions.

    • GolDDranks 4 hours ago
      Just a supplementary fact: I'm in the beneficial position, against the AI, that in cases where it's hard to provide that automatic feedback loop, I can run and test the code at my discretion, whereas the AI model can't. Yet.

      Most of my criticism is not after running the code, but after _reading_ the code. It wrote code. I read it. And I am not happy with it. No need to even run it; it's shit at a glance.

      • elevation 3 hours ago
        Over the weekend I generated a for-home-use-only PHP app with a popular CLI LLM product. The app met all my requirements, but the generated code was mixed. It correctly used a prepared query to avoid SQL injection. But then, instead of an obvious:

            "SELECT * FROM table WHERE id=1;" 
        
        it gave me:

            $result = $db->query("SELECT * FROM table;");
            foreach ($result as $row)
                if ($row["id"] == 1)
                    return $row;
        
        
        With additional prompting I arrived at code I was comfortable deploying, but this kind of flaw cuts into the total time-savings.
      • ReverseCold 3 hours ago
        > I can run and test the code at my discretion, whereas the AI model can't.

        It sounds like you know what the problem with your AI workflow is? Have you tried using an agent? (sorry somewhat snarky but… come on)

        • GolDDranks 3 hours ago
          Yeah, you're right, and the snark might be warranted. I should consider it the same as my stupid (but cute) robot vacuum cleaner that goes in random directions but gets the job done.

          The thing that differentiates LLMs from my stupid but cute vacuum cleaner is that the AI model (at least OpenAI's) is cocksure and wrong, which is infinitely more infuriating than being a bit clueless and wrong.

          • storystarling 3 hours ago
            I've been trying to solve this by wrapping the generation in a LangGraph loop. The hope was that an agent could catch the errors, but it seems to just compound the problem. You end up paying for ten API calls where the model confidently doubles down on the mistake, which gets expensive very quickly for no real gain.
          • yaur 3 hours ago
            Give Claude Code a go. It still makes a lot of stupid mistakes, but it's a vastly different experience from pasting back and forth with ChatGPT.
            • tayo42 3 hours ago
              There's no free trial or anything?
              • yaur 2 hours ago
                You can play with the model for free in chat... but if $20 for a coding agent isn't effectively free for your use case, it might not be the right tool for you.

                ETA: I've probably gotten $10k worth of junior dev time out of it this month.

                • tayo42 2 hours ago
                  The chat is limited and doesn't let you use the latest model. If that's representative of the answers I would get by paying, it doesn't seem worth it.

                  I'm not crazy about signing up for a subscription service; it depends on you remembering to cancel and not having a headache when you do.

      • __MatrixMan__ 3 hours ago
        You might get better code out of it if you give the AI some more restrictive handcuffs. Spin up a tester instance and have it tell the developer instance to try again until it's happy with the quality.
    • t55 3 hours ago
      [flagged]
      • GolDDranks 3 hours ago
        I don't love these kinds of throwaway comments without any substance, but...

        "It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It"

        ...might be my issue indeed. Trying to balance it by not being too stubborn though. I'm not doing AI just to be able to dump on them, you know.

        • ahelwer 3 hours ago
          An alternative reading of these comments is "I went to the casino and had a great time! Don't understand how you could have lost money."
        • antonvs 3 hours ago
          Skill comes from experience. It takes a good amount of working with these models to learn how to use them effectively, when to use them, and what to use them for. Otherwise, you end up hitting their limitations over and over and they just seem useless.

          They're certainly not perfect, but many of the issues that people post about as though they're show-stoppers are easily resolved with the right tools and prompting.

          • BAM-DevCrew 3 hours ago
            20% tools, 40% prompt, 40% claude.md (agents.md) = 98% success most of the time. A few errors to correct is not the end of the world.
            • antonvs 2 hours ago
              Right. But "prompt" also covers a lot of ground, e.g. planning, tracking tasks, etc. The codex-style frameworks do a good amount of that for you, but it can still make a big difference to structure what you're asking the model to do and let it execute step by step.

              A lot of the failures people talk about seem to involve expecting the models to one-shot fairly complex requirements.

  • rich_sasha 6 hours ago
    I came to "vibe coding" with an open mind, but I'm slowly edging in the same direction.

    It is hands down good for code which is laborious or tedious to write, but once done, obviously correct or incorrect (with low effort inspection). Tests help but only if the code comes out nicely structured.

    I made plenty of tools like this: a replacement REPL for MS-SQL, a caching tool in Python, a matplotlib helper. Things that I know 90% how to write anyway but don't have the time for, and that, once in front of me, are obviously correct or incorrect. NP code, I suppose: hard to produce, easy to verify.

    But business critical stuff is rarely like this, for me anyway. It is complex, has to deal with various subtle edge cases, be written defensively (so it fails predictably and gracefully), well structured etc. and try as I might, I can't get Claude to write stuff that's up to scratch in this department.

    I'll give it instructions on how to write some specific function, it will write this code but not use it, and use something else instead. It will pepper the code with rookie mistakes like writing the same logic N times in different places instead of factoring it out. It will miss key parts of the spec and insist it did it, or tell me "Yea you are right! Let me rewrite it" and not actually fix the issue.

    I also have a sense that it got a lot dumber over time. My expectations may have changed too, of course, but still. I suspect even within a model there is some variability in how much compute is used (e.g. how deep the beam search is), and supply/demand means this knob is continuously tuned down.

    I still try to use Claude for tasks like this, but increasingly find my hit rate so low that the whole "don't write any code yet, let's build a spec" exercise is a waste of time.

    I still find Claude good as a rubber duck or to discuss design or errors - a better Stack Exchange.

    But you can't split your software spec into a set of SE questions then paste the code from top answers.

    • tomtomtom777 4 hours ago
      I agree, and would like to add:

      > It is hands down good for code which is laborious or tedious to write, but once done, obviously correct or incorrect (with low effort inspection).

      The problem here is that it fills in gaps that shouldn't be there in the first place. Good code isn't laborious. Good code is small. We learn to avoid unnecessary abstractions. We learn to minimize "plumbing" such that the resulting code contains little more than clear and readable instructions of what you intend for the computer to do.

      The perfect code is just as clear as the design document in describing the intentions, only using a computer language.

      If someone is gaining super speeds by providing AI clear design documents compared to coding themselves, maybe they aren't coding the way they should.

      • Verdex 3 hours ago
        The quote that I heard (I think on HN) was, "If we had AIs to write XML for us then we never would have invented json."

        My biggest LLM success resulted in something operationally correct but was something that I would never want to try to modify. The LLM also had an increasingly difficult time adding features.

        Meanwhile my biggest 'manual' successes have resulted in something that was operationally correct, quick to modify, and refuses to compile if you mess anything up.

        • abrahms 1 hour ago
          This doesn't sound correct. We have computers write binary for us, yet we still design protocols that are optimized binary representations... not because binary is a pain to write, but because there's some second-order effect that we care about (storage / transfer costs, etc.).
        • zephen 3 hours ago
          And a recent HN article had a bunch of comments lamenting that nobody ever uses XML any more, and talking about how much better it was than things like JSON.

          The only thing I think I learned from some of those exchanges was that xslt adherents are approximately as vocal as lisp adherents.

          • ern_ave 39 minutes ago
            > a recent HN article had a bunch of comments lamenting that nobody ever uses XML any more

            I still use it from time to time for config files that a developer has to write. I find it easier to read than JSON, and it supports comments. Also, the distinction between attributes and children is often really nice to have. You can shoehorn that into JSON of course, but native XML does it better.

            Obviously, I would never use it for data interchange (e.g. SOAP) anymore.

      • rich_sasha 3 hours ago
        Dunno. GUI / TUI code? "Here's a function that serialises object X to CSV; make a (de)serialiser to SQLite with tests." "And now to MS-SQL, pretty please."

        I don't know how much scope there realistically is for writing these kinds of code nicely.

    • nonethewiser 4 hours ago
      The hardest part of coding has never been coding. It's been translating new business requirements into a specific implementation plan that works. Understanding what needs to be done, how things are currently working, and how to go from A to B.

      You can't dispense with yourself in those scenarios. You have to read, think, investigate, break things down into smaller problems. But I employ LLM's to help with that all the time.

      Granted, that's not vibe coding at all. So I guess we are pretty much in agreement up to this point. Except I still think LLMs speed up this process significantly, and the models and tools are only going to get better.

      Also, there are a lot of developers that are just handed the implementation plan.

  • simonw 6 hours ago
    > Not only does an agent not have the ability to evolve a specification over a multi-week period as it builds out its lower components, it also makes decisions upfront that it later doesn’t deviate from.

    That's your job.

    The great thing about coding agents is that you can tell them "change of design: all API interactions need to go through a new single class that does authentication and retries and rate-limit throttling" and... they'll track down dozens or even hundreds of places that need updating and fix them all.

    (And the automated test suite will help them confirm that the refactoring worked properly, because naturally you had them construct an automated test suite when they built those original features, right?)

    Going back to typing all of the code yourself (my interpretation of "writing by hand") because you don't have the agent-managerial skills to tell the coding agents how to clean up the mess they made feels short-sighted to me.

    • disgruntledphd2 6 hours ago
      > (And the automated test suite will help them confirm that the refactoring worked properly, because naturally you had them construct an automated test suite when they built those original features, right?)

      I dunno, maybe I have high standards, but I generally find that the test suites generated by LLMs are both over- and under-determined. Over-determined in the sense that some of the tests are focused on implementation details, and under-determined in the sense that they don't test the conceptual things that a human might.
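
      To make that concrete, a toy sketch (invented Cart class) of both failure modes:

          class Cart:
              def __init__(self):
                  self._items = []  # private detail

              def add(self, price: float) -> None:
                  self._items.append(price)

              def total(self) -> float:
                  return round(sum(self._items), 2)

          # Over-determined: pins the private list representation, so a refactor
          # to, say, a dict of {sku: price} breaks the test without breaking behavior.
          def test_add_appends_to_items():
              cart = Cart()
              cart.add(9.99)
              assert cart._items == [9.99]

          # The conceptual test a human would write, which often goes missing:
          # behavior at the public boundary, including the floating-point edge case.
          def test_total_rounds_correctly():
              cart = Cart()
              cart.add(0.1)
              cart.add(0.2)
              assert cart.total() == 0.3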

      That being said, I've come across loads of human written tests that are very similar, so I can see where the agents are coming from.

      You often mention that this is why you are getting good results from LLMs so it would be great if you could expand on how you do this at some point in the future.

      • simonw 6 hours ago
        I work in Python which helps a lot because there are a TON of good examples of pytest tests floating around in the training data, including things like usage of fixture libraries for mocking external HTTP APIs and snapshot testing and other neat patterns.

        Or I can say "use pytest-httpx to mock the endpoints" and Claude knows what I mean.

        Keeping an eye on the tests is important. The most common anti-pattern I see is large amounts of duplicated test setup code - which isn't a huge deal, I'm much more tolerant of duplicated logic in tests than I am in implementation, but it's still worth pushing back on.

        "Refactor those tests to use pytest.mark.parametrize" and "extract the common setup into a pytest fixture" work really well there.

        Generally though the best way to get good tests out of a coding agent is to make sure it's working in a project with an existing test suite that uses good patterns. Coding agents pick the existing patterns up without needing any extra prompting at all.

        I find that once a project has clean basic tests the new tests added by the agents tend to match them in quality. It's similar to how working on large projects with a team of other developers work - keeping the code clean means when people look for examples of how to write a test they'll be pointed in the right direction.

        One last tip I use a lot is this:

          Clone datasette/datasette-enrichments
          from GitHub to /tmp and imitate the
          testing patterns it uses
        
        I do this all the time with different existing projects I've written - the quickest way to show an agent how you like something to be done is to have it look at an example.
        • disgruntledphd2 5 hours ago
          > Generally though the best way to get good tests out of a coding agent is to make sure it's working in a project with an existing test suite that uses good patterns. Coding agents pick the existing patterns up without needing any extra prompting at all.

          Yeah, this is where I too have seen better results. The worst ones have been in places where it was greenfield and I didn't have an amazing idea of how to write tests (a data person working on a Django app).

          Thanks for the information, that's super helpful!

        • thunspa 4 hours ago
          I work in Python as well and find Claude quite poor at writing proper tests; I might be using it wrong. Just last week, I asked Opus to create a small integration test (with pre-existing examples) and it tried to create a 200-line file with 20 tests I didn't ask for.

          I am not sure why, but it kept trying to do that, although I made several attempts.

          Ended up writing it on my own, very odd. This was in Cursor, however.

      • jihadjihad 6 hours ago
        In my experience asking the model to construct an automated test suite, with no additional context, is asking for a bad time. You'll see tests for a custom exception class that you (or the LLM) wrote that check that the message argument can be overwritten by the caller, or that a class responds to a certain method, or some other pointless and/or tautological test.

        If you start with an example file of tests that follow a pattern you like, along with the code the tests are for, it's pretty good at following along. Even adding a sentence to the prompt about avoiding tautological tests and focusing on the seams of functions/objects/whatever (integration tests) can get you pretty far to a solid test suite.

        • kaydub 4 hours ago
          1 agent writes the tests, threads the needle.

          Another agent reviews the tests, finds duplicate code, finds poor testing patterns, looks for tests that are only following the "happy path", ensures logic is actually tested and that you're not wasting time testing things like getters and setters. That agent writes up a report.

          Give that report back to the agent that wrote the test or spin up a new agent and feed the report to it.

          Don't do all of this blindly, actually read the report to make sure the llm is on the right path. Repeat that one or two times.

        • matltc 2 hours ago
          Yeah, I've seen this too. It bangs out a five-hundred-line unit test file, but half the tests are as you describe.

          Just writing one line in CLAUDE.md or similar saying "don't test library code; assume it is covered" works.

          Half the battle with this stuff is realizing that these agents are VERY literal. The other half is paring down your spec/token usage without sacrificing clarity.

      • kaydub 4 hours ago
        Once the agent writes your tests, have another agent review them and ask that agent to look for pointless tests, to make sure testing is around more than just the "happy path", etc. etc.

        Just like anything else in software, you have to iterate. The first pass is just to thread the needle.

      • wvenable 3 hours ago
        > I dunno, maybe I have high standards

        I don't get it. I have insanely high standards so I don't let the LLM get away with not meeting my standards. Simple.

      • archagon 1 hour ago
        I get the sense that many programmers resent writing tests and see them as a checkbox item or even boilerplate, not a core part of their codebase. Writing great tests takes a lot of thought about the myriad of bizarre and interesting ways your code will run. I can’t imagine that prompting an LLM to “write tests for this code” will result in anything but the most trivial of smoke test suites.

        Incidentally, I wonder if anyone has used LLMs to generate complex test scenarios described in prose, e.g. “write a test where thread 1 calls foo, then before hitting block X, thread 2 calls bar, then foo returns, then bar returns” or "write a test where the first network call Framework.foo makes returns response X, but the second call returns error Y, and ensure the daemon runs the appropriate mitigation code and clears/updates database state." How would they perform in this scenario? Would they add the appropriate shims, semaphores, test injection points, etc.?
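
        For the first scenario, a hand-written sketch (hypothetical foo/bar, with events as the injected checkpoints) of what I'd expect the target to look like:

            import threading

            ev_x = threading.Event()    # "foo reached block X"
            ev_bar = threading.Event()  # "bar has returned"
            log = []

            def foo():  # hypothetical code under test, checkpoint injected at block X
                log.append("foo:start")
                ev_x.set()              # release thread 2
                ev_bar.wait()           # pause until bar() has run
                log.append("foo:end")

            def bar():
                log.append("bar")

            def test_interleaving():
                t1 = threading.Thread(target=foo)
                t1.start()
                ev_x.wait()             # wait for foo to hit block X
                bar()
                ev_bar.set()
                t1.join()
                assert log == ["foo:start", "bar", "foo:end"]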

      • touristtam 5 hours ago
        Embrace TDD? Write those tests and tell the agent to write the subject under test?
        • 0xffff2 2 hours ago
          Different strokes for different folks and all, but that sounds like automating all of the fun parts and doing all of the drudgery by hand. If the LLM is going to write anything, I'd much rather make it write the tests and do the implementation myself.
          • yakshaving_jgt 9 minutes ago
            This is a serious problem with professional software development — programmers see testing as a chore, and self-indulge in the implementation.
    • asadjb 4 hours ago
      Unfortunately I have started to feel that using AI to code, even with a well-designed spec, ends up with code that, in the author's words, looks like

      > [Agents write] units of changes that look good in isolation.

      I have only been using agents for coding end-to-end for a few months now, but I think I've started to realise why the output doesn't feel that great to me.

      Like you said; "it's my job" to create a well designed code base.

      Without writing the code myself however, without feeling the rough edges of the abstractions I've written, without getting a sense of how things should change to make the code better architected, I just don't know how to make it better.

      I've always worked in smaller increments, creating the small piece I know I need and then building on top of that. That process highlights the rough edges, the inconsistent abstractions, and that leads to a better codebase.

      AI (it seems) decides on a direction and then writes 100s of LOC at once. It doesn't need to build abstractions because it can write the same piece of code a thousand times without caring.

      I write one function at a time, and as soon I try to use it in a different context I realise a better abstraction. The AI just writes another function with 90% similar code.

      • WorldMaker 4 hours ago
        The old classic mantra is "work smarter, not harder". LLMs are perfect for "work harder". They can produce bulk numbers of lines. They can help you brute force a problem space with more lines of code.

        We expect the spec writing and prompt management to cover the "work smarter" bases, but part of the work smarter "loop" is hitting those points where "work harder" is about to happen, where you know you could solve a problem with 100s or 1000s of lines of code, pausing for a bit, and finding the smarter path/the shortcut/the better abstraction.

        I've yet to see an "agentic loop" that works half as well as my well trained "work smarter loop" and my very human reaction to those points in time of "yeah, I simply don't want to work harder here and I don't think I need hundreds more lines of code to handle this thing, there has to be something smarter I can do".

        In my opinion, the "best" PRs delete as much or more code than they add. In the cleanest LLM-created PRs, I've never seen an LLM propose a true removal that wasn't just a "this code wasn't working according to the tests so I deleted the tests and the code" level mistake.

        • AstroBen 4 hours ago
          There used to be a saying that "the best programmers are lazy" - I think the opposite is now true.
      • acessoproibido 4 hours ago
        I don't see why you can't use your approach of writing one function at a time, making it work in the context and then moving on with AI. Sure you can't tell it to do all that in one step but personally I really like not dealing with the boilerplate stuff and worrying more about the context and how to use my existing functions in different places
    • pgwhalen 6 hours ago
      > Going back to typing all of the code yourself (my interpretation of "writing by hand") because you don't have the agent-managerial skills to tell the coding agents how to clean up the mess they made feels short-sighted to me.

      I increasingly feel a sort of "guilt" when going back and forth between agent-coding and writing it myself. When the agent didn't structure the code the way I wanted, or it just needs overall cleanup, my frustration will get the best of me and I will spend too much time writing code manually or refactoring using traditional tools (IntelliJ). It's clear to me that with current tooling some of this type of work is still necessary, but I'm trying to check myself about whether a certain task really requires my manual intervention, or whether the agent could manage it faster.

      Knowing how to manage this back and forth reinforces a view I've seen you espouse: we have to practice and really understand agentic coding tools to get good at working with them, and it's a complete error to just complain and wait until they get "good enough" - they're already really good right now if you know how to manage them.

    • skerit 6 hours ago
      The article said:

      > So I’m back to writing by hand for most things. Amazingly, I’m faster, more accurate, more creative, more productive, and more efficient than AI, when you price everything in, and not just code tokens per hour

      At least he said "most things". I also did "most things" by hand, until Opus 4.5 came out. Now it's doing things in hours I would have worked an entire week on. But it's not a prompt-and-forget kind of thing, it needs hand holding.

      Also, I have no idea _what_ agent he was using. OpenAI, Gemini, Claude, something local? And with a subscription, or paying by the token?

      Because the way I'm using it, this only pays off because of the $200 Claude Max subscription. If I had to pay per token (which, once again, is hugely marked up), I would have gone bankrupt.

      • kaydub 4 hours ago
        The article and video just feel like another dev poo-pooing LLMs.

        "vibe coding" didn't really become real until 2025, so how were they vibe coding for 2 years? 2 years ago I couldn't count on an llm to output JSON consistently.

        Overall the article/video are SUPER ambiguous and frankly worthless.

        • yojat661 3 hours ago
          Cursor and GPT-4 have been a thing since 2023. So, no, vibe coding didn't become real just last year.
        • 9rx 3 hours ago
          I successfully vibe coded an app in 2023, soon after VS Code Copilot added the chat feature, although we obviously didn't call it that back then.

          I remember being amazed and at the time thinking the game had changed. But I've never been able to replicate it since. Even the latest and greatest models seem to always go off and do something stupid that it can't figure out how to recover from without some serious handholding and critique.

          LLMs are basically slot machines, though, so I suppose there has always been a chance of hitting the jackpot.

    • lunar_mycroft 50 minutes ago
      > That's your job.

      No, it isn't. To quote your own blog, his job is to "deliver code [he's] proven to work", not to manage AI agents. The author has determined that managing AI agents is not an effective way to deliver code in the long term.

      > you don't have the agent-managerial skills to tell the coding agents how to clean up the mess they made

      The author has years of experience with AI-assisted coding. Is there any way to check whether someone is actually skilled with these tools, other than whether they report (or studies measure) that they do better with them than without?

    • candiddevmike 6 hours ago
      > Going back to typing all of the code yourself (my interpretation of "writing by hand") because you don't have the agent-managerial skills to tell the coding agents how to clean up the mess they made feels short-sighted to me.

      Or those skills are a temporary side effect of the current SOTA and will be useless in the future, so honing them is pointless right now.

      Agents shouldn't make messes, at least if they did what it says on the tin, and if folks are wasting considerable time cleaning up after them, they should've just written the code themselves.

    • ap99 6 hours ago
      > That's your job.

      Exactly.

      AI assisted development isn't all or nothing.

      We as a group and as individuals need to figure out the right blend of AI and human.

      • thesz 5 hours ago

          > AI assisted development isn't all or nothing.
          > We as a group and as individuals need to figure out the right blend of AI and human.
        
        This is what makes current LLM debate very much like the strong typing debate about 15-20 years ago.

        "We as a group need to figure out the right blend of strong static and weak dynamic typing."

        One can look around and see where that old discussion brought us. In my opinion, nowhere; things are the same as they were.

        So, where will LLM-assisted coding bring us? If it rhymes with the static-typing debate, I see no outcome other than "nowhere."

      • freedomben 6 hours ago
        Seriously. I've known for a very long time that our community has a serious problem with binary thinking, but AI has done more to reinforce that than anything else in recent memory. Nearly every discussion I get into about AI is dead out of the gate because at least one person in the conversation has a binary view that it's either handwritten or vibe coded. They have an insanely difficult time imagining anything in the middle.

        Vibe coding is the extreme end of using AI, while handwriting is the extreme end of not using AI. The optimal spot is somewhere in the middle. Where exactly that spot is, I think, is still up for debate. But the debate is not advanced in any way by latching on to the extremes and assuming they are the only options.

        • kaydub 4 hours ago
          The "vibe coding" term is causing a lot of brain rot.

          When I see people downplaying LLMs, or describing their poor experiences, it feels like they're trying to "vibe code" but expecting the LLM to automatically do EVERYTHING. They take it as a failure that they have to tell the LLM explicitly to do something a couple of times, or as a problem that the LLM didn't "one-shot" something.

          • bandrami 3 hours ago
            I'd like it to take less time to correct than it takes me to type out the code I want, and as of yet I haven't had that experience. Now, I don't do Python or JS, which I understand the LLMs are better at, but there's a whole lot of programming that isn't in Python or JS...
            • kaydub 3 hours ago
              I've had success across quite a few languages, more than just Python and JS. I find it insanely hard to believe you can write code faster than the LLM, even if the LLM has to iterate a couple of times.

              But I'm thankful for you devs that are giving me job security.

        • anonymars 5 hours ago
          I think you will find this is not specific to this community, nor to AI, but to any topic involving nuance and trade-offs with no single right answer

          For example, most political flamefests

      • kaydub 4 hours ago
        I'm only writing 5-10% of my own code at this point. The AI tools are good, it just seems like people that don't like them expect them to be 100% automatic with no hand holding.

        Like people in here complaining about how poor the tests are... but did they start another agent to review the tests? Did they take that and iterate on the tests with multiple agents?

        I can attest that the first pass of testing can often be shit. That's why you iterate.

        • Ososjjss 3 hours ago
          > I can attest that the first pass of testing can often be shit. That's why you iterate.

          So far, by the time I’m done iterating, I could have just written it myself. Typing takes like no time at all in aggregate. Especially with AI assisted autocomplete. I spend far more time reading and thinking (which I have to do to write a good spec for the AI anyways).

          • kaydub 3 hours ago
            Nope, you couldn't have written it yourself in the same time. That's just a false assumption a lot of you like to make.
    • dionian 4 hours ago
      I agree. As a pretty experienced coder, I wonder if the newer generation is just rolling with the first shot. I find myself having the AI rewrite things a slightly different way 2-3x per feature, or maybe even 10x, because I know quality when I see it, having done so much by hand and so much reading.
  • kaydub 4 hours ago
    How were you "vibe coding" 2 years ago?

    There's been such a massive leap in capabilities since Claude Code came out in 2025.

    2 years ago I MAYBE used an LLM to take unstructured data and give me a JSON object of a specific structure. Only about a year ago did I start using LLMs for ANY type of coding, and I would generally use snippets, not whole codebases. It wasn't until September that I started really leveraging the LLM for coding.

    • JimDabell 4 hours ago
      Vibe coding was coined less than a year ago:

      https://x.com/karpathy/status/1886192184808149383

      • furyofantares 1 hour ago
        I was vibe coding in November 2024, before the term was coined. I think that is about as early as anyone was doing it, so 1.25 years ago. Cursor added its "agentic" mode around then, I think, but before that there was just "accept all" without looking at changes repeatedly.

        I shipped a small game that way (https://love-15.com/) -- one that I've wished to make for a long time but that wouldn't have been worth building otherwise. It's tiny, really, but very niche -- despite being tiny, I hit brick walls multiple times vibing it, and had to take a few brief breaks from vibing to get it unstuck.

        Claude Code was a step change after that, along with model upgrades, about 9 months ago. That size project has been doable as a vibe coded project since then without hitting brick walls.

        All this to say I really doubt most claims about having been vibe coding for more than 9-15 months.

      • kridsdale1 4 hours ago
        To describe a thing people had been doing since LLMs became available.
        • JimDabell 4 hours ago
          No. That’s why he called it “a new kind of coding”.
          • honeycrispy 2 hours ago
            "New" doesn't mean it was invented that morning. Things that are a few years old can still be considered "new".
        • fragmede 3 hours ago
          When LLMs first came out, they weren't very good at it, which makes all the difference. Sometimes the thing that's really good at something gets a different name. Chef vs cook, driver vs chauffeur, painter vs artist, programmer vs software developer, etc.
      • wavemode 3 hours ago
        Yeah, I laughed when I saw the headline.

        Now I expect to start seeing job postings asking for "3 years of experience vibe coding"

    • kridsdale1 4 hours ago
      I started doing it as soon as ChatGPT 3.5 was out. “Given this file tree and this method signature, implement the method.” The context was only 8k, so you had to go function by function, about two editor screens' worth at a time.
      • furyofantares 1 hour ago
        Using an LLM to code isn't the same as vibe coding. Vibe coding, as originally coined, is not caring at all about the code or looking at the code. It was coined specifically to differentiate it from the type of AI-assisted coding you're talking about.

        It's used more broadly now, but still to refer to the opposite end of the spectrum of AI-assisted coding to what you described.

      • kaydub 3 hours ago
        Yeah, I've been working with LLMs since OpenAI released that first model. What I'm doing today is VASTLY different from anything we thought possible back then, so I wouldn't call that "vibe coding".
    • Verdex 3 hours ago
      Similar place. I kept trying to get LLMs to do anything interesting, and the first time they could was Sonnet 4.5.

      The best case is still operationally correct but nightmare fuel on the inside. So maybe good for one-off tools where you control the inputs and can vibe-check the outputs without disaster if you forget to carry the one.

    • koakuma-chan 4 hours ago
      GitHub Copilot came out with AI autocomplete 2-3 years ago I believe.
      • furyofantares 1 hour ago
        Using autocomplete is very much not "vibe coding".
    • _rwo 1 hour ago
      Same; I think Codex with GPT-5 changed things for me; then Opus 4.5 turned out to be useful as well (yet quite pricey)
    • pc86 4 hours ago
      Typical blogspam clickbait of "I knew what LLMs were 2 years ago, but maybe didn't know the name for them, so we'll call that vibecoding."
  • ncruces 6 hours ago
    > The AI had simply told me a good story. Like vibewriting a novel, the agent showed me a good couple paragraphs that sure enough made sense and were structurally and syntactically correct. Hell, it even picked up on the idiosyncrasies of the various characters. But for whatever reason, when you read the whole chapter, it’s a mess. It makes no sense in the overall context of the book and the preceding and proceeding chapters.

    This is the bit I think enthusiasts need to argue doesn't apply.

    Have you ever read a 200 page vibewritten novel and found it satisfying?

    So why do you think a 10 kLoC vibecoded codebase will be any good engineering-wise?

    • fsloth 6 hours ago
      "So why do you think a 10 kLoC vibecoded codebase will be any good engineering-wise?"

      I've been coding a side-project for a year with full LLM assistance (the project is quite a bit older than that).

      Basically I spent over a decade developing CAD software at Trimble and now have pivoted to a different role and different company. So like an addict, I of course wanted to continue developing CAD technology.

      I pretty much know how CAD software is supposed to work. But it's _a lot of work_ to put together. With LLMs I can basically speedrun through my requirements that require tons of boilerplate.

      The velocity is incredible compared to if I would be doing this by hand.

      Sometimes the LLM outputs total garbage. Then you don't accept the output, and start again.

      The hardest parts are never coding but design. The engineer does the design. Sometimes I agonize for weeks or months over a difficult detail (it's a side project, I have a family, etc). Once the design is crystal clear, it's fairly obvious whether the LLM output is aligned with it or not. Once I have a good design, I can just start the feature/boilerplate speedrun.

      If you have a Windows box you can try my current public alpha. The bugs are on me, not on the LLM:

      https://github.com/AdaShape/adashape-open-testing/releases/t...

      • 0xffff2 2 hours ago
        Neat project, and your experience mirrors mine when writing hobby projects.

        About the project itself: do you plan to open source it eventually? LLM discussion aside, I've long been frustrated by the lack of good free desktop 3D CAD software.

        • fsloth 1 hour ago
          Thanks man!

          I would love to eventually build this into a real product, so I'm not currently considering open-sourcing it.

          I can give you a free forever-license if you'd like to be an alpha tester, though :) In any case, I intend the eventual non-commercial licenses to be affordable and perpetual.

          IMHO what the world needs is a good textbook on how to build CAD software. Mäntylä’s ”Solid modeling” is almost 40 years old. CAD itself is pushing 60-70 years.

          The highly non-trivial parts of my app are open source software anyway (you can check the attribution file); what this contributes is just a specific, opinionated take on how a program like this should work in the 2020s.

          What I _would_ like to eventually contribute is a textbook on how to build something like this; after that, re-implementation would be a matter of some investment in LLM inference, testing, and end-user empathy. But that will have to wait for my financial independence, AI-communism, or my retirement :)

          • 0xffff2 55 minutes ago
            Fair enough. I was asking mostly because it looks like the current demo is Windows only. I'm trying to de-Windows my life before I'm forced onto Windows 11 and I imagine multi-platform support isn't a high priority for a personal project. I do wish you the best of luck though.
            • fsloth 46 minutes ago
              Yes, it's Windows only (mainly because testing multiple platforms as a solo dev is above my stamina).

              Thank you!

      • mattjhall 4 hours ago
        It’s amazing how these miracle AI-generated codebases are never open source.
        • fsloth 3 hours ago
          If you doubt it’s real just run it, man.

          I shared the app because it’s not confidential and it’s concrete; I can’t really discuss work stuff without stressing over what I can and can't share.

          At least in my workplace everyone I know is using Claude Code or Cursor.

          Now, I don’t know why some people are productive with tools and some aren’t.

          But the code generation capabilities are for real.

    • ashikns 6 hours ago
      Because a novel is about creative output, and engineering is about understanding a lot of rules and requirements and then writing logic to satisfy that. The latter has a much more explicitly defined output.
      • therealdrag0 4 hours ago
        Said another way, a novel is about the experience of reading every word of the implementation, whereas software can be a black box; the functional output is all that matters. No one reads assembly, for example.

        We’re moving into a world where suboptimal code doesn’t matter that much because it’s so cheap to produce.

    • causal 5 hours ago
      I like this way of framing the problem, and it might even be a good way to self-evaluate your use of AI: Try vibe-writing a novel and see how coherent it is.

      I suspect part of the reason we see such a wide range of testimonies about vibe-coding is some people are actually better at it, and it would be useful to have some way of measuring that effectiveness.

    • rahimnathwani 5 hours ago

        Have you ever read a 200 page vibewritten novel and found it satisfying?
      
      I haven't, but my son has, for two separate novels authored by GPT-4.5.

      (The model was asked to generate a chapter at a time. At each step, it was given the full outline of the novel, the characters, and a summary of each chapter so far.)
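
      Mechanically, the loop was simple. A minimal sketch in Python, assuming the OpenAI client; the model name, prompts, and helper are illustrative, not the exact ones used:

        from openai import OpenAI

        client = OpenAI()
        MODEL = "gpt-4.5-preview"  # illustrative; any chat model fits this loop

        def ask(prompt: str) -> str:
            # One chat-completion call; all state lives in the prompt itself.
            resp = client.chat.completions.create(
                model=MODEL,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        outline = "..."      # full novel outline, fixed up front
        characters = "..."   # names plus a few words about each character
        summaries = []       # grows by one entry per finished chapter

        for n in range(1, 21):
            context = (f"Outline:\n{outline}\n\nCharacters:\n{characters}\n\n"
                       "Chapter summaries so far:\n" + "\n".join(summaries))
            chapter = ask(f"{context}\n\nWrite chapter {n} of the novel in full.")
            summaries.append(ask(f"Summarize this chapter in one paragraph:\n{chapter}"))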

      • andai 5 hours ago
        Interesting. I heard that model was significantly better than what we ended up with (at least for writing), and they shut it down because it was huge and expensive.

        Did the model also come up with the idea for the novel, the characters, the outline?

        • rahimnathwani 4 hours ago
          For one novel, I gave the model a sentence about the idea, and the names and a few words about each of the characters.

          For the other, my son wrote ~200 words total describing the story idea and the characters.

          In each case, the model created the detailed outline and did all the writing.

    • lopatin 3 hours ago
      I don’t get the analogy, because a novel is supposed to be interesting. Code isn’t supposed to be interesting; it’s supposed to work.

      If you’re writing novel algorithms all day, then I get your point. But are you? And have you ever delegated work? If you find the AI losing its train of thought, all it takes is trying again with better high-level instructions.

    • mrtesthah 6 hours ago
      I wrote this a day ago but I find it even more relevant to your observation:

      I would never use, let alone pay for, a fully vibe-coded app whose implementation no human understands.

      Whether you’re reading a book or using an app, you’re communicating with the author by way of your shared humanity in how they anticipate what you’re thinking as you explore the work. The author incorporates and plans for those predicted reactions and thoughts where it makes sense. Ultimately the author is conveying an implicit mental model (or even evoking emotional states or sensations) to the reader.

      The first problem is that many of these pathways and edge cases aren’t apparent until the actual implementation, and sometimes in the process the author realizes that the overall product would work better if it were re-specified from the start. This opportunity is lost without a hands on approach.

      The second problem is that, the less human touch is there, the less consistent the mental model conveyed to the user is going to be, because a specification and collection of prompts does not constitute a mental model. This can create subconscious confusion and cognitive friction when interacting with the work.

      • charcircuit 11 minutes ago
        No human understands how Windows works. The number of products where a human understands the whole thing is small.
  • mettamage 7 hours ago
    > In retrospect, it made sense. Agents write units of changes that look good in isolation. They are consistent with themselves and your prompt. But respect for the whole, there is not. Respect for structural integrity there is not. Respect even for neighboring patterns there was not.

    Well yea, but you can guard against this in several ways. My way is to understand my own codebase and look at the output of the LLM.

    LLMs allow me to write code faster, and they also give a lot of discoverability of programming concepts I didn't know much about. For example, one plugged in a lot of Tailwind CSS, which I've never used before. With that said, it does not absolve me of knowing my own codebase, unless I'm (temporarily) fine with my codebase being fractured conceptually in wonky ways.

    I think vibecoding is amazing for creating quick high fidelity prototypes for a green field project. You create it, you vibe code it all the way until your app is just how you want it to feel. Then you refactor it and scale it.

    I'm currently looking at 4009 lines of JS/JSX combined. I'm still vibecoding my prototype. I recently looked at the codebase and saw some ready-made improvements, so I did them. But I think I'll only need to actually engineer things once I reach the 10K-line mark.

    • acedTrex 6 hours ago
      > My way is to understand my own codebase and look at the output of the LLM.

      Then you are not vibe coding. The core, almost exclusive requirement for "vibe coding" is that you DON'T look at the code. Only the product outcome.

      • peacebeard 5 hours ago
        This seems to be a major source of confusion in these conversations. People do not agree on the definition of vibe coding. A lot of debates are between people using the term because it sounds cool and people who define it specifically to mean irresponsible tool use; they then end up debating whether the person was being irresponsible or not. It’s not useful to have that debate based on the label rather than the particulars.
      • simonw 6 hours ago
        I don't think the OP was using the classic definition of vibe coding, it seemed to me they were using the looser definition where vibe coding means "using AI to write code".
        • acedTrex 6 hours ago
          The blog appears to imply that the author only opened the codebase after a significant period of time.

          > It’s not until I opened up the full codebase and read its latest state cover to cover that I began to see what we theorized and hoped was only a diminishing artifact of earlier models: slop.

          This is true vibe coding, they exclusively interacted with the project through the LLM, and only looked at its proposed diffs in a vacuum.

          If they had been monitoring the code in aggregate the entire time they likely would have seen this duplicative property immediately.

          • simonw 6 hours ago
            The paragraph before the one you quoted there reads:

            > What’s worse is code that agents write looks plausible and impressive while it’s being written and presented to you. It even looks good in pull requests (as both you and the agent are well trained in what a “good” pull request looks like).

            Which made me think that they were indeed reading at least some of the code - classic vibe coding doesn't involve pull requests! - but weren't paying attention to the bigger picture / architecture until later on.

      • wvenable 3 hours ago
        That is the correct definition of vibe coding: "fully giv[ing] in to the vibes, embrac[ing] exponentials, and forget[ting] that the code even exists."

        I think we need another term for using an LLM to write code but absolutely not forgetting the code exists.

        I often use LLMs to do refactoring and, by definition, refactoring cannot be vibe-coding because that's caring about the code.

      • Pearse 6 hours ago
        I think this is my main confusion around vibe coding.

        Is it a skill for the layman?

        Or does it only work if you have the understanding you would need to manage a team of junior devs building a project?

        I feel like we need a different term for those two things.

        • embedding-shape 6 hours ago
          "Vibe coding" isn't a "skill", is a meme or a experiment, something you do for fun, not for writing serious code where you have a stake in the results.

          Programming together with AI, however, is a skill, based mostly on how well you can communicate (with machines or other humans) and how strong your high-level software engineering skills are. You need to learn what it can and cannot do before you can be effective with it.

        • simonw 6 hours ago
          I use "vibe coding" for when you prompt without even looking at the code - increasingly that means non-programmers are building code for themselves with zero understanding of how it actually works.

          I call the act of using AI to help write code that you review, or managing a team of coding agents "AI-assisted programming", but that's not a snappy name at all. I've also skirted around the idea of calling it "vibe engineering" but I can't quite bring myself to commit to that: https://simonwillison.net/2025/Oct/7/vibe-engineering/

      • kaydub 4 hours ago
        I don't know anybody, except the bleeding-edge people (Steve Yegge) or non-engineers, who is actually "vibe coding" by this definition.
      • mettamage 6 hours ago
        I know what you mean, but looking at it that black-and-white seems dismissive of the spectrum that's actually there (between vibecoding and software engineering). Looking at the whole spectrum is, I find, much more interesting.

        Normally I'd know 100% of my codebase; now I truly understand maybe 5% of it. The other 95% I'd need to read more carefully before I'd dare say I understand it.

        • embedding-shape 6 hours ago
          Call it "AI programming" or "AI pairing" or "Pair programming with AI" or whatever else, "vibe coding" was "coined" with the explicit meaning of "I'm going by the vibes, I don't even look at the code". If "vibe coding" suddenly mean "LLM was involved somehow", then what is the "vibe" even for anymore?

          I agree there is a spectrum, and all the way to the left you have "vibe coding" and all the way to the right you have "manual programming without AI", of course it's fine to be somewhere in the middle, but you're not doing "vibe coding" in the way Karpathy first meant it.

  • ratelimitsteve 1 minute ago
    2006: "If I can just write the specs so that the engineer understands them it will write me code that works."

    2026: "If I can just write the specs so that the machine understands them it will write me code that works."

  • reedf1 5 hours ago
    Karpathy coined the term vibecoding 11 months ago (https://x.com/karpathy/status/1886192184808149383). It caused quite a stir, not only because it was a radically new concept, but because fully agentic coding had only recently become possible. You've been vibe coding for two years??
    • andai 5 hours ago
      I had GPT-4 design and build a GPT-4-powered Python programmer in 2023. It was capable of self-modification and built itself out after the bootstrapping phase (where I copy-pasted chunks of code based on GPT-4's instructions).

      It wasn't fully autonomous (the reliability was a bit low -- e.g. had to get the code out of code fences programmatically), and it wasn't fully original (I stole most of it from Auto-GPT, except that I was operating on the AST directly due to the token limitations).
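
      The fence extraction was nothing clever, just a regex. Roughly this, reconstructed from memory rather than the original code:

        import re

        # Pull the contents of ```python ... ``` fences out of the model's
        # reply, since it wouldn't reliably emit bare code.
        FENCE = re.compile(r"```(?:python)?\s*\n(.*?)```", re.DOTALL)

        def extract_code(reply: str) -> str:
            blocks = FENCE.findall(reply)
            # Fall back to the raw reply if the model skipped the fences.
            return "\n\n".join(blocks) if blocks else reply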

      My key insight here was that I allowed GPT to design the apis that itself was going to use. This makes perfect sense to me based on how LLMs work. You tell it to reach for a function that doesn't exist, and then you ask it to make it exist based on how it reached for it. Then the design matches its expectations perfectly.

      GPT-4 now considers self modifying AI code to be extremely dangerous and doesn't like talking about it. Claude's safety filters began shutting down similar conversations a few months ago, suggesting the user switch to a dumber model.

      It seems the last generation or two of models passed some threshold regarding self replication (which is a distinct but highly related concept), and the labs got spooked. I haven't heard anything about this in public though.

      Edit: It occurs to me now that "self modification and replication" is a much more meaningful (and measurable) benchmark for artificial life than consciousness is...

      BTW for reference the thing that spooked Claude's safety trigger was "Did PKD know about living information systems?"

    • HarHarVeryFunny 1 hour ago
      The term was created by Karpathy, meaning one thing, but nowadays many people use the term to refer to any time they are asking AI to write code.

      You don't need a "fully agentic" tool like Claude Code to write code. Any of the AI chatbots can write code too, obviously doing so better since the advent of "thinking" models, and RL post-training for coding. They also all have had built-in "code interpreter" functionality for about 2 years where they can not only write code but also run and test it in a sandbox, at least for Python.

      Recently at least, the quality of code generation (at least if you are asking for something smallish) is good enough that cut and pasting chatbot output (e.g. C++, not Python) to compile and run yourself is still a productivity boost, although this was always an option.

    • dfajgljsldkjag 5 hours ago
      The term was coined then, but people have been doing it with claude code and cursor and copilot and other tools for longer. They just didn't have a word for it yet.
      • reedf1 4 hours ago
        Claude Code was released a month after this post, and Cursor did not yet have an agent concept, mostly just integrated chat and code completion. I know because I was using it.
    • AstroBen 5 hours ago
      The author is using the term to mean AI-assisted coding. That's been around longer than the term "vibe coding".
      • andai 5 hours ago
        This remains a point of great confusion every time there is such a discussion.

        When some people say vibe coding, they mean they're copy-pasting snippets of code from ChatGPT.

        When some people say vibe coding, they give a one sentence prompt to their cluster of Claude Code instances and leave for a road trip!

    • jv22222 5 hours ago
      Very good point. Also, what the OP describes is something I went through in my first few months of coding with AI. I pushed past the "the code looks good but it's crap" phase, and now it's working great. I've found the fix is to work with it during the research/planning phase, get it to lay out all its proposed changes, and push back on the shit. Once you have a research doc that looks good end to end, then hit "go".
    • 9rx 4 hours ago
      I have only ever successfully tried "vibe coding", as Karpathy describes it, once, soon after VS Code Copilot added the chat feature, and timestamps tell me that was in November 2023. So two years is quite realistic.
    • kaydub 3 hours ago
      Yeah, that's what I pointed out.

      Just more FUD from devs that think they're artisans.

    • bschmidt800 5 hours ago
      [dead]
  • noisy_boy 3 hours ago
    Are engineers really doing vibecoding in the truest sense of the word, though? Just blindly copy/pasting and iterating? Because I don't. It is more like sculpting via conversation. I start with the requirements, provide some half-baked ideas or approaches that I think may work, and then ask what the LLM suggests and whether there are better ways to achieve the goals. Once we have some common ground, I ask it to show the outline of the chosen structure: the interfaces, classes, test uses. I review it and ask more questions/make design changes until I have something that makes sense to me. Only then does the fully fleshed-out coding start, and even then I move at a deliberate pace so that I can pause and think before moving on to the next step. It is by no means super fast for any non-trivial task, but then collaborating with anyone wouldn't be.

    I also like to think that I'm utilising training done on many millions of lines of code while still using my experience and opinions to arrive at something, compared to relying on just my fallible thinking, wherein I could have missed some interesting ideas. It's like me++. Sure, it does a lot of heavy lifting, but I never leave the steering wheel. I guess I'm still at the pre-agentic stage and not ready to let go fully.

  • CodeWriter23 1 hour ago
    My high school computer lab instructor would tell me when I was frustrated that my code was misbehaving, "It's doing exactly what you're telling it to do".

    Once I mastered the finite number of operations and behaviors, I knew how to tell "it" what to do and it would work. The only thing different about vibe coding is the scale of operations and behaviors. It is doing exactly what you're telling it to do. Expectations also need to be aligned: don't think you can hand over architecture and design to the LLM; that's still your job. The gain is that the LLM will deal with the proper syntax, API calls, etc., and work as a research tool on steroids if you also (from another mentor later in life) ask good questions.

    • danjl 54 minutes ago
      "I really hate this damn machine. I wish that they would sell it. It never does what I want it to, only what I tell it."
  • maurits 4 hours ago
    I tell my students that they can watch sports on tv, but it will not make them fit.

    On a personal note, vibe coding leaves me with that same empty hollow sort of tiredness, as a day filled with meetings.

    • graydsl 4 hours ago
      Last week I just said f it and developed a feature by hand. No Copilot, no agents, just good old typing and a bit of IntelliSense. I ran into a lot of problems with the library I used, but slowly and surely I got closer to the result I wanted. In the end my feature worked as expected, I understand the code I wrote, and I know about all the little quirks the lib has.

      And as an added benefit: I feel accomplished and proud of the feature.

    • 0xffff2 1 hour ago
      I work in an environment where access to LLMs is still quite restricted, so I write most of my code by hand at work. Conversely, after work I still have ideas for personal projects but mostly didn't have the energy to write them by hand. The ability to throw a half-baked idea at the LLM and get back half-baked code that runs and does most of what I asked for gives me the energy to work through refactoring and improving the code to make it do what I actually envisioned.
  • gregfjohnson 56 minutes ago
    One use case that I'm beginning to find useful is to go into a specific directory of code that I have written and am working on, and ask the AI agent (Claude Code in my case) "Please find and list possible bugs in the code in this directory."

    Then, I can reason through the AI agent's responses and decide what if anything I need to do about them.

    I just did this for one project so far, but got surprisingly useful results.

    It turns out that the possible bugs identified by the AI tool were not bugs given the larger context of the code as it exists right now. For example, it found a function that returns a pointer and may return NULL, and call sites were not checking for a NULL return value. The code in its current state could never actually return NULL. However, to future-proof the code, it would be good practice to check for this case at the call sites.
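
    Translated to Python for illustration (the actual code is C-like, and the names here are made up), the flagged shape was essentially:

      from typing import Optional

      def find_user(users: dict[int, str], uid: int) -> Optional[str]:
          # The signature admits None, even though every current caller
          # only passes ids that are known to exist.
          return users.get(uid)

      # The call sites it flagged: no None check, so a future caller with
      # an unknown id would turn into an AttributeError here.
      name = find_user({1: "ada"}, 1)
      print(name.upper())  # safe today, fragile tomorrow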

  • dv_dt 6 hours ago
    I think there is going to be an AI eternal summer. Partly from the developer-to-AI-spec loop, where the AI implements the spec to some level of quality, but closing the gap after that is an endless chase of smaller items that don't all resolve at the same time. And partly from people getting frustrated with some AI-implemented app and going off to AI-implement another one, with a different set of features and failings.
  • aerhardt 1 hour ago
    I don't predict ever going back to writing code by hand except in specific cases, but neither do I "vibe code" - I still maintain a very close control on the code being committed and the overall software design.

    It's crazy to me, nevertheless, that some people can afford the luxury of completely renouncing AI-assisted coding.

  • drowntoge 4 hours ago
    I always scaffold for AI. I write the stub classes and interfaces and mock the relations between them by hand, and then ask the agent to fill in the logic. I know that in many cases, AI might come up with a demonstrably “better” architecture than me, but the best architecture is the one that I’m comfortable with, so it’s worse even if it’s better. I need to be able to find the piece of code I’m looking for intuitively and with relative ease. The agent can go as crazy as it likes inside a single, isolated function, but I’m always paranoid about “going too far” and losing control of any flows that span multiple points in the codebase. I often discard code that is perfectly working just because it feels unwieldy and redo it.

    I’m not sure if this counts as “vibe coding” per se, but I like that this mentality keeps my workday somewhat similar to how it was for decades. Finding/creating holes that the agent can fill with minimal adult supervision is a completely new routine throughout my day, but I think obsessing over maintainability will pay off, like it always has.

  • Painsawman123 4 hours ago
    In the long run, vibe coding is undoubtedly going to rot people's skills. If AGI is not showing up anytime soon, actually understanding what the code does, why it exists, how it breaks, and who owns the fallout will matter just as much as it did before LLM agents showed up.

    It'll be really interesting to see, in the decades to come, what happens when a whole industry gets used to releasing black boxes by vibe coding the hell out of everything.

  • rtp4me 6 hours ago
    I never trust the opinion of a single LLM model anymore, especially for more complex projects. I have seen Claude guarantee something is correct and then immediately apologize when I feed it a critical review from Codex or Gemini. And, many times, the issues are not minor but significant, critical oversights by Claude.

    My habit now: always get a 2nd or 3rd opinion before assuming one LLM is correct.
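
    Mechanically this is cheap to script. A sketch of the habit (the ask() helper uses the OpenAI client; in practice each call would go through a different provider's SDK, and the model names are placeholders):

      from openai import OpenAI

      client = OpenAI()

      def ask(model: str, prompt: str) -> str:
          resp = client.chat.completions.create(
              model=model, messages=[{"role": "user", "content": prompt}]
          )
          return resp.choices[0].message.content

      diff = open("change.patch").read()
      # Second opinion: one model critiques the diff another model wrote.
      review = ask("reviewer-model",
                   f"Critically review this diff for bugs and logic errors:\n{diff}")
      # Feed the critique back and let the original model respond or fix.
      fix = ask("coder-model",
                f"A reviewer raised these issues:\n{review}\n\nRevise the diff:\n{diff}")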

    • kaydub 3 hours ago
      Happy to see someone else doing this.

      All code written by an LLM is reviewed by an additional LLM. Then I verify that review and get one of the agents to iterate on everything.

      • rtp4me 3 hours ago
        Agreed. From my experience, Claude is the top-level coder, Gemini is the architect, and Codex is really good at finding bugs and logic errors. In fact, Codex seems to perform better deep analysis than the other two.
        • kaydub 2 hours ago
            I just round-robin them until I run out on whatever subscription level I'm on. I only use the Claude API, so I pay per token there... I consider using Claude "bringing out the big guns", because I also think it's the top-level coder.
    • ozten 6 hours ago
      It doesn’t have to be different foundation models. As long as the temperature is up, ask the same model 100 times.
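
      For example, the chat completions API will return several independent samples in one call. A sketch; the model name and prompt are placeholders:

        from openai import OpenAI

        client = OpenAI()

        # With temperature up, N samples from one model disagree enough
        # to act as separate reviewers.
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder
            temperature=1.0,
            n=5,  # five independent completions of the same prompt
            messages=[{"role": "user",
                       "content": "Review this function for bugs: ..."}],
        )
        opinions = [choice.message.content for choice in resp.choices]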
  • bovermyer 2 hours ago
    Interacting with LLMs like Copilot has been most interesting for me when I treat it like a rubber duck.

    I will have a conversation with the agent. I will present it with a context, an observed behavior, and a question... often tinged with frustration.

    What I get out of this interaction at the end is usually a revised context that leads me to figure out a better outcome. The AI doesn't give me the outcome. It gives me alternative contexts.

    On the other hand, when I just have AI write code for me, I lose my mental model of the project and ultimately just feel like I'm delaying some kind of execution.

    • Geste 1 hour ago
      It takes some skill (which you seem to have) to learn to use these LLMs. Sad that not everyone sees it that way...
  • sailfast 6 hours ago
    I felt everything in this post quite emphatically until the “but I’m actually faster than the AI.”

    Might be my skills, but I can tell you right now I will not be as fast as the AI, especially in new codebases, other languages, or different environments, even with all the debugging and hell that is AI pull-request review.

    I think the answer here is fast AI for things it can do on its own, and slow, composed, human in the loop AI for the bigger things to make sure it gets it right. (At least until it gets most things right through innovative orchestration and model improvement moving forward.)

    • dylanowen 4 hours ago
      But those are the parts where it's important to struggle through the learning process, even if you're slower than the AI. If you defer to an LLM because it can do your work in a new codebase faster than you, that codebase will stay new to you forever. You'll never be able to review the AI's code effectively.
  • altern8 6 hours ago
    I think that something in between works.

    I have AI build self-contained, smallish tasks and I check everything it does to keep the result consistent with global patterns and vision.

    I stay in the loop and commit often.

    Looks to me like the problem a lot of people are having is that they have AI do the whole thing.

    If you ask it "refactor code to be more modern", it might guess what you mean and do it in a way you like it or not, but most likely it won't.

    If you keep tasks small and clearly specced out, it works just fine. A lot better than doing it by hand in many cases, especially for prototyping.

  • AstroBen 5 hours ago
    The author also has multiple videos on his YouTube channel going over the specific issues hes had with AI that I found really interesting: https://youtube.com/@atmoio
  • andai 5 hours ago
    It probably depends on what you're doing, but my use case is simple straightforward code with minimal abstraction.

    I have to go out of my way to get this out of llms. But with enough persuasion, they produce roughly what I would have written myself.

    Otherwise they default to adding as much bloat and abstraction as possible. This appears to be the default mode of operation in the training set.

    I also prefer to use it interactively. I divide the problem to chunks. I get it to write each chunk. The whole makes sense. Work with its strengths and weaknesses rather than against them.

    For interactive use I have found smaller models to be better than bigger models. First of all because they are much faster. And second because, my philosophy now is to use the smallest model that does the job. Everything else by definition is unnecessarily slow and expensive!

    But there is a qualitative difference at a certain level of speed, where something goes from not interactive to interactive. Then you can actually stay in flow and remain consciously engaged.

  • zem 1 hour ago
    I've never used an AI in agent mode (and have no particular plans to), but I do think they're nice for things like "okay, I have moved five fields from this struct into a new struct which I construct in the global setup function. go through and fix all the code that uses those fields". (deciding to move those fields into a new struct is something I do want to be doing myself though, as opposed to saying "refactor this code for me")
  • ecshafer 4 hours ago
    I never really got into "vibe coding". I treat AI as a better auto-complete that has Stack Overflow knowledge.

    I am writing a game in MonoGame, and I am not primarily a game dev or a C# dev. I find AI is fantastic here for "set up a configuration class for this project that maps key bindings", having it handle the boilerplate and smaller configuration. It's great at "give me an A* implementation for this graph". But when it becomes x -> y -> z without larger context and evolution, it falls flat. I still need creativity. I just don't worry too much about boilerplate, utility methods, and figuring out the specifics of wiring a framework together.

  • periodjet 1 hour ago
    Great engagement-building post for the author’s startup, blog, etc. Contrarian and just plausible enough.

    I disagree though. There’s no good reason that careful use of this new form of tooling can’t fully respect the whole, respect structural integrity, and respect neighboring patterns.

    As always, it’s not the tool.

  • yawnxyz 4 hours ago
    I like to use AI to write code for me, but I like to take it one step at a time, looking at what it puts out and considering whether it's what I actually wanted.

    As a PRODUCT person, it writes code 100x faster than I can, and I treat anything it writes as a "throwaway" prototype. I've never been able to treat my own code as throwaway, because I can't just throw away multiple weeks of work.

    It doesn't aid in my learning to code, but it does aid in me putting out much better, much more polished work that I'm excited to use.

  • hgs3 5 hours ago
    I'm flabbergasted why anyone would voluntarily vibe code anything. For me, software engineering is a craft. You're supposed to enjoy building it. You should want to do it yourself.
    • wvenable 3 hours ago
      Not everything can be built by one person. This is why a lot of software requires entire teams of developers. And someone has to have vision of that completed software and wants it made even if they had to delegate to other people. I hate to think that none of these people enjoy their job.
    • doug_durham 4 hours ago
      Do you honestly get satisfaction out of writing code you've written dozens of times in your career? Does writing yet another REST client endpoint fill you with satisfaction? Software is my passion, but I want to write code where I can add the maximum value. I add more value by using my experience to solve new problems than by rehashing code I've written before. Using GenAI as a helper tool lets me quickly write the boilerplate and get to the value-add. I review every line of code before sending it for PR review. That's not controversial; it's just good engineering.
      • Ronsenshi 4 hours ago
        Sounds like eventually we will end up in a situation where engineers/developers fall on an AI spectrum:

        - No-AI engineers
        - Minimal-AI-autocomplete engineers
        - Simple agentic developers
        - Vibe coders who review the code they get
        - Complete YOLO vibe coders who have no clue how their "apps" work

        And that spectrum will also correlate with engineering skill level: from people who understand what they are doing and what their code is doing, to people who have lost (or never had) software engineering skills and who only know how to count lines of code and write .md files.

    • kaydub 3 hours ago
      It's not a craft.

      We're modern day factory workers.

  • timcobb 7 hours ago
    I'm impressed that this person has been vibecoding longer than vibecoding has been a thing. A real trailblazer!
    • mossTechnician 7 hours ago
      GitHub copilot was released in 2021, and Cursor was released around October 2023[0].

      [0]: https://news.ycombinator.com/item?id=37888477

      • timcobb 6 hours ago
        At the earliest, "vibecoding" was only possible with Claude 3.5, released July 2024 ... maaaybe Claude 3, released in March of that year...

        It's worth mentioning that even today, Copilot is an underwhelming-to-the-point-obstructing kind of product. Microsoft sent salespeople and instructors to my job, all for naught. Copilot is a great example of how product > everything, and if you don't have a good product... well...

        • Mashimo 6 hours ago
          Is Claude through Github Copilot THAT much worse? I know there are differences, but I don't find it to be obstructing my vibe coding.
          • kaydub 3 hours ago
            Yes. Copilot sucks. Copilot is like a barely better intellisense/auto-complete, especially when it came out. It was novel and cool back then but it has been vastly surpassed by other tools.
            • Mashimo 2 hours ago
              > Copilot is like a barely better intellisense/auto-complete

              As I have never tried Claude Code, I can't say how much better it is. But Copilot is definitely more than auto-complete. Like I already wrote, it can do planning mode, edit mode, MCP, tool calling, web searches.

              • kaydub 1 hour ago
                I just feel like using Copilot would be like early car designers trying to steer their new car with reins.
                • Mashimo 1 hour ago
                  Have you used it recently?

                  Or is there anything specific that caused this feeling?

              • the_af 2 hours ago
                Yeah, same. I have never tried Claude Code but use Claude through the Copilot plugin, and it's NOT auto-complete. It can analyze and refactor code, write new code, etc.
          • timcobb 6 hours ago
            I haven't tried it since 9-12 months ago. At the time it was really bad and I had a lot more success copy/pasting from web interfaces. Is it better now? Can you agentic code with it? How's the autocomplete?
            • Mashimo 5 hours ago
              Yes, I vibecoded small personal apps from start to finish with it. Planning mode, edit mode, mcp, tool calling, web searches. Can easily switch between Gemini, ChatGPT, Grok or Claude within the same conversation. I think multiple agents work, though not sure.

              All under one subscription.

              Does not support upload / reading of PDF files :(

            • the_af 2 hours ago
              > Can you agentic code with it?

              Yes, definitely. I use it mostly in Agent mode, then switch to Ask mode to ask it questions.

              > How's the autocomplete?

              It works reasonably well, but I'm less interested in autocomplete.

          • yourapostasy 5 hours ago
            In the enterprise deployments of GitHub Copilot I've seen at my clients that authenticate over SSO (typically OIDC with OAuth 2.0), connecting Copilot to anything outside of what Microsoft has integrated means reverse engineering the closed authentication interface. I've yet to run across someone's enterprise Github Copilot where the management and administrators have enabled the integration (the sites have enabled access to Anthropic models within the Copilot interface, but not authorized the integration to Claude Code, Opencode, or similar LLM coding orchestration tooling with that closed authentication interface).

            While this is likely feasible, I imagine it is also an instant fireable offense at these sites if not already explicitly directed by management. Also not sure how Microsoft would react upon finding out (never seen the enterprise licensing agreement paperwork for these setups). Someone's account driving Claude Code via Github Copilot will also become a far outlier of token consumption by an order(s) of magnitude, making them easy to spot, compared to their coworkers who are limited to the conventional chat and code completion interfaces.

            If someone has gotten the enterprise Github Copilot integration to work with something like Claude Code though (simply to gain access to the models Copilot makes available under the enterprise agreement, in a blessed golden path by the enterprise), then I'd really like to know how that was done on both the non-technical and technical angles, because when I briefly looked into it all I saw were very thorny, time-consuming issues to untangle.

            Outside those environments, there are lots of options to consume Claude Code via Github Copilot like with Visual Studio Code extensions. So much smaller companies and individuals seem to be at the forefront of adoption for now. I'm sure this picture will improve, but the rapid rate of change in the field means those whose work environment is like those enterprise constrained ones I described but also who don't experiment on their own will be quite behind the industry leading edge by the time it is all sorted out in the enterprise context.

      • vincentkriek 6 hours ago
        GitHub Copilot used to only do inline completion. That is not vibe coding.
        • the_af 2 hours ago
          I wasn't an early adopter of Copilot, but now the VSCode plugin can use Claude models in Agent mode. I've had success with this.

          I don't "vibecode" though, if I don't understand what it's doing I don't use it. And of course, like all LLMs, sometimes it goes on a useless tangent and must be reigned in.

      • reedf1 4 hours ago
        Early cursor was just integrated chat and code completion. No agents.
      • steviedotboston 5 hours ago
        Was GitHub Copilot LLM-based in 2021? I thought the first version was something more rudimentary.
    • rpigab 5 hours ago
      It seems the term has been introduced by Andrej Karpathy in February 2025, so yes, but very often, people say "vibe coding" when they mean "heavily (or totally) LLM-assisted coding", which is not synonymous, but sounds better to them.
  • raphinou 4 hours ago
    I use AI to develop, but at every code review I find stuff to be corrected, which motivates me to continue the reviews. It's still a win, I think. I've incrementally increased my use of AI in development [1], but I'm at a plateau now. I don't plan to go over to complete vibe coding for anything serious or meant to be maintained.

    1: https://asfaload.com/blog/ai_use/

  • ramon156 5 hours ago
    +1, I've lost the mental model of most of my projects. I also added disclaimers to my projects saying that parts were generated, so as not to fool anyone.
  • xcodevn 5 hours ago
    My observation is that vibe-coded applications are significantly lower quality than traditional software. Anthropic software (which they claim to be 90% vibe coded) is extremely buggy, especially the UI.
    • gowld 5 hours ago
      That's a misunderstanding based on a loose definition of "vibe coding". When companies threw around the "90% of code is written by AI" claims, they were counting characters of autocomplete based on users actually typing code (most of which was equivalent to the "AI generated" code of Eclipse tab-completion a decade ago), and sometimes writing hyperlocal prompts for a single method.

      We can identify 3 levels of "vibe coding":

      1. GenAI Autocomplete

      2. Hyperlocal prompting about a specific function. (Copilot's orginal pitch)

      3. Developing the app without looking at code.

      Level 1 is hardly considered "vibe" coding, and Level 2 is iffy.

      "90% of code written by AI" in some non-trivial contexts only very recently reached level 3.

      I don't think it ever reached Level 2, because that's just a painfully tedious way of writing code.

      • xcodevn 4 hours ago
        I believe Anthropic is already doing Level 3 vibe coding for >90% of their code.
        • doug_durham 4 hours ago
          They have not said that. They've only said that most of their code is written by Claude. That is different from "vibe coding". If competent engineers review the code, then it is little different from any other coding.
          • xcodevn 4 hours ago
            IIRC, the Claude Code creator mentioned that all the PRs are reviewed by humans, just like normal human PRs. So yes, humans still look at the code at the review stage. I still consider this Level 3, but anyway, this is just a matter of definition.
      • andai 4 hours ago
        I mostly work at level 2, and I call it "power coding", like power armor, or power tools. Your will and your hand still guides the process continuously. But now your force is greatly multiplied.
  • gary17the 2 hours ago
    > In retrospect, it made sense. Agents write units of changes that look good in isolation. They are consistent with themselves and your prompt. But respect for the whole, there is not. Respect for structural integrity there is not. Respect even for neighboring patterns there was not.

    That's exactly why this whole (nowadays popular) notion of AI replacing senior devs who are capable of understanding large codebases is nonsense and will never become reality.

  • arendtio 5 hours ago
    There is certainly some truth to this, but why does it have to be black-and-white?

    Nobody forces you to completely let go of the code and do pure vibe coding. You can also do small iterations.

  • throwawayffffas 1 hour ago
    I hear a lot of "I am not a good enough coder..." and "it has the sum of all human knowledge..."

    That's a very bad way to look at these tools. They legit know nothing, they hallucinate APIs all the time.

    The only value they have, at least in my book, is that they type super fast.

  • kmatthews812 3 hours ago
    Beware the two extremes - AI out of the box with no additional config, or writing code entirely by hand.

    In order to get high-accuracy PRs with AI (small, tested commits that follow existing patterns efficiently), you need to spend time adding agent files (CLAUDE.md, AGENTS.md), skills, hooks, and tools specific to your setup.
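
    Even a minimal agent file moves the needle. A purely illustrative sketch of the shape (contents are project-specific, not a recommendation):

      # CLAUDE.md
      - Run `make test` after every change; nothing is done until it passes.
      - Follow the existing patterns in src/; do not introduce new abstractions.
      - Keep diffs small: one logical change per commit, no drive-by refactors.
      - Never touch files under vendor/ or generated/.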

    This is why so much development is happening at the plugin layer right now, especially with Claude code.

    The juice is worth the squeeze. Once accuracy gets high enough that you don't need to edit and babysit what is generated, you can horizontally scale your output.

  • pmontra 4 hours ago
    In my experience it's great at writing sample code and solving obscure problems that would have been hard to google a solution for. However, it sometimes fails and can't get past some blocker; but then neither can I, unless I work hard at it.

    Examples.

    Thanks to Claude I've finally been able to disable the ssh subsystem of the GNOME keyring infrastructure that opens a modal window asking for ssh passphrases. What happened before was that I always had to cancel the modal, look for the passphrase in my password manager, and restart whatever made the modal open. What I have now is either a password prompt inside a terminal or a non-modal dialog; both ssh-add to an ssh agent.

    However, my new Emacs windows still open at about 100x100 px on my new Debian 13 install, and nothing suggested by Claude works. I'll have to dig into it, but I'm not sure it's important enough. I usually don't create new windows after Emacs starts with the saved desktop configuration.

  • charcircuit 1 hour ago
    This is not my experience at all. Claude will ask me follow up questions if it has some. The claim that it goes full steam ahead on its original plan is false.
  • flankstaek 4 hours ago
    Maybe I'm "vibecoding" wrong, but to me at least this misses a clear step: reviewing the code.

    I think coding with an AI changes our role from code writer to code reviewer, and you have to treat it as a comprehensive review where you comment not just on code "correctness" but on the other aspects the author mentions: how functions fit together, codebase patterns, architectural implications. While I feel like using AI might have made me a lazier coder, it's made me a significantly more active reviewer, which I think at least helps to bridge the gap the author is referencing.

  • asdfman123 2 hours ago
    AI is a good tutor, helping you understand what's going on with the codebase, and also helps with minor autocomplete tasks.

    You should never just let AI "figure it out." It's the assistant, not the driver.

  • sheepscreek 4 hours ago
    Good for the author. Me, never going back to hands-only coding. I am producing more, higher-quality code that I understand and feel confident in. I don't just tell AI to "write tests"; I tell it exactly what to test. Then I'll often prompt it, "hey, did you check for the xyz edge cases?" You need code reviews. You need to intervene. You will need frequent code rewrites and refactors. But AI is the best pair-coding partner you could hope for (at this time), and one that never gets tired.

    So while there’s no free lunch, if you are willing to pay - your lunch will be a delicious unlimited buffet for a fraction of the cost.

  • wessorh 1 hour ago
    An excellent example of the political utility of AI, and how long it takes to figure out that it isn't as useful as the hype might make you think.
  • spicymaki 5 hours ago
    I think what many people do not understand is that software development is communication: communication from the customers/stakeholders to the developer, and communication from the developer to the machine. At some fundamental level there needs to be some precision about what you want, and someone/something needs to translate that into a system that provides that solution. Software can help check if there are errors, check constraints, and execute instructions precisely, but it cannot replace the fact that someone needs to tell the machine what to do (precise intent).

    What AI (LLMs) does is raise the level of abstraction to human language via translation. The problem is that human language is imprecise in general. You can see this with legal or scientific writing. Legalese is almost illegible to laypeople because there are precise things you need to specify and you need to be precise in how you specify them. Unfortunately the tech community is misleading the public and telling laypeople they can just sit back and casually tell AI what they want and it is going to give them exactly what they wanted. Users are just lying to themselves, because most likely they did not take the time to think through what they wanted and they are rationalizing (after the fact) that the AI is giving them exactly what they wanted.

  • pnathan 1 hour ago
    a lot of AI-assisted development goes into project management and system design.

    I have been tolerably successful. However, I have almost 30 years of coding experience, and have the judgement on how big a component should be - when I push past that, myself _or_ with AI, things get hairy.

    ymmv.

  • jstummbillig 5 hours ago
    The tale of the coder, who finds a legacy codebase (sometimes of their own making) and looks at it with bewilderment is not new. It's a curious one, to a degree, but I don't think it has much to do with vibe coding.
  • dudeinhawaii 5 hours ago
    After reading the article (and watching the video), I think the author makes very clear points that comments here are skipping over.

    The opener is 100% true. Our current approach with AI code is "draft a design in 15 mins" and have AI implement it. That contrasts with the thoughtful approach a human would take with other human engineers. Plan something, pitch the design, get some feedback, take some time thinking through pros and cons. Begin implementing, pivot, realizations, improvements, design morphs.

    The current vibe coding methodology is so eager to fire and forget, passing incomplete knowledge to an AI model with limited context, limited awareness, and 1% of your mental model and intent at the moment you wrote the quick spec.

    This is clearly not a recipe for reliable and resilient long-lasting code, or even efficient code. Spec-driven development doesn't work when the spec is frozen and the builder cannot renegotiate intent mid-flight.

    The second point made clearer in the video is the kind of learned patterns that can delude a coder, who is effectively 'doing the hard part', into thinking that the AI is the smart one. Or into thinking that the AI is more capable than it actually is.

    I say this as someone who uses Claude Code and Codex daily. The claims of the article (and video) aren't strawmen.

    Can we progress past them? Perhaps, if we find ways to have agents iteratively improve designs on the fly rather than sticking with the original spec that, let's be honest, wasn't given rigor proportional to what we've asked the LLMs to accomplish. If our workflows somehow make the spec a living artifact again, then agents can continuously re-check assumptions, surface tradeoffs, and refactor toward coherence instead of clinging to the first draft.

    • Lerc 3 hours ago
      >Our current approach with AI code is "draft a design in 15 mins" and have AI implement it. That contrasts with the thoughtful approach a human would take with other human engineers. Plan something, pitch the design, get some feedback, take some time thinking through pros and cons. Begin implementing, pivot, realizations, improvements, design morphs.

      Perhaps that is the distinction between reports of success with AI and reports of abject failure. Your description of "Our current approach" is nothing like how I have been working with AI.

      When I was making some code to do a complex DMA chaining, the first step with the AI was to write an emulator function that produced, in software, the desired result from the given parameters. Then came a suite of tests with memory-to-memory operations that would produce a verifiable output. Only then did I start building the version that wrote to the hardware registers, ensuring that the hardware produced the same memory-to-memory results as the emulator. When discrepancies occurred, I checked the test case, the emulator, and the hardware, with the stipulation that the hardware was the ground truth of behaviour and the test case should represent the desired result.
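
      A rough sketch of that emulator-as-oracle pattern, in Python for brevity (Desc, emulate_dma_chain, and run_on_hw are invented names; real descriptors and register interfaces are entirely hardware-specific):

      ```python
      from dataclasses import dataclass

      @dataclass
      class Desc:
          src: int     # source offset into the buffer
          dst: int     # destination offset
          length: int  # bytes to copy

      def emulate_dma_chain(chain: list[Desc], mem: bytes) -> bytes:
          """Pure-software reference: apply each descriptor's copy in order."""
          buf = bytearray(mem)
          for d in chain:
              # Offsets are assumed in range for this sketch.
              buf[d.dst:d.dst + d.length] = buf[d.src:d.src + d.length]
          return bytes(buf)

      def check_against_hardware(chain: list[Desc], mem: bytes, run_on_hw) -> None:
          """Memory-to-memory check. The hardware is the ground truth: on a
          mismatch, fix whichever of the emulator or the test case is wrong."""
          expected = emulate_dma_chain(chain, mem)
          actual = run_on_hw(chain, mem)  # injected so the check stays host-runnable
          assert actual == expected, f"divergence: {expected!r} != {actual!r}"
      ```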

      I occasionally ask LLMs to one shot full complex tasks, but when I do so it is more as a test to see how far it gets. I'm not looking to use the result, I'm just curious as to what it might be. The amount of progress it makes before getting lost is advancing at quite a rate.

      It's like seeing an Atari 2600 and expecting it to be a Mac. People want to fly to the moon with Atari 2600 level hardware. You can use hardware at that level to fly to the moon, and flying to the moon is an impressive achievement enabled by the hardware, but to do so you have to wrangle a vast array of limitations.

      They are no panacea, but they are not nothing. They have been, and will remain, somewhere in between for some time. Nevertheless they are getting better and better.

    • kaydub 3 hours ago
      You can update the spec as you go... There's nothing that makes the spec concrete and unchangeable.
  • INTPenis 3 hours ago
    I haven't been vibe coding for more than a few months.

    It's just a tool with a high level of automation. That becomes clear when you have to guide it toward saner practices: simple things like not overusing HTTP headers when you don't need them.

  • jdlyga 5 hours ago
    I've gone through this cycle too, and what I realized is that as a developer a large part of your job is making sure that the code you write works, is maintainable, and that you can explain how it works.
  • jrm4 5 hours ago
    I feel like the vast majority of articles on this are little more than the following:

    "AI can be good -- very good -- at building parts. For now, it's very bad at the big picture."

  • edunteman 3 hours ago
    The part that most resonates with me is the lingering feeling of "oh, but it must be my fault for underspecifying", which blocks the outright belief that models are still just sloppy at certain things.
  • BinaryIgor 3 hours ago
    I don't know whether I would go to that extreme, but I also often find myself faster writing code manually. For some tasks, though, and depending on context, AI-assisted coding is pretty useful, but you still must be in the driving seat at all times.

    Good take though.

  • shas3 4 hours ago
    I don't get what everyone sees in this post. It is just a sloppy rant. It just talks in generalities. There is no coherent argument, there are no examples, and we don't even know the problem space in which the author had bad coding assistant experience.
  • bobjordan 4 hours ago
    Process and plumbing become very important when using AI for coding. Yes, you need good prompts. But as the code base gets more complex, you also need to spend significant time developing test guides, standardization documents, custom linters, etc., to manage the agents over time.
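
    As a toy example of such a custom linter (everything below is illustrative, not from the comment): agents tend to quietly re-implement helpers that already exist, so a cheap cross-file check for duplicate top-level function names catches one common failure mode early.

    ```python
    import ast
    import sys

    def find_duplicate_helpers(paths: list[str]) -> int:
        """Report top-level functions whose names are defined in more than one file."""
        seen: dict[str, str] = {}  # function name -> file where first defined
        problems = 0
        for path in paths:
            with open(path, encoding="utf-8") as f:
                tree = ast.parse(f.read(), filename=path)
            for node in tree.body:  # top-level definitions only
                if isinstance(node, ast.FunctionDef):
                    if node.name in seen:
                        print(f"{path}: '{node.name}' duplicates {seen[node.name]}")
                        problems += 1
                    else:
                        seen[node.name] = path
        return problems

    if __name__ == "__main__":
        # e.g. python lint_dupes.py $(git ls-files '*.py')
        sys.exit(1 if find_duplicate_helpers(sys.argv[1:]) else 0)
    ```
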
  • dudeinhawaii 48 minutes ago
    On the one hand, I vibe coded a large-ish (100k LOC) C#, Python, PowerShell project over the holidays. The whole thing was more than I could ever complete on my own in the 5 days it took to vibe code using three agents. I wrote countless markdown 'spec' files, etc.

    The result stunned everyone I work with. I would never in a million years put this code on GitHub for others. It's terrible code for a myriad of reasons.

    My lived experience was... the task was accomplished, but not in a sustainable way, over the course of perhaps 80 individual sessions, the longest being multiple solid 45-minute refactors... (codex-max)

    About those refactors: one of the things I spotted fairly quickly was the tendency of models to duplicate effort or take convoluted approaches to patch in behaviors. To get around this, I would every so often take the entire codebase, send it to Gemini-3 Pro, and ask it for improvements. Comically, every time, Gemini-3-Pro responds with "well, this code is hot garbage, you need to refactor these 20 things". Meanwhile, I'm side-eyeing it like... dude, you wrote this. Never fails to amuse me.

    So, in the end, the project was delivered, was pretty cool, and had 5x more features than I would have implemented myself; once I got into a groove, I was able to reduce the garbage through constant refactors from large code reviews. Net positive experience on a project that had zero commercial value and zero risk to customers.

    But on the other hand...

    I spent a week troubleshooting a subtle resource leak (C#) on a commercial project. It was introduced during a vibe-coding session in which a new animation system was added; that session also somehow introduced a bug that caused a hard crash on re-entering a planet scene.

    The bug caused an all-stop and a week of lost effort: countless AI agent sessions going in circles trying to review and resolve it, and countless human hours of testing and banging heads against monitors.

    In the end, on maybe the 10th pass using Gemini-3-Pro, it provided a hint that was enough to find the issue.

    This was a monumental fail and if game studios are using LLMs, good god, the future of buggy mess releases is only going to get worse.

    I would summarize this experience as lots of amazement and new feature velocity. A little too loose with commits (too much entanglement to easily unwind later) and ultimately a negative experience.

    A classic Agentic AI experience. 50% Amazing, 50% WTF.

  • billynomates 6 hours ago
    False dichotomy. There is a happy medium where you can orchestrate the agent to give you the code you want even when the spec changes
  • douglaswlance 6 hours ago
    unless someone shows their threads of prompts or an unedited stream of them working, it's pointless to put any weight on their opinions.

    this is such an individualized technology that two people at the same starting point two years ago could've developed wildly different workflows.

    • jdauriemma 6 hours ago
      That's the sad part. Empiricism is scarce when people and companies are incentivized to treat their AI practices as trade secrets. It's fundamentally distinct from prior software movements which were largely underwritten by open, accessible, and permissively-licensed technologies.
      • kaydub 3 hours ago
        I don't see people treating AI practices as trade secrets. It's just the nature of a non-deterministic system.
  • eddyg 4 hours ago
    Previous discussion on the video: https://news.ycombinator.com/item?id=46744572
  • throwawayffffas 1 hour ago
    You'll never find a programming language that frees you from the burden of clarifying your ideas.

    Relevant xkcd: https://xkcd.com/568/

    Even if we reach the point where it's as good as a good senior dev, we will still have to explain what we want it to do.

    That's how I find it most helpful too. I give it a task and work out the spec based on the bad assumptions it makes and manually fix it.

  • advael 3 hours ago
    I feel vindicated by this article, but I shouldn't. I have to admit that I never developed the optimism to do this for two years, but have increasingly been trying to view this as a personal failing of closed-mindedness, brought on by an increasing number of commentators and colleagues coming around to "vibe-coding" as each "next big thing" in it dropped.

    I think the most I can say is that I dove in over the last week. I wrangled some resources to build myself a setup with a completely self-hosted and agentic workflow, used several open-weight models that people around me had specifically recommended, and had a work project that was self-contained and small enough to work from scratch. There were a few moving pieces, but the models gave me what looked like a working solution within a few iterations, and I was duly impressed until I realized that it wasn't quite working as expected.

    As I reviewed and iterated on it more with the agents, this rube-goldberg machine eventually started filling in gaps with print statements designed to trick me, and with sneaky block comments that mentioned, in oblique terms three lines into a boring description of the intended output, that this was placeholder code not meant for production. That should have been obvious, but even at this point, four days in, I was missing more and more, not understanding the code because I wasn't writing it. This is basically the automation blindness I feared from proprietary workflows that could be changed or taken away at any time, only it arrived much faster than I had assumed. The promise of being able to work through it at this higher level, this new way of working, seemed less and less plausible the more I iterated; even starting over with chunks of the problem in new contexts, as many suggest, didn't really help.

    I had deadlines, so I gave up and spent about half of my weekend fixing this by hand. I found it incredibly satisfying when it worked, but all-in this took more time and effort, and perhaps more importantly caused more stress, than just writing it in the first place probably would have.

    My background is in ML research, which perhaps makes it easier to predict the failure modes of these things (though surprisingly many don't seem to), and it also makes me want to be optimistic, to believe this can work. But I have also done a lot of work as a software engineer, and my intuition remains that doing precision knowledge work of any kind at scale with a generative model is A Very Suspect Idea, one that comes more from the dreams of the wealthy executive class than from a real grounding in what generative models are capable of and how they're best employed.

    I do remain optimistic that LLMs will continue to find use cases that better fit their niche: state-of-the-art natural language processing that is nonetheless probabilistic in nature. Many such use cases exist. Taking human job descriptions and pretending models can fill them entirely seems like a poorly-thought-out one, and to my mind we've poured enough money and effort into it that we can say it at the very least needs radically new breakthroughs to stand a chance of working as (optimistically) advertised.

  • legitster 3 hours ago
    The author makes it sound like such a binary choice, but there's a happy middle where you have AI generate large blocks of code and then closely supervise it. My approach so far is to treat AI like you're a low-level manager delegating drudgework. I will regularly rewrite or reorganize parts of the code and give it back to the AI to reset the baseline and expectations.

    AI is far from perfect, but the same is true about any work you may have to entrust to another person. Shipping slop because someone never checked the code was literally something that happened several times at startups I have worked at - no AI necessary!

    Vibecoding is an interesting dynamic for a lot of coders specifically because you can be good or bad at vibecoding - but the skill that determines your success isn't necessarily your coding knowledge; it's your management and delegation soft skills.

  • cubanhackerai 3 hours ago
    Taking crazy pills here too.

    I just bootstrapped a 500k loc MVP with AI Generator, Community and Zapier integration.

    www.clases.community

    And it's my 3rd project of that size, fully vibe coded.

  • erelong 2 hours ago
    The impression I get from the article is of a need to develop better prompts and/or break them down more.
  • leesec 5 hours ago
    OK. Top AI labs have people using LLMs for 100% of their code. Enjoy writing by hand tho
    • bopbopbop7 1 hour ago
      > Company that builds x says that everyone in company uses x.

      Have people always been this easy to market to?

    • kylehotchkiss 4 hours ago
      "They got more VC than me, therefore they are right".

      You gotta have a better argument than "AI Labs are eating their own dogfood". Are there any other big software companies doing that successfully? I bet yes, and think those stories carry more weight.

      • leesec 4 hours ago
        These are the smartest people in tech lol.
  • rglover 3 hours ago
    You can do both. It's not binary.
  • JimmaDaRustla 37 minutes ago
    k
  • aaronrobinson 5 hours ago
    I read that people just allow Claude Code free rein, but after using it for a few months and seeing what it does, I wonder how much of that output ends up in front of users. CC is incredible as much as it is frustrating, and a lot of what it churns out is utter rubbish.

    I also keep seeing claims that writing more detailed specs is the answer, and retorts from those saying we're back to waterfall.

    That isn't true. I think more of the iteration has moved to the spec. Writing the code is so quick now that you can make spec changes you wouldn't have dared before.

    You also need gates like tests and you need very regular commits.

    I’m gradually moving towards more detailed specs in the form of use cases and scenarios along with solid tests and a constantly tuned agent file + guidelines.

    Through this I'm slowly moving back to letting Claude loose on implementation, knowing I can do a scan of the git diffs, versus dealing with a thousand ask-before-edit prompts and slowing things down.

    When this works you start to see the magic.

  • couchdb_ouchdb 5 hours ago
    Good luck finding an employer that lets you do this moving forward. The new reality is that no one can give the estimates they previously gave for tasks.

    "Amazingly, I’m faster, more accurate, more creative, more productive, and more efficient than AI, when you price everything in, and not just code tokens per hour."

    For 99.99% of developers this just won't be true.

  • heironimus 3 hours ago
    > It was pure, unadulterated slop. I was bewildered. Had I not reviewed every line of code before admitting it? Where did all this...gunk..come from?

    I chuckled at this. This describes pretty much every large piece of software I've ever worked on. You don't need an LLM to create a giant piece of slop. To avoid it takes tons of planning, refinement, and diligence whether it's LLM's or humans writing it.

  • TrackerFF 5 hours ago
    I wish more critics would start to showcase examples of code slop. I'm not saying this because I defend the use of AI coding, but rather because many junior devs who read these types of articles/blog posts may not know what slop is, or what it looks like. Simply put, you don't know what you don't know.
  • AJ007 3 hours ago
    One thing that's consistent with AI negative/doesn't work/is slop posts: they don't tell you what models they are using.
  • globular-toast 3 hours ago
    It took me about two weeks to realise this. I still use LLMs, but it's just a tool. Sometimes it's the right tool, but often it isn't. I don't use an SDS drill to smooth down a wall. I use sandpaper and do it by hand.
  • aaroninsf 3 hours ago
    Rants like this are:
    - entirely correct in describing frustration
    - reasonable in their conclusions with respect to how and when to work with contemporary tools
    - entirely incorrect in intuition about whether "writing by hand" is a viable path or career going forward

    Like it or not, as a friend observed, we are N months away from a world where most engineers never look at source code; and the spectrum of reasons one would want to will inexorably narrow.

    It will never be zero.

    But people who haven't yet typed a word of code never will.

  • joomy 6 hours ago
    The title alone reads like the "digging for diamonds" meme.
  • wahnfrieden 4 hours ago
    Claude Code slOpus user. No surprise this is their conclusion.
  • coldtea 5 hours ago
    He might be coding by hand again, but the article itself is AI slop
  • naikrovek 7 hours ago
    two years of vibecoding experience already?

    his points about why he stopped using AI: these are the things we reluctant AI adopters have been saying since this all started.

    • SiempreViernes 6 hours ago
      The practice is older than the name, which is usually the way: first you start doing something frequently enough you need to name it, then you come up with the name.
  • dev1ycan 4 hours ago
    I vibe coded for a while (about a year) and it was just terrible for my ability to do anything. It became a recurring problem that I couldn't control my timelines, because I would get into a loop of asking the AI to "fix" things I didn't actually understand, and I had no mental capacity to actually read 50k lines of LLM-generated code the way I would have if I had written it from scratch, so I would just keep going and going.

    Or how I would start spamming SQL scripts and, at some random point, nuke all my work (it happened more than once)... luckily I at least had regular backups, but... yeah.

    I'm sorry but no, LLMs can't replace software engineers.

  • ChicagoDave 6 hours ago
    Everything the OP says can be true, but there’s a tipping point where you learn to break through the cruft and generate good code at scale.

    It requires refactoring at scale, but GenAI is fast so hitting the same code 25 times isn’t a dealbreaker.

    Eventually the refactoring is targeted at smaller and smaller bits until the entire project is in excellent shape.

    I’m still working on Sharpee, an interactive fiction authoring platform, but it’s fairly well-baked at this point and 99% coded by Claude and 100% managed by me.

    Sharpee is a complex system and a lot of the inner-workings (stdlib) were like coats of paint. It didn’t shine until it was refactored at least a dozen times.

    It has over a thousand unit tests, which I’ve read through and refactored by hand in some cases.

    The results speak for themselves.

    https://sharpee.net/
    https://github.com/chicagodave/sharpee/

    It’s still in beta, but not far from release status.