Is it a bubble?

(oaktreecapital.com)

284 points | by saigrandhi 1 day ago

59 comments

  • sp4cec0wb0y 1 day ago
    > In many advanced software teams, developers no longer write the code; they type in what they want, and AI systems generate the code for them.

    What a wild and speculative claim. Is there any source for this information?

    • sethammons 1 day ago
      At $WORK, we have a bot that integrates with Slack and sets up minor PRs: adjusting tf, updating endpoints, adding simple handlers. It does pretty well.

      Also, in a case of pure prose-to-code, Claude wrote up a concurrent data migration utility in Go. When I reviewed it, it wasn't managing goroutines or waitgroups well, and the whole thing was a buggy mess that could not be gracefully killed. I would have written it faster by hand, no doubt. I think I know more now and the calculus may be shifting on my AI usage. However, the following day, my colleague needed a nearly identical temporary tool. A 45-minute session with Claude of "copy this thing but do this other stuff" easily saved them 6-8 hours of work. And again, that was just talking with Claude.
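
      To make concrete what "managing goroutines and waitgroups well" would have looked like for a tool like that, here's a minimal sketch (hypothetical names, not the utility Claude actually produced): a context cancelled on SIGINT/SIGTERM plus a WaitGroup, so in-flight rows finish and the process can actually be killed gracefully.

        // Minimal sketch of a gracefully-killable concurrent migration worker pool.
        // All names here are made up; this is not the actual tool from the anecdote.
        package main

        import (
            "context"
            "fmt"
            "os/signal"
            "sync"
            "syscall"
        )

        func main() {
            // Cancel the context on SIGINT/SIGTERM so workers can drain and exit.
            ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
            defer stop()

            rows := make(chan int)
            var wg sync.WaitGroup

            for i := 0; i < 4; i++ {
                wg.Add(1)
                go func(id int) {
                    defer wg.Done()
                    for {
                        select {
                        case <-ctx.Done():
                            return // shutdown requested: stop picking up new rows
                        case row, ok := <-rows:
                            if !ok {
                                return // channel closed: no more work
                            }
                            fmt.Printf("worker %d migrated row %d\n", id, row)
                        }
                    }
                }(i)
            }

        produce:
            for row := 0; row < 100; row++ {
                select {
                case <-ctx.Done():
                    break produce // stop feeding work once shutdown is requested
                case rows <- row:
                }
            }
            close(rows)
            wg.Wait() // every in-flight row finishes before the process exits
        }

      The point of that shape: Ctrl-C stops new work from being handed out, but whatever is in flight completes before the process exits.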

      I am really taking a hybrid approach. I write much of my scaffolding, I write example code, I modify quick things the AI made to be more like what I want, I set up guard rails and some tests, then have the AI go to town. Results are mixed but still trending up.

      FWIW, our CEO has declared us to be AI-first, so we are to leverage AI in everything we do, which I think is misguided. But you can bet they will be reviewing AI usage metrics, and lower won't be better at $WORK.

      • yellow_lead 20 hours ago
        You should periodically ask Claude to review random parts of code to pump your metrics.
        • giancarlostoro 17 hours ago
          It has the net benefit that it points out things that are actually wrong and overlooked.
          • strken 12 hours ago
            AI reviews have the benefit of making me feel like an idiot in one bullet point and then a genius in the next.
          • rasz 16 hours ago
            But it also points out tons of your deliberate design choices as bugs, and will recommend removing things it doesn't understand.
            • giancarlostoro 5 hours ago
              Great time to research whether those choices are still valid or if there's a better way. In any case, it's just an overview, not a total rewrite from the AI's perspective.
            • rgbrgb 13 hours ago
              just like any junior dev
              • rozap 13 hours ago
                consider rewriting in rust
                • s1mplicissimus 10 hours ago
                  that's gonna be painful, as the borrow checker really trips up LLMs
                  • jmalicki 3 hours ago
                    I do a lot of LLM work in Rust, and I find the type system is a huge defense against errors and hallucinations vs JavaScript or even TypeScript.
        • lovich 14 hours ago
          Why periodically? Just set it up in an agentic workflow and have it work until your token limit is hit.

          If companies want to value something as dumb as LoC then they get what they incentivized

      • oneeyedpigeon 9 hours ago
        > we are to leverage AI in everything we do

        Sounds like the extremely well-repeated mistake of treating everything like a nail because hammers are being hyped up this month.

      • shuckles 17 hours ago
        It took me a while to realize you were using "$WORK" as a shell variable, not as a reference to Slack's stock ticker prior to its acquisition by $CRM.
        • re-thc 13 hours ago
          You never know. Could be both.
      • palmotea 13 hours ago
        > FWIW, our CEO has declared us to be AI-first, so we are to leverage AI in everything we do, which I think is misguided. But you can bet they will be reviewing AI usage metrics, and lower won't be better at $WORK.

        I've taken some pleasure in having GitHub copilot review whitespace normalization PRs. It says it can't do it, but I hope I get my points anyway.

      • ProllyInfamous 5 hours ago
        This is a great response, even for a blue collar worker understanding none of its complexities (I have no code creation abilities, whatsoever — I can adjust parameters, and that's about it... I am a hardware guy).

        My layperson anecdote about LLM coding is that using Perplexity is the first time I've ever had the confidence (artificial, or not) to actually try to accomplish something novel with software/coding. Without judgments, the LLM patiently attempts to turn my meat-speak into code. It helps explain [very simple stuff I can assure you!] what its language requires for a hardware result to occur, without chastising you. [Raspberry Pi / Arduino e.g.]

        LLMs have encouraged me to explore the inner workings of more technologies, software and not. I finally have the knowledgeable apprentice to help me with microcontroller implementations, albeit slowly and perhaps somewhat dangerously [1].

        ----

        Having spent the majority of my professional life troubleshooting hardware problems, I often benefit from rubber ducky troubleshooting [0], going back to the basics when something complicated isn't working. LLMs have been very helpful in this roleplay (e.g. garage door openers, thermostat advanced configurations, pin-outs, washing machine not working, etc.).

        [0] <https://en.wikipedia.org/wiki/Rubber_duck_debugging>

        [1] "He knows just enough to be dangerous" —proverbial electricians

        ¢¢

        • giardini 29 minutes ago
          As a software guy going way back, this post may be the death knell of software development as I've known it. I have never seen a good hardware guy who could code his way out of a paper bag. If hardware guys succeed in developing software with LLM coding, then it's time to abandon ship (reaches for life preserver pension).
          • ProllyInfamous 15 minutes ago
            I'm'bout'ta flash your PLC Ladder Logic firmwares, friend.

            j/k don't worry I'm an idiot — but somebody else WILL.

        • mrwrong 4 hours ago
          What really comes through in this description is a fear of judgement from other people, which I think is extremely relatable for anyone who's ever posted a question on Stack Overflow. I don't think it's a coincidence that the popularity of these tools coincides with the general atmosphere of low trust and social cohesion in the US and other societies over the last decade.
          • ProllyInfamous 17 minutes ago
            On her deathbed, years ago, my beloved mother lamented that she often felt mentally bullied by her three brilliant sons [0], even decades into our adulthoods; embarrassed, she would censor her own knowledge-seeking from the people she trusted most.

            She didn't live long enough to use ChatGPT [1] (she would have been flabbergasted at its ability to understand people/situations), but even with her "normal" intelligence she would have been a master to its perceptions/trainings.

            [0] "Beyond just teasing."

            [1] We did briefly wordplay with GPT-2 right before she died via thisworddoesnotexist.com exchanges, but nothing conversive.

            ----

            About a year later (~2023), my dentist friend experienced a sudden life change (wife sick @35); in his grieving/soul-seeking, I recommended that he share some of his mental chaos with an LLM, even just if to role-play as his sick family member. Dr. Friend later thanked me for recommending the resource — particularly "the entire lack of any judgments" — and shared his own brilliant discoveries using creative prompt structuring.

            ----

            Particularly as a big dude, it's nice to not always have to be the tough guy, to even admit weakness. Unfortunately I think the overall societal effect of generative AI is going to be an increase in anti-social behaviour, but it's nice to have a friendly apprentice that knows something about almost everything... any time... any reason.

      • chickensong 1 day ago
        > it wasn't managing goroutines or waitgroups well, and the whole thing was a buggy mess and could not be gracefully killed

        First pass on a greenfield project is often like that, for humans too I suppose. Once the MVP is up, refactoring with Opus ultrathink to look for areas of weakness and improvement usually tightens things up.

        Then as you pointed out, once you have solid scaffolding, examples, etc, things keep improving. I feel like Claude has a pretty strong bias for following existing patterns in the project.

      • sbuttgereit 18 hours ago
        I think your experience matches well with mine. There are certain workloads and use cases where these tools really do well and legitimately save time; these tend to be more concise, well-defined tasks with good context to draw from. Give them the wrong task and the results can be pretty bad and a time sink.

        I think the difficulty is exercising the judgement to know where that productive boundary sits. That's more difficult than it sounds because we're not used to adjudicating machine reasoning which can appear human-like ... So we tend to treat it like a human, which is, of course, an error.

        • TheOtherHobbes 7 hours ago
          I find ChatGPT excellent for writing scripts in obscure scripting languages - AppleScript, Adobe Cloud products, IntelliJ plugin development, LibreOffice, and others.

          All of these have a non-trivial learning curve and/or poor and patchy docs.

          I could master all of these the hard way, but it would be a huge and not very productive time sink. It's much easier to tell a machine what I want and iterate with error reports if it doesn't solve my problem immediately.

          So is this AGI? It's not self-training. But it is smart enough to search docs and examples and pull them together into code that solves a problem. It clearly "knows" far more than I do in this particular domain, and works much faster.

          So I am very clearly getting real value from it. And there's a multiplier effect, because it's now possible to imagine automating processes that weren't possible before, and glue together custom franken-workflows that link supposedly incompatible systems and save huge amounts of time.

        • returnInfinity 14 hours ago
          My thoughts as well: good at some things, terrible for some things, and on the latter you will lose time.

          Some things are best written by yourself.

          And this is with the mighty Claude Opus 4.5.

      • blitzar 9 hours ago
        The CEO obviously wants one of those trophies that chatgpt gives out.
    • kscarlet 1 day ago
      The line right after this is much worse:

      > Coding performed by AI is at a world-class level, something that wasn’t so just a year ago.

      Wow, finance people certainly don't understand programming.

      • mcv 23 hours ago
        World class? Then what am I? I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea. I am impressed by its ability to generate and analyse code, but its code almost never works the first time, unless it's trivial boilerplate stuff, and its analysis is wrong half the time.

        It's very useful if you have the knowledge and experience to tell when it's wrong. That is the absolutely vital skill to work with these systems. In the right circumstances, they can work miracles in a very short time. But if they're wrong, they can easily waste hours or more following the wrong track.

        It's fast, it's very well-read, and it's sometimes correct. That's my analysis of it.

        • malfist 23 hours ago
          Is this why AI is telling us our every idea is brilliant and great? Because their code doesn't stand up to what we can do?
          • AmericanOP 18 hours ago
            Whichever PM sold glazing as a core feature should be ejected into space.
        • RHSman2 4 hours ago
          Because people who couldn't code before, but now can, have zero understanding of the ‘path to production-quality code’.

          Of course it is mind blowing for them.

        • formerly_proven 23 hours ago
          Copilot is easily the worst (and probably slowest) coding agent. SOTA and Copilot don't even inhabit similar planes of existence.
          • RobinL 12 hours ago
            I've found Opus 4.5 in copilot to be very impressive. Better than codex CLI in my experience. I agree Copilot definitely used to be absolutely awful.
            • whimsicalism 3 hours ago
              cursor is better than both, i wish this weren’t the case tbph
        • skydhash 18 hours ago
          > I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea

          This sentence and the rest of the post read like horoscope advice: "It can be good if you use it well, it may be bad if you don't." It's pretty much the same as saying a coin may land on heads or tails.

          • hatthew 17 hours ago
            saying "a coin may land on heads or tails" is useful when other people are saying "we will soon have coins that always land on heads"
            • bdangubic 7 hours ago
              this is doable, you just have to rig the coin
      • sshadmand 1 hour ago
        Finance people are funny. They are so wrong when you hear their logic and references, but I also realized it doesn't matter. It is trends they try to predict, fuzzy directional signals, not facts of the moment.
      • selectodude 23 hours ago
        They don’t. I’ve gone from rickety, slow Excel sheets and maybe some Python functions to automate the small things I could figure out, to building out entire data pipelines. It’s incredible how much more efficient we’ve gotten.
      • clickety_clack 23 hours ago
        Ask ChatGPT “is AI programming world class?”
      • venturecruelty 17 hours ago
        Of course not, why would they? They understand making money, and what makes money right now? What would be antithetical to making money? Why might we be doing one thing and not another? The lines are bright and red and flashing.
    • throwaway2037 17 hours ago
      I completely agree. This guy is way outside his area of expertise. For those unaware, Howard Marks is a legendary investment manager with a decades-long, impressive track record, and these "insights" letters are themselves legendary in the money management business. Personally, I would say his wisdom is one notch below Warren Buffett's. I am sure he is regularly asked (badgered?) by investors what he thinks about the current state and future of AI (LLMs) and how it will impact his investment portfolio. The audience of this letter is investors (real and potential), as well as other investment managers.
      • throwaway2037 17 hours ago
        Follow-up: This letter feels like a "jump the shark" moment.

        Ref: https://blog.codinghorror.com/has-joel-spolsky-jumped-the-sh...

        • dmurvihill 1 hour ago
          It's funny, because this decision by Joel in 2006 prefigures TypeScript six years later. VBA was a terrible bet for a target language and Joel was crazy to think his little company could sustain a language ecosystem, but Microsoft had the same idea and nailed it.
        • urxvtcd 11 hours ago
          First time reading this. It's actually funny how disliking exceptions seemed crazy then but it's pretty normal now. And writing a new programming language for a certain product, well, it could turn out to be pretty cool, right? It's how we get all those Elms and so on.
          • alterom 6 hours ago
            That's how we got Rust.
    • whoknowsidont 1 day ago
      It's not. And if your team is doing this you're not "advanced."

      Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.

      Which is great! But it's not a +1 for AI, it's a -1 for them.

      • NewsaHackO 1 day ago
        Part of the issue is that I think you are underestimating the number of people not doing "advanced" programming. If it's around ~80-90%, then that's a lot of +1s for AI
        • friendzis 11 hours ago
          Wrong. 80% of code not being advanced is quite strictly not the same as 80% of people not doing advanced programming.
          • NewsaHackO 4 hours ago
            I completely understand the difference, and I am standing by my statement that 80-90% of programmers are not doing advanced programming at all.
        • whoknowsidont 22 hours ago
          Why do you feel like I'm underestimating the # of people not doing advanced programming?
          • NewsaHackO 22 hours ago
            Theoretically, if AI can do 80-90% of programming jobs (the ones not in the "advanced" group), that would be an unequivocal +1 for AI.
            • whoknowsidont 21 hours ago
              I think you're crossing some threads here.
              • NewsaHackO 21 hours ago
                "It's not. And if your team is doing this you're not "advanced." Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.

                Which is great! But it's not a +1 for AI, it's a -1 for them.

                " Is you, right?

                • whoknowsidont 20 hours ago
                  Yes. You can see my name on the post.
                  • NewsaHackO 20 hours ago
                    OK, just making sure. Have a blessed day :)
      • 9rx 20 hours ago
        It's true for me. I type in what I want and then the AI system (compiler) generates the code.

        Doesn't everyone work that way?

        • zahlman 19 hours ago
          Describing a compiler as "AI" is certainly a take.
          • conradev 12 hours ago
            I used to hand roll the assembly, but now I delegate that work to my agent, clang. I occasionally override clang or give it hints, but it usually gets it right most of the time.

            clang doesn't "understand" the hints because it doesn't "understand" anything, but it knows what to do with them! Just like codex.

            • lm28469 11 hours ago
              Given an input, clang will always give the same output; not quite the same for LLMs. Also, nobody ever claimed compilers were intelligent or that they "understood" things.
              • conradev 27 minutes ago
                The determinism depends on the architecture of the model!

                Symbolica is working on more deterministic/quicker models: https://www.symbolica.ai

              • 9rx 6 hours ago
                An LLM will also give the same output for the same input when the temperature is zero[1]. It only becomes non-deterministic if you choose for it to be. Which is the same for a C compiler. You can choose to add as many random conditionals as you so please.

                But there is nothing about a compiler that implies determinism. A compiler is defined by function (taking input on how you want something to work and outputting code), not design. Implementation details are irrelevant. If you use a neural network to compile C source into machine code instead of more traditional approaches, it most definitely remains a compiler. The function is unchanged.

                [1] "Faulty" hardware found in the real world can sometimes break this assumption. But a C compiler running on faulty hardware can change the assumption too.
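
                To make the temperature point concrete, here's a toy sketch (made-up logits and function names, not any real model's API): at temperature 0 the next token is a pure argmax, so the same input always yields the same choice; randomness only enters once you raise the temperature.

                  // Toy next-token choice. At temperature 0 it is a pure argmax, so the
                  // same logits always select the same token; sampling (and thus
                  // non-determinism) only enters when temperature > 0. Illustrative only.
                  package main

                  import (
                      "fmt"
                      "math"
                      "math/rand"
                  )

                  func pickToken(logits []float64, temperature float64) int {
                      if temperature == 0 {
                          best := 0
                          for i, l := range logits {
                              if l > logits[best] {
                                  best = i
                              }
                          }
                          return best // deterministic path: greedy argmax
                      }
                      // Softmax with temperature, then sample: the stochastic path.
                      weights := make([]float64, len(logits))
                      var sum float64
                      for i, l := range logits {
                          weights[i] = math.Exp(l / temperature)
                          sum += weights[i]
                      }
                      r := rand.Float64() * sum
                      for i, w := range weights {
                          r -= w
                          if r <= 0 {
                              return i
                          }
                      }
                      return len(logits) - 1
                  }

                  func main() {
                      logits := []float64{1.2, 3.4, 0.7}
                      fmt.Println(pickToken(logits, 0))   // always index 1, run after run
                      fmt.Println(pickToken(logits, 0.8)) // depends on the draw
                  }

                The determinism (or lack of it) lives in the decoding rule and the surrounding serving stack, not in the weights themselves.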

                • whimsicalism 2 hours ago
                  currently LLMs from major providers are not deterministic with temp=0; there are startups focusing on this issue (among others): https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
                • lm28469 6 hours ago
                  You can test that yourself in 5 seconds and see that even at a temp of 0 you never get the same output
                  • 9rx 5 hours ago
                    Works perfectly fine for me.

                    Did you do that stupid HN thing where you failed to read the entire comment and then went off to try it on faulty hardware?

                    • lm28469 5 hours ago
                      No I did that HN thing where I went to an LLM, set temp to 0, pasted your comments in and got widely different outputs every single time I did so
              • bewo001 8 hours ago
                Hm, some things compilers do during optimization would have been labelled AI during the last AI bubble.
          • agumonkey 19 hours ago
            it's something that crossed my mind too honestly. natural-language-to-code translation.
            • skydhash 18 hours ago
              You can also do search query to code translation by using GitHub or StackOverflow.
          • parliament32 18 hours ago
            Compilers are probably closer to "intelligence" than LLMs.
        • rfrey 17 hours ago
          I understand what you're getting at, but compilers are deterministic. AI isn't just another tool, or just a higher level of program specification.
          • 7952 10 hours ago
            This is all a bit above my head. But the effects a compiler has on the computer are certainly not deterministic. It might do what you want, or it might hit a weird driver bug or set off a false positive in some security software. And the more complex stacks get, the more this happens.
          • dust42 11 hours ago
            And so is "AI". Unless you add randomness AKA raise the temperature.
            • rfrey 3 hours ago
              If you and I put the same input into GCC, we will get the same output (counting flags and config as input). The same is not true for an LLM.
              • 9rx 1 hour ago
                > The same is not true for an LLM.

                Incorrect. LLMs are designed to be deterministic (when temperature=0). Only if you choose for them to be non-deterministic are they so. Which is no different in the case of GCC. You can add all kinds of random conditionals if you had some reason to want to make it non-deterministic. You never would, but you could.

                There are some known flaws in GPUs that can break that assumption in the real world, but in theory (and where you have working, deterministic hardware) LLMs are absolutely deterministic. GCC also stops being deterministic when the hardware breaks down. A cosmic bit flip is all it takes to completely defy your assertion.

          • 9rx 16 hours ago
            > but compilers are deterministic.

            Are they, though? Obviously they are in some cases, but it has always been held that a natural language compiler is theoretically possible. But a natural language compiler cannot be deterministic, fundamentally. It is quite apparent that determinism is not what makes a compiler.

            In fact, the dictionary defines compiler as: "a program that converts instructions into a machine-code or lower-level form so that they can be read and executed by a computer." Most everyone agrees that it is about function, not design.

            > AI isn't just another tool

            AI is not a tool, that is true. I don't know, maybe you stopped reading too soon, but it said "AI systems". Nobody was ever talking about AI. If you want to participate in the discussions actually taking place, not just the one you imagined in your head, what kind of system isn't just another tool?

            • rfrey 3 hours ago
              > Nobody was ever talking about AI. If you want to participate in the discussions actually taking place, not just the one you imagined in your head

              Wow. No, I actually don't want to participate in a discussion where the default is random hostility and immediate personal attack. Sheesh.

              • 9rx 2 hours ago
                You don't want to participate, so you continue to participate? Uhh... Thanks for clearing up that you are not coming here from a place of logic, just bad faith emotionalism. We almost were starting to think you had something of value to add.
      • XenophileJKO 18 hours ago
        I'm beginning to think most "advanced" programmers are just poor communicators.

        It really comes mostly down to being able to concisely and eloquently define what you want done. It also is important to understand what the default tendencies and biases of the model are so you know where to lean in a little. Occasionally you need to provide reference material.

        The capabilities have grown dramatically in the last 6 months.

        I have an advantage because I have been building LLM powered products so I know mechanically what they are and are not good with. For example.. want it to wire up an API with 250+ endpoints with a harness? You better create (or have it create) a way to cluster and audit coverage.

        Generally the failures I often hear about with "advanced" programmers are things like algorithmic complexity, concurrency, etc., and these models can do this stuff given the right motivation/context. You just need to understand what "assumptions" the model is making and know when you need to be explicit.

        Actually, one thing most people don't understand: they try to say "Do (A), don't do (B)", etc., defining granular behavior, which is fundamentally a brittle way to interact with the models.

        Far more effective is defining the persona and motivation for the agent. This creates the baseline behavior profile for the model in that context.

        Not "don't make race conditions", more like "You value and appreciate elegant concurrent code."

        • tjr 14 hours ago
          Some of the best programmers I know are very good at writing and/or speaking and teaching. I struggle to believe that “advanced programmers” are poor communicators.
          • XenophileJKO 13 hours ago
            Genuine reflection question: are these excellent communicators good at using LLMs to write code?

            My supposition was: many programmers who say their programming domain was too advanced and LLMs didn't work for their kind of code are simply bad at describing concisely what is required.

            • tjr 13 hours ago
              Most good programmers that I know personally work, as do I, in aerospace, where LLMs have not been adopted as quickly as some other fields, so I honestly couldn’t say.
        • interstice 15 hours ago
          > I'm beginning to think most "advanced" programmers are just poor communicators.

          This is an interesting take, considering that programmers are experts at translating what someone has asked for (however vaguely) into code.

          I think what you're referring to is the transition from 'write code that does X', which is very concrete, to 'trick an AI into writing the code I would have written, only faster', which feels like work that's somewhere between an art form and asking a magic box to fix things over and over again until it stops being broken (in obvious ways, at least).

          Understandably people that prefer engineered solutions do not like the idea of working this way very much.

          • XenophileJKO 13 hours ago
            When you oversee a team technically as a tech lead or an architect, you need communication skills.

            1. Based on how the engineer just responded to my comment, what is the understanding gap?

            2. How do I describe what I want in a concise and intuitive way?

            3. How do I tell an engineer what is important in this system and what are the constraints?

            4. What assumptions will an engineer likely make that will cause me to have to make a lot of corrections?

            Etc. This is all human-to-human.

            These skills are all transferable to working with an LLM.

            So I guess if you are not used to technical leadership, you may not have used those skills as much.

            • interstice 1 hour ago
              The issue here is that LLMs are not human, so having a human mental model of how to communicate doesn't really work. If I tell my engineer to do X, I know all kinds of things about them, like their coding style, strengths and weaknesses, and that they have some familiarity with the code they are working with and won't bring the entirety of Stack Overflow answers to the context we are working in. LLMs are nothing like this, even when working with large amounts of context; they fail in extremely unpredictable ways from one prompt to the next. If you disagree I'd be interested in what stack or prompting you are using that avoids this.
        • mjr00 16 hours ago
          > It really comes mostly down to being able to concisely and eloquently define what you want done.

          We had a method for this before LLMs; it was called "Haskell".

        • XenophileJKO 18 hours ago
          One added note. This rigidness of instruction is a real problem that the models themselves will magnify, and you need to be aware of it. For example, if you ask a Claude-family model to write a sub-agent for you in Claude Code, 99% of the time it will define a rigid process with steps and conditions instead of creating a persona with motivations (and, if you need it, suggested courses of action).
    • projektfu 17 hours ago
      I have heard many software developers confidently tell me "pilots don't really fly the planes anymore" and, well, that's patently false, but jetliners' autopilots do handle much of the busy work during cruise, and sometimes during climb-out and approach. And they can sometimes land themselves, but not efficiently enough for a busy airport.
      • coffeebeqn 9 hours ago
        Autopilot based on a LLM would guarantee I’d never fly again
    • its_ethan 20 hours ago
      Is it not sort of implied by the stats later: "Revenues from Claude Code, a program for coding that Anthropic introduced earlier this year, already are said to be running at an annual rate of $1 billion. Revenues for the other leader, Cursor, were $1 million in 2023 and $100 million in 2024, and they, too, are expected to reach $1 billion this year."

      Surely that revenue is coming from people using the services to generate code? Right?

      • Windchaser 19 hours ago
        A back-of-the-napkin estimate of software developer salaries:

        There are some ~1.5 million software developers in the US per BLS data, or ~4 million using a broader definition. Median salary is $120-140k. Let's say $120k to be conservative.

        This puts total software developer salaries at $180 billion.

        So, that puts $1 billion in Claude revenue in perspective; only about 0.5% of software developer salaries. Even if it only improved productivity 5%, it'd be paying for itself handily - which means we can't take the $1 billion in revenues to indicate that it's providing a big boost in productivity.

        • dmurvihill 12 hours ago
          If it makes a 5% improvement, that would make it a $9 billion per year industry. What’s our projected capex for AI projects over the next five years again?
        • lovich 14 hours ago
          You are ignoring costs

          The AI companies are currently lighting dollars on fire if you pay them a few pennies to do so.

          The AI models are actually accomplishing something, but the unit economics aren't there to support it being profitable

      • browningstreet 19 hours ago
        Generating code isn’t the same as running it, running it in production, and living with it over time.

        In time I’m sure it will, but it’s still early days, land grab time.

      • halfcat 19 hours ago
        > Surely that revenue is coming from people using the services to generate code? Right?

        Yes. And all code is tech debt. Now generated faster than ever.

        • jv22222 12 hours ago
          Hmm, maybe that’s a bit reductive? I’ve used Claude to help with some really great refactoring sessions tbh.
    • brulard 1 day ago
      I'm on a team like that, and I see it happening in more and more companies around us. Maybe "many" is doing the heavy lifting in the quoted text, but it is definitely happening.
    • loloquwowndueo 1 day ago
      Probably their googly-eyed vibe coder friend told them this and they just parroted it.
      • RajT88 1 day ago
        Right. The author is non-technical and said so up front.
    • interstice 1 day ago
      If true I’d like to know who is doing this so I can have exactly nothing to do with them.
    • 20after4 22 hours ago
      I've had Claude Code compose complex AWS infrastructure (using Pulumi IaC) that mostly works from a one-shot prompt.
    • PurpleRamen 9 hours ago
      Yes and no. There is the infamous quote from Microsoft about 30%(?) of their code being written by AI now. And technically, it's probably not such a wild claim in certain areas. AI is very good at barfing up common popular patterns, and companies have a huge amount of patternized software, like UIs, tests, documentation or marketing fluff. So it's quite easy to "outsource" such grunt work if the AI is at the necessary level.

      But to say that they don't write any code at all is really a stretch. Maybe I'm not good enough at AI-assisted and vibe coding, but code quality always seems to drop off hard the moment one steps a bit outside the common patterns.

      • grumbelbart2 9 hours ago
        I found LLMs to be very good at writing (unit) tests for my code, for example. They just don't get tired of iterating over all the corner cases. Those tests easily dwarf the actual implementation in LoC. Not sure if that would count towards the 30%, for example.
    • no_wizard 18 hours ago
      Here's the lede they buried:

      >The key is to not be one of the investors whose wealth is destroyed in the process of bringing on progress.

      They are a VC group. Financial folks. They are working largely with other people's money. They simply need not hold the bag to be successful.

      Of course they don't care if it's a bubble or not; at the end of the day, they only have to make sure they aren't holding the bag when it all implodes.

      • venturecruelty 17 hours ago
        They have "capital" in their domain name. Of course they're going to be, well... on the side of capital. This shouldn't be hotly debated... "Mining company says mine they own is full of ore and totally not out of ore."
    • whimsicalism 13 hours ago
      Wow, reading these comments, I feel like I've entered a parallel reality. My job involves implementing research ML and I use it literally all the time; it's very fascinating to see how many have such strong negative reactions. As long as you are good at reviewing code, spec-ing carefully, and making atomic changes, why would you not be using this basically all the time?
      • kkapelon 58 minutes ago
        > As long as you are good at reviewing code, spec-ing carefully, and making atomic changes, why would you not be using this basically all the time?

        This implies that you are an expert/seasoned programmer. And not everybody is an expert in this industry (especially the code-reviewing part).

        • whimsicalism 40 minutes ago
          I thought this was a forum for seasoned engineers? But yes, I agree that this widens the skill gap and makes the on-ramp steeper.
      • qsort 10 hours ago
        It's one of the failure modes of online forums. Everyone piles on and you get an unrealistic opinion sample. I'm not exactly trying to shove AI into everything; I'm wary of overhyping and mostly conservative in my technology choices. Still, I get a lot out of LLMs and agents for coding tasks.
        • whimsicalism 4 hours ago
          i have trouble understanding how a forum of supposedly serious coders can be so detached from reality, but I do know that this is one of HN’s pathologies
          • qsort 4 hours ago
            I think it's more of a thread-bound dynamic rather than HN as a whole. If the thread starts positive you get "AGI tomorrow", if the thread starts negative you get "stochastic parrot".

            But I see what you mean, there have been at least a few insane comment sections for sure.

      • LtWorf 12 hours ago
        Because carefully spec-ing to the level an LLM needs, and ultra-carefully checking the output, is easily slower and more tiring than just doing it yourself.

        Kinda like having a child "help" you cook basically.

        But for the child you do it because they actually learn. LLMs do not learn in that sense.

        • whimsicalism 4 hours ago
          not at all true for the latest generation of models in my experience. they are overly verbose but except for the simplest simplest changes it is faster to ask first
          • LtWorf 4 hours ago
            For the simplest changes you have to first review the code fully, ask for the change, do a new full review and so on.
            • whimsicalism 3 hours ago
              no, you just have to ask for the change - wait ~minute, review. and if it’s a small change, review goes fast. typically i’ll have a zellij/tmux with lazygit one pane, a cli agent (cursor-agent or codex) in the other, and a pop up vim pane. i can see the changes in lazygit as they’re made and review immediately and commit
    • agumonkey 19 hours ago
      Seen it first hand: scan your codebase, plan an extension or rewrite or both, iterate with some hand-holding, and off you go. And it was not even an advanced developer driving the feature (which is concerning).
    • Illniyar 17 hours ago
      I think he might be misrepresenting it a bit, but from what I've seen every software company I know of heavily uses agentic AI to create code (except some highly regulated industries).

      It has become a standard tool: in the same way that most developers code with an IDE, most developers use agentic AI to start a task (if not to finish it).

    • stretchwithme 16 hours ago
      It's often true. But not when it's easier to code than to explain.
    • thenaturalist 11 hours ago
      No, but there are huuuuuge incentives for the people publishing such statements.
    • qsort 23 hours ago
      Everyone is doing this extreme pearl clutching around the specific wording. Yeah, it's not 100% accurate for many reasons, but the broader point was about employment effects, it doesn't need to completely replace every single developer to have a sizable impact. Sure, it's not there yet and it's not particularly close, but can you be certain that it will never be there?

      Error bars, folks, use them.

    • AndrewKemendo 12 hours ago
      I just did a review and 16% of our committed production code was generated by an LLM. Almost 80% of our code comments are LLM-generated.

      This is mission critical robotics software

      • Zafira 10 hours ago
        What is the approach here? LLM generated; human validated?
    • johnfn 1 day ago
      I only write around 5% of the code I ship, maybe less. For some reason when I make this statement a lot of people sweep in to tell me I am an idiot or lying, but I really have no reason to lie (and I don't think I'm an idiot!). I have 10+ years of experience as an SWE, I work at a Series C startup in SF, and we do XXMM ARR. I do thoroughly audit all the code that AI writes, and often go through multiple iterations, so it's a bit of a more complex picture, but if you were to simply say "a developer is not writing the code", it would be an accurate statement.

      Though I do think "advanced software team" is kind of an absurd phrase, and I don't think there is any correlation with how "advanced" the software you build is and how much you need AI. In fact, there's probably an anti-correlation: I think that I get such great use out of AI primarily because we don't need to write particularly difficult code, but we do need to write a lot of it. I spend a lot of time in React, which AI is very well-suited to.

      EDIT: I'd love to hear from people who disagree with me or think I am off-base somehow about which particular part of my comment (or follow-up comment https://news.ycombinator.com/item?id=46222640) seems wrong. I'm particularly curious why when I say I use Rust and code faster everyone is fine with that, but saying that I use AI and code faster is an extremely contentious statement.

      • MontyCarloHall 1 day ago
        >I only write around 5% of the code I ship, maybe less.

        >I do thoroughly audit all the code that AI writes, and often go through multiple iterations

        Does this actually save you time versus writing most of the code yourself? In general, it's a lot harder to read and grok code than to write it [0, 1, 2, 3]. For me, one of the biggest skills for using AI to efficiently write code is a) chunking the task into increments that are both small enough for me to easily grok the AI-generated code and also aligned enough to the AI's training data for its output to be ~100% correct, b) correctly predicting ahead of time whether reviewing/correcting the output for each increment will take longer than just doing it myself, and c) ensuring that the overhead of a) and b) doesn't exceed just doing it myself.

        [0] https://mattrickard.com/its-hard-to-read-code-than-write-it

        [1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...

        [2] https://trishagee.com/presentations/reading_code/

        [3] https://idiallo.com/blog/writing-code-is-easy-reading-is-har...

        • johnfn 1 day ago
          Yes, I save an incredible amount of time. I suspect I’m likely 5-10x more productive, though it depends exactly what I’m working on. Most of the issues that you cite can be solved, though it requires you to rewire the programming part of your brain to work with this new paradigm.

            To be honest, I don’t really have a problem with chunking my tasks. The reason I don’t is because I don’t really think about it that way. I care a lot more about chunks an AI could reasonably validate. Instead of thinking “what’s the biggest chunk I could reasonably ask AI to solve” I think “what’s the biggest piece I could ask an AI to do that I can write a script to easily validate once it’s done?” Allowing the AI to validate its own work means you never have to worry about chunking again. (OK, that's a slight hyperbole, but the validation is most of my concern, and a secondary concern is that I try not to let it go for more than 1000 lines.)

          For instance, take the example of an AI rewriting an API call to support a new db library you are migrating to. In this case, it’s easy to write a test case for the AI. Just run a bunch of cURLs on the existing endpoint that exercise the existing behavior (surely you already have these because you’re working in a code base that’s well tested, right? right?!?), and then make a script that verifies that the result of those cURLs has not changed. Now, instruct the AI to ensure it runs that script and doesn’t stop until the results are character for character identical. That will almost always get you something working.
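
            Roughly the kind of check script I mean, as a minimal sketch (the endpoints and golden-file paths here are made up): capture the responses once before the migration, then fail with a non-zero exit if anything differs.

              // golden_check.go: compare live API responses against golden files
              // captured before the migration. Endpoints and paths are hypothetical.
              package main

              import (
                  "fmt"
                  "io"
                  "net/http"
                  "os"
              )

              // URL -> golden file holding the pre-migration response, byte for byte.
              var endpoints = map[string]string{
                  "http://localhost:8080/api/users/42":    "golden/users_42.json",
                  "http://localhost:8080/api/orders?q=1":  "golden/orders_q1.json",
              }

              func main() {
                  failed := false
                  for url, goldenPath := range endpoints {
                      resp, err := http.Get(url)
                      if err != nil {
                          fmt.Println("FAIL", url, err)
                          failed = true
                          continue
                      }
                      body, _ := io.ReadAll(resp.Body)
                      resp.Body.Close()

                      golden, err := os.ReadFile(goldenPath)
                      if err != nil {
                          fmt.Println("FAIL", url, "missing golden file:", err)
                          failed = true
                          continue
                      }
                      if string(body) != string(golden) {
                          fmt.Println("FAIL", url, "response differs from", goldenPath)
                          failed = true
                      } else {
                          fmt.Println("OK  ", url)
                      }
                  }
                  if failed {
                      os.Exit(1) // non-zero exit so the agent knows it isn't done yet
                  }
              }

            The instruction to the agent then boils down to: keep iterating until this exits 0.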

          Obviously the tactics change based on what you are working on. In frontend code, for example, I use a lot of Playwright. You get the idea.

            As for code legibility, I tend to solve that by telling the AI to focus particularly on clean interfaces, and being OK with the internals of those interfaces being vibecoded and a little messy, so long as the external interface is crisp and well-tested. This is another very long discussion, and for the non-vibe-code-pilled (sorry), it probably sounds insane, and I feel it's easy to lose one's audience on such a polarizing topic, so I'll keep it brief. One real key thing to understand about AI is that it makes the cost of writing unit tests and e2e tests drop significantly, and I find this (along with remaining disciplined and having crisp interfaces) to be an excellent tool in the fight against the increased code complexity that AI tools bring. So, in short, I deal with legibility by having a few really, really clean interfaces/APIs that are extremely readable, and then testing them like crazy.

          EDIT

          There is a dead comment that I can't respond to that claims that I am not a reliable narrator because I have no A/B test. Behold, though: I am the AI-hater's nightmare, because I do have a good A/B test! I have a website that sees a decent amount of traffic (https://chipscompo.com/). Over the last few years, I have tried a few times to modernize and redesign the website, but these attempts have always failed because the website is pretty big (~50k loc) and I haven't been able to fit it in a single week of PTO.

            This Thanksgiving, I took another crack at it with Claude Code, and not only did I finish an entire redesign (basically touched every line of frontend code), but I also got in a bunch of other new features, like a forgot-password feature and a suite of moderation tools. I then IaC'd the whole thing with Terraform, something I only dreamed about doing before AI! Then I bumped React a few major versions, bumped TS about 10 years, etc., all with the help of AI. The new site is live and everyone seems to like it (well, they haven't left yet...).

            If anything, this is actually an unfair comparison, because it was more work for the AI than it was for me when I tried a few years ago, because my dependencies became more and more out of date as the years went on! This was actually a pain for the AI, but I eventually managed to solve it.

          • no_wizard 18 hours ago
            Use case mapping matters. I use AI tools at work (have for a few years now, first Copilot from GitHub, now I use Gemini and Claude tools primarily). When the use case maps well, it is great. You can typically assume anything with a large corpus of fairly standard problems will map well in a popular language. JavaScript, HTML, CSS, these have huge training datasets from open source alone.

            The combination of which, deep training dataset + maps well to how AI "understands" code, it can be a real enabler. I've done it myself. All I've done with some projects is write tests, point Claude at the tests and ask it to write code till those tests pass, then audit said code, make adjustments as required, and ship.

            That has worked well and sped up development of straightforward (sometimes I'd argue trivial) situations.

            Where it falls down is complex problem sets, major refactors that cross-cut multiple interdependent pieces of code, less popular languages where it's less robust (we have a particular set of business logic in Rust due to its sensitive nature and need for speed, and it does not do a great job with that), and a host of other areas where I have hit limitations with it.

            Granted, I work in a fairly specialized way and deal with a lot of business logic / rules rather than boilerplate CRUD, but I have hit walls on things like massive refactors in large codebases (50K is small to me, for reference)

          • n8cpdx 23 hours ago
            Did you do 5-10 years of work in the year after you adopted AI? If you started after AI came into existence 3 years ago (/s) you should have achieved 30 years of work output - a whole career of work.
            • johnfn 23 hours ago
              I think AI only "got good" around the release of Claude Code + Opus 4.0, which was around March of this year. And it's not like I sit down and code 8 hours a day 5 days a week. I put on my pants one leg at a time -- there's a lot of other inefficiencies in the process, like meetings, alignment, etc, etc.

              But yes, I do think that the efficiency gain, purely in the domain of coding, is around 5x, which is why I was able to entirely redesign my website in a week. When working on personal projects I don't need to worry about stakeholders at all.

              • jimbokun 19 hours ago
                Ah, I was going to say it’s impossible to get 5x increase in productivity, because writing code takes up less than 20% of a developer’s time. But I can understand that kind of improvement on just the coding part.

                The trick now is deciding what code to write quickly enough to keep Claude and friends busy.

                • XenophileJKO 17 hours ago
                  I will say, for example, that now at work, if I see a broken window I have an AI fix it. This is a recent habit for me, so I can't say it will stick, but I'm fixing issues in many more adjacent code bases than I normally would.

                  It used to be "hey, I found an issue..."; now it is "here is a PR to fix an issue I saw". The net effort to me is only slightly more. I usually have to identify the problem, and that is like 90% of fixing it.

                  Add to the fact that now I can have an AI take a first pass at identifying the problem with probably an 80%+ success rate.

              • Esophagus4 19 hours ago
                I'm not sure why, but it seems like your comment really brought out the ire in a few commenters here to discredit your experience.

                Is it ego? Defensiveness? AI anxiety? A need to be the HN contrarian against a highly visible technology innovation?

                I don't think I understand... I haven't seen the opposite view (AI wastes a ton of time) get hammered like that.

                At the very least, it certainly makes for an acidic comments section.

                • n8cpdx 19 hours ago
                  It’s because people turn off their critical thinking and make outrageous claims.

                  That’s why when folks say that AI has made them 10x more productive, I ask if they did 10 years worth of work in the last year. If you cannot make that claim, you were lying when you said it made you 10x more productive. Or at least needed a big asterisk.

                  If AI makes you 10x more productive in a tiny portion of your job, then it did not make you 10x more productive.

                  Meanwhile, the people claiming 10x productivity are taken at face value by people who don’t know any better, and we end up in an insane hype cycle that has obvious externalities. Things like management telling people that they must use AI or else. Things like developer tooling making zero progress on anything that isn’t an AI feature for the last two years. Things like RAM becoming unaffordable because Silicon Valley thinks they are a step away from inventing god. And I haven’t scratched the surface.

                  • johnfn 19 hours ago
                    But I really did do around 4 to 5 weeks of work in a single week on my personal site. At this point you just seem to be denying my own reality.
                    • n8cpdx 17 hours ago
                      If you read my comments, you’ll see that I did no such thing. I asked if you did 5-10 years of work in the last year (or 5-10 weeks of work in the last week) and didn’t get a response until you accused me of denying your reality.

                      You’ll note the pattern of the claims getting narrower and narrower as people have to defend them and think critically about them (5-10x productivity -> 4-5x productivity -> 4-5x as much code written on a side project).

                      It’s not a personal attack, it is a corrective to the trend of claiming 5,10,100x improvements to developer productivity, which rarely if ever holds up to scrutiny.

                      • johnfn 16 hours ago
                        What you are seeing is the difference between what I personally feel and what I could objectively prove to an AI skeptic.

                        If I have to "prove" my productivity in a court of law - that is to say, you - I'll down-modulate it to focus on the bits that are most objective, because I understand you will be skeptical. For instance, I really do think I'm 10x faster with Terraform, because I don't need to read all the documentation, and that would have taken absurd amounts of time. There were also a few nightmarish bugs that I feel could have taken me literally hours or infinity (I would have just given up), like tracking down a breaking change snuck in in a TS minor update when I upgraded from 2.8 to latest, that Codex chomped through. But I imagine me handwaving "it's definitely 10x, just trust me" on those ones, where the alternatives aren't particularly clear, might not be an argument you'd readily accept. On the other hand, the 5x gains when writing my website, using tech I know inside and out, felt objective.

                        • irishcoffee 14 hours ago
                          > For instance, I really do think I'm 10x faster with Terraform, because I don't need to read all the documentation, and that would have taken absurd amounts of time.

                          I think this is where the lede is buried. Yes, it takes time up front. But then you learn(ed) it and can apply those skills quickly in the future.

                          In 10 years when all sorts of new tech is around, will you read the docs? Or just count on an LLM?

                          • johnfn 14 hours ago
                            I mean, in my comment I did say that an AI skeptic probably wouldn't buy that argument. So I'm not too surprised that you're not buying it.

                            That being said, I have taught myself a ridiculous amount of tech with AI. It's not always great at depth, but it sure is amazing at breadth. And I can still turn to docs for depth when I need to.

                  • Esophagus4 19 hours ago
                    > That’s why when folks say that AI has made them 10x more productive, I ask if they did 10 years worth of work in the last year.

                    What makes you think one year is the right timeframe? Yet you seem to be so wildly confident in the strength of what you think your question will reveal… in spite of the fact that the guy gave you an example.

                    It wasn’t that he didn’t provide it, it was that you didn’t want to hear it.

                    • n8cpdx 17 hours ago
                      It’s a general question I ask of everyone who claims they are 10x more productive. Year/month/day/hour doesn’t matter. Did you do 10 days of work yesterday? 10 weeks of work last week?

                      It is actually a very forgiving metric over a year because it is measuring only your own productivity relative to your personal trend. That includes vacation time and sick time, so the year smooths over all the variation.

                      Maybe he did do 5 weeks of work in 1 week, and I’ll accept that (a much more modest claim than the usual 10-100x claimed multiplier).

                      • Esophagus4 16 hours ago
                        Yeah, but he gave you an affirmative answer, that it did make him more productive, and you keep moving the goalposts as I watch.

                        Not only that, I think you're misrepresenting his claim:

                        > I suspect I’m likely 5-10x more productive, though it depends exactly what I’m working on

                        1) He didn't say 10-100x

                        2) He said it depended on the work he was doing

                        Those seem reasonable enough that I can take his experience at face value.

                        This isn't about you pressure testing his claim, this is about you just being unwilling to believe his experience because it doesn't fit the narrative you've already got in your head.

                  • rhetocj23 19 hours ago
                    [dead]
          • IceDane 19 hours ago
            Your site has waterfalls and flashes of unstyled content. It loads slowly and the whole design is basically exactly what every AI-designed site looks like.

            All of the work you described is essentially manual labor. It's not difficult work - just boring, sometimes error prone work that mostly requires you to do obvious things and then tackle errors as they pop up in very obvious ways. Great use case for AI, for sure. This and the fact that the end result is so poor isn't really selling your argument very well, except maybe in the sense that yeah, AI is great for dull work in the same way an excavator is great for digging ditches.

            • ianbutler 15 hours ago
              Let me see your typical manual piece of work, I'm sure I'll be able to tear it apart in a way that really hurts your ego :)
            • johnfn 18 hours ago
              > This and the fact that the end result is so poor isn't really selling your argument very well

              If you ever find yourself at the point where you are insulting a guy's passion project in order to prove a point, perhaps have a deep breath, and take a step back from the computer for a moment. And maybe you should look deep inside yourself, because you might have crossed the threshold to being a jerk.

              Yes, my site has issues. You know what else it has? Users. Your comments about FOUC and waterfalls are correct, but they don't rank particularly high on what's important to the people who use the site. I didn't instruct the AI to fix them, because I was busy fixing a bunch of real problems that my actual users cared about.

              As for loading slowly -- it loads in 400ms on my machine.

              • IceDane 18 hours ago
                Look, buddy. You propped yourself up as an Experienced Dev doing cool stuff at Profitable Startup who doesn't understand Advanced Programming, and your entire argument is that you can keep doing the same sort of high-quality (FSOV) work you've been doing the past 10 years with AI, just a lot faster.

                I'm just calling a spade a spade. If you didn't want people to comment on your side project given your arguments and the topic of discussion, you should just not have posted it in a public forum or have done better work.

                • johnfn 18 hours ago
                  If I were to summarize the intent of my comments in a single sentence, it would be something like "I have been an engineer for a while, and I have been able to do fun stuff with AI quickly." You somehow managed to respond to that by disparaging me as an engineer ("Experienced Dev") and saying the fun stuff I did is low quality ("should have [...] done better work"). It's so far away from the point I was making, and so wildly negative - when, again, my only intent was to say that I was doing fun AI stuff - that I can't imagine where it originated from. The fact that it's about a passion project is really the cherry on top. Do you tell your kids that their artwork is awful as well?

                  I can understand to some degree it would be chafing that I described myself as working at a SF Series C startup etc. The only intent there was to illustrate that I wasn't someone who started coding 2 weeks ago and had my mind blown by typing "GPT build me a calculator" into Claude. No intent at all of calling myself a mega-genius, which I don't really think I am. Just someone who likes doing fun stuff with AI.

                  And, BTW, if you reread my initial comment, you will realize you misread part of it. I said that "Advanced Programming" is the exact opposite of the type of work I am doing.

                  • IceDane 12 hours ago
                    Look, I'm not trying to dunk on your website for fun. The issue is that you're making a specific argument: you're an experienced developer who uses AI to be 5-10x more productive without downsides, and you properly audit all the code it generates. You then offered your project as evidence of this workflow in action.

                    The problem is that your project has basic performance issues - FOUC, render waterfalls - that are central concerns in modern React development. These aren't arbitrary standards I invented to be mean. They're fundamental enough that React's recent development has specifically focused on solving them.
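
                    To make the waterfall point concrete, here's a rough sketch of the pattern (the fetchPost/fetchComments helpers and endpoints are made up for illustration, not taken from your code): a child component that only starts fetching after its parent has resolved, versus hoisting both requests so they run in parallel.

                        import { useEffect, useState } from "react";

                        // Hypothetical endpoints -- placeholders, not the real site's API.
                        type Post = { title: string; body: string };
                        const fetchPost = (id: string): Promise<Post> =>
                          fetch(`/api/posts/${id}`).then(r => r.json());
                        const fetchComments = (id: string): Promise<string[]> =>
                          fetch(`/api/posts/${id}/comments`).then(r => r.json());

                        // Waterfall: Comments only mounts (and starts its fetch) after the
                        // post has arrived, so the two round-trips run back to back.
                        function PostWaterfall({ id }: { id: string }) {
                          const [post, setPost] = useState<Post | null>(null);
                          useEffect(() => { fetchPost(id).then(setPost); }, [id]);
                          if (!post) return <p>Loading...</p>;
                          return (
                            <article>
                              <h1>{post.title}</h1>
                              <Comments id={id} />
                            </article>
                          );
                        }

                        function Comments({ id }: { id: string }) {
                          const [comments, setComments] = useState<string[] | null>(null);
                          useEffect(() => { fetchComments(id).then(setComments); }, [id]);
                          if (!comments) return <p>Loading comments...</p>;
                          return <ul>{comments.map(c => <li key={c}>{c}</li>)}</ul>;
                        }

                        // One fix: start both requests together at the top and pass data down.
                        function PostParallel({ id }: { id: string }) {
                          const [data, setData] =
                            useState<{ post: Post; comments: string[] } | null>(null);
                          useEffect(() => {
                            Promise.all([fetchPost(id), fetchComments(id)])
                              .then(([post, comments]) => setData({ post, comments }));
                          }, [id]);
                          if (!data) return <p>Loading...</p>;
                          return (
                            <article>
                              <h1>{data.post.title}</h1>
                              <ul>{data.comments.map(c => <li key={c}>{c}</li>)}</ul>
                            </article>
                          );
                        }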

                    So when you say I'm inventing quality standards (in your now-deleted comment), or that this is just a passion project so quality doesn't matter, you're missing the point. You can't argue from professional authority that AI makes you more productive without compromise, use your work as proof, and then retreat to "it's just for fun" when someone points out the quality issues. Either it demonstrates your workflow's effectiveness or it doesn't. You can't have it both ways.

                    The kids' artwork comparison doesn't work either. You're not a child showing me a crayon drawing - you're a professional developer using your work as evidence in a technical argument about AI productivity. If you want to be treated as an experienced developer making authoritative claims, your evidence needs to support those claims.

                    I'm genuinely not trying to be cruel here, but if this represents what your AI workflow produces when you're auditing the output, it raises serious questions about whether you can actually catch the problems the AI introduces - which is the entire crux of your argument. Either you just aren't equipped to audit it (because you don't know better), or you are becoming passive in the face of the walls of code that the AI is generating for you.

                    • johnfn 11 hours ago
                      I will accept for the moment that you are not just being willfully cruel.

                      Let's talk a little about FOUC and the waterfall. I am aware of both issues. In fact, they're both on my personal TODO list (along with some other fun stuff, like SSR). I have no doubt I could vibe code them both away, and at some point, I will. I've done plenty harder things. I haven't yet, because I was focusing on stuff that my moderators and users wanted me to do. They wanted features to ban users, a forgot password feature, email notifications, mobile support, dark mode, and a couple of other moderation tools. I added those. No one complained about FOUC or the waterfall, and no one said that the site loaded slowly, so I didn't prioritize those issues.

                      I understand you think your cited issues are important. To be honest, they irk me, too. But no one who actually uses the site mentioned them. So, when forced to prioritize, I added stuff they cared about instead.

                      > You can't argue from professional authority that AI makes you more productive without compromise, use your work as proof, and then retreat to "it's just for fun" when someone points out the quality issues

                      You seem to have missed the point of saying "it's just for fun". My point was this: You are holding a week-long project done with AI to professional standards. Nothing ever done in a week is going to be professional level! That is an absurd standard! You are pointing at the rough edges, which of course exist because it was done on the side, as some insane gotcha that proves the whole thing is a house of cards. "This is 'dull work'! You should 'have done better work' if you wanted to talk with us!" For FOUC?!? C'mon.

          • samdoesnothing 19 hours ago
            Is your redesign live for chipscompo? Because if so, and absolutely no offence meant here, the UI looks like it was built by an intern. And fair enough, you sound like a backend guy so you can't expect perfection for frontend work. My experience with AI is that it's great at producing intern-level artifacts very quickly and that has its uses, but that certainly doesn't replace 95% of software development.

            And if it's producing an intern-level artifact for your frontend, what's to say it's not producing similar quality code for everything else? Especially considering frontend is often derided as being easier than other fields of software.

            • johnfn 18 hours ago
              Yes, it is live. I never claimed to be a god-level designer - but you should have seen what it looked like before. :)
            • munksbeer 5 hours ago
              >if so, and absolutely no offence meant here, the UI looks like it was built by an intern

              The site looks great to me. Your comment is actually offensive, despite you typing "no offence".

          • dingnuts 23 hours ago
            > Yes, I save an incredible amount of time. I suspect I’m likely 5-10x more productive

            The METR paper demonstrated that you are not a reliable narrator for this. Have you participated in a study where this was measured, or are you just going off intuition? Because METR demonstrated beyond doubt that your intuition is a liar in this case.

            If you're not taking measurements, it is more likely that you are falling victim to a number of psychological effects (sunk cost, Gell-Mann amnesia, the slot-machine effect) than it is that your productivity has really improved.

            Have you received a 5-10x pay increase? If your productivity is now 10x mine (I don't use these tools at work because they are a waste of time in my experience), then why aren't you compensated as such? And if it's because of pointy-haired bosses, you should be able to start a new company with your 10x productivity to shut them and me up.

            Provide links to your evidence in the replies

            • Esophagus4 20 hours ago
              Jeez... this seems like another condescending HN comment that uses "source?" to discredit and demean rather than to seek genuine insight.

              The commenter told you they suspect they save time, it seems like taking their experience at face value is reasonable here. Or, at least I have no reason to jump down their throat... the same way I don't jump down your throat when you say, "these tools are a waste of time in my experience." I assume that you're smart enough to have tested them out thoroughly, and I give you the benefit of the doubt.

              If you want to bring up METR to show that they might be falling into the same trap, that's fine, but you can do that in a much less caustic way.

              But by the way, METR also used Cursor Pro and Claude 3.5/3.7 Sonnet. Cursor had smaller context windows than today's toys and 3.7 Sonnet is no longer state of the art, so I'm not convinced the paper's conclusions are still as valid today. The latest Codex models are exponential leaps ahead of what METR tested, by even their own research.[1]

              [1]https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

            • johnfn 20 hours ago
              > Have you received a 5-10x pay increase?

              Does Amazon pay everyone who receives "Not meeting expectations" in their perf review 0 dollars? Did Meta pay John Carmack (or insert your favorite engineer here) 100x that of a normal engineer? Why do you think that would be?

              • jimbokun 19 hours ago
                I wouldn’t be surprised to find out Carmack was paid 100x more than the average engineer once equity from the acquisition of his company is taken into account.

                Does anyone know how much he made altogether from Meta?

                • keeda 17 hours ago
                  The unfortunate reality of engineering is that we don't get paid proportional to the value we create, even the superstars. That's how tech companies make so much money, after all.

                  If you're climbing the exec ladder your pay will scale a little bit better, but again, not 100x or even 10x. Even the current AI researcher craze is for an extremely small number of people.

                  For some data points, check out levels.fyi and compare the ratio of TCs for a mid-level engineer/manager versus the topmost level (Distinguished SWE, VP etc.) for any given company.

                  • jimbokun 15 hours ago
                    The whole premise of YCombinator is that it’s easier to teach good engineers business than to teach good business people engineering skills.

                    And thus help engineers get paid more in line with their “value”. Albeit with much higher variance.

                    • keeda 14 hours ago
                      I would agree with that premise, but at that point they are not engineers, they are founders! I guess in the end, to capture their full value engineers must escape the bonds of regular employment.

                      Which is not to say either one is better or worse! Regular employment does come with much lower risk, as it is amortized over the entire company, whereas startups are risky and stressful. Different strokes for different folks.

                      I do think AI could create a new paradigm though. With dropping employment and increasing full-stack business capabilities, I foresee a rise in solopreneurship, something I'm trying out myself.

              • 3rodents 20 hours ago
                I disagree with the parent’s premise (that productivity has any relationship to salary) but Facebook, Amazon etc do pay these famous genius brilliant engineers orders of magnitude more than the faceless engineers toiling away in the code mines. See: the 100 million dollar salaries for famous AI names. And that’s why I disagree with the premise, because these people are not being paid based on their “productivity”.
            • mekoka 19 hours ago
              As they said, it depends on the task, so I wouldn't generalize, but based on the examples they gave, it tracks. Even when you already know what needs done, some undertakings involve a lot of yak shaving. I think transitioning to new tools that do the same as the old but with a different DSL (or newer versions of existing tools) qualifies.

              Imagine that you've built an app with libraries A, B, and C and conceptually understand all that's involved. But now you're required to move everything to X, Y, and Z. There won't be anything fundamentally new or revolutionary to learn, but you'll have to sit and read those docs, potentially for hours (cost of task switching and all). Getting the AI to execute the changes gets you to skip much of the tedium. And even though you still don't really know much about the new libs, you'll get the gist of most of the produced code. You can piecemeal the docs to review the code at sensitive boundaries. And for the rest, you'll paint inside the frames as you normally would if you were joining a new project.

              Even as a skeptic of the general AI productivity narrative, I can see how that could squeeze a week's worth of "ever postponed" tasks inside a day.

              • skydhash 18 hours ago
                > but you'll have to sit and read those docs, potentially for hours (cost of task switching and all).

                That is one of the assumptions that pro-AI people always bring up. You don't read the new docs to learn the domain. As you've said, you've already learned it. You read it for the gotchas. Because most (good) libraries will provide examples that you can just copy-paste and be done with it. But we all know that things can vary between implementations.

                > Even as a skeptic of the general AI productivity narrative, I can see how that could squeeze a week's worth of "ever postponed" tasks inside a day.

                You could squeeze a week inside a day the normal way too. Just YOLO it, by copy-pasting from GitHub, StackOverflow and the whole internet.

          • overfeed 20 hours ago
            > I am the AI-hater's nightmare...

            I-know-what-kind-of-man-you-are.jpeg

            You come off as a zealot by branding people who disagree as "haters".

            Edit: AI excels at following examples, or simple, testable tasks that require persistence, which is intern-level work. Doing this narrow band of work quickly doesn't result in 10x productivity.

            I've yet to find a single person who has shown evidence of getting through 10x more tasks in a sprint[1], or of matching the output of the rest of their 6-10-member team by themselves.

            1. Even for junior level work

            • johnfn 20 hours ago
              Did you see the comment that I was responding to? It said "your intuition is a liar" and said they would only believe me if I was compensated 10x a normal engineer. If that's not the comment of a hater, I'm not sure what qualifies.

              > I'm yet to find a single person who has shown evidence to go through 10x more tasks in a sprint[1], or match the output of the rest of their 6-10-member team by themselves.

              If my website, a real website with real users, doesn't qualify, then I'm not sure what would. A single person with evidence is right in front of you, but you seem to be denying the evidence of your own eyes.

        • lowbloodsugar 23 hours ago
          a) is exactly what AI is good at. b) is a waste of time: why would you waste your precious time trying to predict a result when you can just get the result and see?

          You are stuck in a very low local maximum.

          You are me six months ago. You don’t know how it works, so you cannot yet reason about it. Unlike me, you’ve decided “all these other people who say it’s effective are making it up”. Instead, ask: how does it work? What am I missing?

      • 3rodents 1 day ago
        I regularly try to use various AI tools and I can imagine it is very easy for it to produce 95% of your code. I can also imagine you have 90% more code than you would have had, had you written it yourself. That’s not necessarily a bad thing, code is a means to an end, and if your business is happy with the outcomes, great, but I’m not sure percentages of code are particularly meaningful.

        Every time I try to use AI it produces endless code that I would never have written. I’ve tried updating my instructions to use established dependencies when possible but it seems completely averse.

        An argument could be made that a million lines isn’t a problem now that these machines can consume and keep all the context in memory — maybe machines producing concise code is asking for faster horses.

      • foobarian 20 hours ago
        I'm on track to finish my current gig having written negative lines of code. It's amazing how much legacy garbage long running codebases can accumulate, and it's equally amazing how much it can slow down development (and, conversely, how much faster development can become if legacy functionality is deleted).
        • skydhash 18 hours ago
          Pretty much the same. And it's not even about improving the code (which I did), but mostly about removing dead code and duplicated code. Or worse, half redesigns of some subsystem which led to very bizarre code.

          When people say coding is slow, that usually means they're working on some atrocious code (often of their own making), while using none of the tools for fast feedback (Tests, Linters,...).

      • ipdashc 14 hours ago
        > I'm particularly curious why when I say I use Rust and code faster everyone is fine with that, but saying that I use AI and code faster is an extremely contentious statement.

        This hits the nail on the head, IMO. I haven't seen any of the replies address this yet, unless I missed one.

        I don't even like AI per se, but many of the replies to this comment (and to this sentiment in general) are ridiculous. Ignoring the ones that are just insulting your work even though you admitted off the bat you're not an "advanced" programmer... There are obviously flaws with AI coding (maintainability, subtle bugs, skill atrophy, electricity usage, etc). But why do we all spring immediately to this gaslighting-esque "no, your personal experience is actually wrong, you imagined it all?" Come on guys, we should be better than that.

    • rprend 1 day ago
      AI writes most of the code for most new YC companies, as of this year.
      • nickorlow 1 day ago
        I think this is less significant b/c

        1. Most of these companies are AI companies & would want to say that to promote whatever tool they're building

        2. Selection bias, b/c YC is looking to fund companies embracing AI

        3. Building a greenfield project with AI to the quality of what you need to be a YC-backed company isn't particularly "world-class"

        • rprend 23 hours ago
          They’re not lying when they say they have AI write their code, so it’s not just promotion. They will thrive or die from this thesis. If present YC portfolio companies underperform the market in 5-10 years, that’s a strong signal for AI skeptics. If they overperform, that’s a strong signal that AI skeptics were wrong.

          3. You are absolutely right. New startups have greenfield projects that are in-distribution for AI. This gives them faster iteration speed. This means new companies have a structural advantage over older companies, and I expect them to grow faster than tech startups that don’t do this.

          Plenty of legacy codebases will stick around, for the same reasons they always do: once you’ve solved a problem, the worst thing you can do is rewrite your solution to a new architecture with a better devex. My prediction: if you want to keep the code writing and office culture of the 2010s, get a job internally at cloud computing companies (AWS, GCP, etc). High reliability systems have less to gain from iteration speed. That’s why airlines and banks maintain their mainframes.

          • dmurvihill 11 hours ago
            How do you know they’re not lying?
      • tapoxi 1 day ago
        So they don't own the copyright to most of their code? What's the value then?
        • esafak 23 hours ago
          They do. Where did you get this? All the providers have clauses like this:

          "4.1. Generally. Customer and Customer’s End Users may provide Input and receive Output. As between Customer and OpenAI, to the extent permitted by applicable law, Customer: (a) retains all ownership rights in Input; and (b) owns all Output. OpenAI hereby assigns to Customer all OpenAI’s right, title, and interest, if any, in and to Output."

          https://openai.com/policies/services-agreement/

          • shakna 23 hours ago
            The outputs of AI are most likely in the public domain: automated process output is public domain, and the companies claim fair use when scraping, making the input unencumbered, too.

            It wouldn't be OpenAI holding copyright - it would be no one holding copyright.

            • bcrosby95 22 hours ago
              Courts have already leaned this way too, but who knows what'll happen when companies with large legal funds enter the arena.
            • macrolime 22 hours ago
              So you're saying machine code is public domain if it's compiled from C? If not, why would AI generated code be any different?
              • fhd2 21 hours ago
                That would be considered a derivative work of the C code, therefore copyright protected, I believe.

                Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM generated code? In that case, the situation might be similar. If you're prodding an LLM to give you a variety of results, probably not.

                But significantly editing LLM generated code _should_ make it your copyright again, I believe. Hard to say when this hasn't really been tested in the courts yet, to my knowledge.

                The most interesting question, to me, is who cares? If we reach a point where highly valuable software is largely vibe coded, what do I get out of a lack of copyright protection? I could likely write down the behaviour of the system and generate a fairly similar one. And how would I even be able to tell, without insider knowledge, what percentage of a code base is generated?

                There are some interesting abuses of copyright law that would become more vulnerable. I was once involved in a case where the court decided that hiding a website's "disable your ad blocker or leave" popup was actually a case of "circumventing effective copyright protection". In this day and age, they might have had to produce proof that it was, indeed, copyright protected.

                • macrolime 20 hours ago
                  "Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM generated code? In that case, the situation might be similar. If that's not the case, probably not." Yes and no. It's possible in theory, but in practice it requires control over the seed, which you typically don't have in the AI coding tools. At least if you're using local models, you can control the seed and have it be deterministic.

                  That said, you don't necessarily always have 100% deterministic build when compiling code either.
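
                  For illustration, a rough sketch of what pinning things down can look like against a local OpenAI-compatible server (the endpoint, port, model name, and whether the backend honours the seed end-to-end are all assumptions here):

                      // Sketch only: point at a local OpenAI-compatible endpoint (e.g. llama.cpp's
                      // llama-server on localhost:8080 -- an assumption) and pin the sampling knobs.
                      // Determinism still depends on the backend, model, and hardware, so treat
                      // this as best-effort rather than a guarantee.
                      async function generateDeterministic(prompt: string): Promise<string> {
                        const res = await fetch("http://localhost:8080/v1/chat/completions", {
                          method: "POST",
                          headers: { "Content-Type": "application/json" },
                          body: JSON.stringify({
                            model: "local-model",                       // placeholder name
                            messages: [{ role: "user", content: prompt }],
                            temperature: 0,                             // greedy decoding
                            seed: 42,                                   // fix any remaining randomness
                          }),
                        });
                        const data = await res.json();
                        return data.choices[0].message.content;
                      }

                      // Run twice with the same prompt; a deterministic backend returns identical text.
                      generateDeterministic("Write FizzBuzz in Rust.").then(console.log);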

                  • fhd2 11 hours ago
                    That would be interesting. I don't believe getting 100% the same bytes every time a derivative work is created in the same way is legally relevant. Take filters applied to copyright protected photos - might not be the exact same bytes every time you run it, but it looks the same, it's clearly a derivative work.

                    So in my understanding (not as a lawyer, but someone who's had to deal with legal issues around software a lot), if you _save_ all the inputs that will lead to the LLM creating pretty much the same system with the same behaviour, you could probably argue that it's a derivative work of your input (which is creative work done by a human), and therefore copyright protected.

                    If you don't keep your input, it's harder to argue because you can't prove your authorship.

                    It probably comes down to the details. If your prompt is "make me some kind of blog", that's probably too trivial and unspecific to benefit from copyright protection. If you specify requirements to the degree where they resemble code in natural language (minus boilerplate), different story, I think.

                    (I meant to include more concrete logic in my post above, but it appears I'm not too good with the edit function, I garbled it :P)

              • shakna 18 hours ago
                Derivatives inherit.

                Public domain in, public domain out.

                Copyright'd in, copyright out. Your compiled code is subject to your copyright.

                You need "significant" changes to PD to make it yours again. Because LLMs are predicated on massive public data use, they require the output to PD. Otherwise you'd be violating the copyright of the learning data - hundreds of thousands of individuals.

              • tapoxi 21 hours ago
                Monkey Selfie case: setting the stage for an automated process is not enough to declare copyright over a work.
              • immibis 4 hours ago
                No, and your comment is ridiculously bad faith. Courts ruled that outputs of LLMs are not copyrightable. They did not rule that outputs of compilers are not copyrightable.
          • robocat 18 hours ago
            What about patents - if you didn't use cleanroom then you have no defence?

            Patent trolls will extort you: the trolls will be using AI models to find "infringing" software, and then they'll strike.

            ¡There's no way AI can be cleanroom!

      • brazukadev 1 day ago
        That explains the low quality of all the Launch HNs this year
        • block_dagger 1 day ago
          Stats/figures to back up the low quality claim?
          • esseph 23 hours ago
            If you have them, post them.
        • 59nadir 11 hours ago
          YC companies have pretty much always been overhyped trivial bullshit. I'm not surprised it's even worse nowadays, but it's never been more than a dog and pony show for bullshit.
    • block_dagger 1 day ago
      I'm on a team like this currently. It's great when everyone knows how to use the tools and spot/kill slop and bad context. Generally speaking, good code gets merged and MUCH more quickly than in the past.
    • dist-epoch 23 hours ago
      source: me

      I wrote 4000 lines of Rust code with Codex - a high throughput websocket data collector.

      Spoiler: I do not know Rust at all. I discussed possible architectures with GPT/Gemini/Grok (sync/async, data flow, storage options, ...), refined a design and then it was all implemented with agents.

      Works perfectly, no bugs.

      • mjr00 20 hours ago
        Since when is a 4000-line project "advanced software"? That's about the scope of a sophomore-year university CompSci project, something where there's already a broad consensus AI does quite well.
        • kanbankaren 15 hours ago
          4K LOC was never advanced software. Even in the 90s, a typical enterprise software product was several hundred KLOC. A decade later, it had grown to a few million LOC, and system software is of similar size.
        • keeda 17 hours ago
          I think you're parsing the original claim incorrectly. "Advanced software teams" does not mean teams who write advanced software, these are software teams that are advanced :-)
      • sefrost 23 hours ago
        I would be interested in a web series (podcast or video) where people who do not know a language create something with AI. Then somebody with experience building in that technology reviews the code and gives feedback on it.

        I am personally progressing to a point where I wonder if it even matters what the code looks like if it passes functional and unit tests. Do patterns matter if humans are not going to write and edit the code? Maybe sometimes. Maybe not other times.

      • dmurvihill 11 hours ago
        Very cool. Let’s see it!
    • dboreham 20 hours ago
      > What a wild and speculative claim. Is there any source for this information?

      Not sure it's a wild speculative claim. Claiming someone had achieved FTL travel would fall into that category. I'd call it more along the lines of exaggerated.

      I'll make the assumption that what I do is "advanced" (not React todo apps: Rust, Golang, distributed systems, network protocols...) and if so then I think: it's pretty much accurate.

      That said, this is only over the past few months. For the first few years of LLM-dom I spent my time learning how they worked and thinking about the implications for our understanding of how human thinking works. I didn't use them except to experiment. I thought my colleagues who were talking in 2022 about how they had ChatGPT write their tests were out of their tiny minds. I heard stories about how the LLM hallucinated API calls that didn't exist. Then I spent a couple of years in a place with no easy code and nobody in my sphere using LLMs. But then around six months ago I began working with people who were using LLMs (mostly Claude) to write quite advanced code, so I did a "wait what??..." about-face and began trying to use it myself. What I've found so far is that it's quite a bit better than I am at various unexpected kinds of tasks (finding bugs, analyzing large bodies of code then writing documentation on how it works, looking for security vulnerabilities in code) or at least it's much faster. I also found that there's a whole art to "LLM Whispering" -- how to talk to it to get it to do what you want. Much like with humans, but it doesn't try to cut corners nor use oddball tech that it wants on its resume.

      Anyway, YMMV, but I'd say the statement is not entirely false, and surely will be entirely true within a few years.

    • 9rx 20 hours ago
      It's not exactly wrong. Not since the advent of AI systems (a.k.a. compilers) have developers had to worry about code. Instead they type in what they want and the compiler generates the code for them.

      Well, except developers have never had to worry about code as even in the pre-compiler days coders, a different job done by a different person, were responsible for producing the code. Development has always been about writing down what you want and letting someone or something else generate the code for you.

      But the transition from human coders to AI coders happened like, what, 60-70 years ago? Not sure why this is considered newsworthy now.

      • IceDane 19 hours ago
        I'm wondering: do you genuinely not understand how compilers work at all or is there some deeper point to your AI/compiler comparison that I'm just not getting?
        • 9rx 19 hours ago
          My understanding is that compilers work just like originally described. I type out what I want. I feed that into a compiler. It takes that input of what I want and generates code.

          Is that not your understanding of how compilers work? If a compiler does not work like that, what do you think a compiler does instead?

          • IceDane 19 hours ago
            A compiler does so deterministically and there is no AI involved.
            • 9rx 18 hours ago
              A compiler can be deterministic in some cases, but not necessarily so. A compiler for natural language cannot be deterministic, for example. It seems you're confusing what a compiler is with implementation details.

              Let's get this topic back on track. What is it that you think a compiler does if not take in what you typed out for what you want and use that to generate code?

              • bonaldi 18 hours ago
                This doesn't feel like good faith. There are leagues of difference between "what you typed out" when that's in a highly structured compiler-specific codified syntax *expressly designed* as the input to a compiler that produces computer programs, and "what you typed out" when that's an English-language prompt, sometimes vague and extremely high-level.

                That difference - and the assumed delta in difficulty, training and therefore cost involved - is why the latter case is newsworthy.

                • 9rx 18 hours ago
                  > This doesn't feel like good-faith.

                  When has a semantic "argument" ever felt like good faith? All it can ever be is someone choosing what a term means to them and trying to beat down others until they adopt the same meaning. Which will never happen because nobody really cares.

                  They are hilarious, but pointless. You know that going into it.

              • IceDane 18 hours ago
                I've written more than one compiler, so I definitely understand how compilers work.

                It seems you're trying to call anything that transforms one thing into another a compiler. We all know what a compiler is and what it does (except maybe you? It's not clear to me) so I genuinely don't understand why you're trying to overload this terminology further so that you can call LLMs compilers. They are obviously and fundamentally different things even if an LLM can do its best to pretend to be one. Is a natural language translation program a compiler?

                • 9rx 18 hours ago
                  > Is a natural language translation program a compiler?

                  We have always agreed that a natural language compiler is theoretically possible. Is a natural language translation program the same as a natural language compiler, or do you see some kind of difference? If so, what is the difference?

                  • kkapelon 52 minutes ago
                    > We have always agreed that a natural language compiler is theoretically possible

                    citation? source? Who is we?

                  • gitremote 14 hours ago
                    > We have always agreed that a natural language compiler is theoretically possible.

                    No. Nobody here except you agrees with this. The distinction between natural languages and formal languages exists for a reason.

      • wakawaka28 18 hours ago
        Compilers are not AI, and code in high-level languages is still code in the proper sense. It is highly dishonest to call someone who is not a competent software engineer a "developer" even if their job consists entirely of telling actual software engineers or "coders" what to do.
        • 9rx 17 hours ago
          > Compilers are not AI

          They are if you define them as such. But there is already a silly semantic thread going on if that's what you are looking for.

          > and code in high-level languages is still code in the proper sense.

          Sure. As is natural language (e.g. criminal code).

          > It is highly dishonest to call someone who is not a competent software engineer a "developer" even if their job consists entirely of telling actual software engineers or "coders" what to do.

          Okay. But coders, as spoken of earlier, were not software engineers. They were human compilers. They took the higher level instructions written by the software engineers and translated that into machine code. Hence the name. Developer in the above referred to what you call software engineer. It seems your misinterpretation is down to thinking that software engineer and coder were intended to be the same person. That was not the intent. Once the job of coding went away it has become common to use those terms synonymously, but the above was clearly written about the past.

          Again, if you're looking for a silly semantic discussion, there is already another thread for that.

          • wakawaka28 15 hours ago
            >They are if you define them as such.

            If a compiler counts as AI then so does literally every other program out there (at least the ones with well-defined inputs and outputs).

            >Sure. As is natural language (e.g. criminal code).

            Natural language is too ambiguous and self-referential to count as a programming language, per se. While a subset of natural language can obviously be used to describe programs, we distinguish programming languages from natural languages in that they are formally defined and bound to be interpreted in one way by a machine with a relatively small amount of context (notwithstanding minor differences between implementations). Natural language has the unfortunate property of semantic drift (or whatever it's called). The sounds, spellings, meanings of words, etc. are extremely context-sensitive and unsuitable for reliably encoding computer programs or anything else over long periods of time. It is very common for a single word in a natural language to have several meanings, even contradictory meanings.

            >They took the higher level instructions written by the software engineers and translated that into machine code. Hence the name. Developer in the above referred to what you call software engineer.

            I am well aware of what you're trying to say, and the historical context, but I think you're applying modern terminology to old practices to draw a bad conclusion.

            >It seems your misinterpretation is down to thinking that software engineer and coder were intended to be the same person. That was not the intent.

            I didn't misinterpret anything. These jobs were not "intended" into existence. It just so happens that writing any kind of code is challenging enough to require its own dedicated professionals. That has always been true.

            >Once the job of coding went away it has become common to use those terms synonymously, but the above was clearly written about the past.

            The job of "coding" never went away. The type of code being written changed. The product is still CODE as in a procedure or specification encoded in a purpose-built, machine-oriented, unambiguous, socially neutral, and essentially eternal language.

            >Again, if you're looking for a silly semantic discussion, there is already another thread for that.

            It's not a silly semantic discussion, it's a serious one. You think that one can be a "software developer" merely by using natural language, and that there is historical precedent for that. But this is very wrong, especially in the historical context. By your own argument, any dumbass manager could be a "software developer" if only he found an entity to write the software for him based on natural language instructions. It matters not whether the entity generating the actual code is a human being or a machine. Since there are actual people trying to hire software developers and engineers with real skills, it is a waste of everyone's time for vibecoders to call themselves "software engineers" or "software developers" because they're not. They are JUST vibecoders. That skill set may be sufficient for... something. But stop trying to make it into something it isn't with these misleading arguments and analogies.

            It is slightly hilarious that this entire "silly semantic discussion" is a product of the properties of natural language. One of the massive benefits of computer languages is that you DON'T get into stupid discussions about the meanings of things very often. When you DO, it is usually because some goofball wrote a bad spec. The ambiguities and other nonsense are hammered out in the spec, and from there on the language has a concrete meaning that is not up for debate.

            • 9rx 14 hours ago
              > If a compiler counts as AI then so does literally every other program out there (at least the ones with well-defined inputs and outputs).

              You seem to be missing some context. We were talking about a system that takes a typed description of what you want as input and outputs code. There is plenty of software, even with well-defined inputs and outputs, which do not do that.

              But there is a particular type of software that does exactly that. We call it a compiler in my circles. Maybe you do not in your circles, but it doesn't really matter as it was I who wrote "compiler". It was written to express my intent. Your (mis)interpretation does nothing to change my intent and is, frankly, irrelevant.

              • wakawaka28 12 hours ago
                >We were talking about a system that takes a typed description of what you want as input and outputs code. There is plenty of software, even with well-defined inputs and outputs, which do not do that.

                You are trying to assert an equivalence between compilers and AI systems that simply does not exist. Sure, you could abuse the English language to try to elevate "vibecoding" to the level of "software engineering", and denigrate the AI to the level of a basic compiler. But the rest of us know better and won't accept that. Your line of reasoning about historical job titles and roles also fails.

                >But there is a particular type of software that does exactly that. We call it a compiler in my circles.

                Compilers don't take "descriptions" as input. They take code as input. The output is perhaps a different kind of code, but it is still code. There has never really been a software engineer or developer who wrote only imprecise English. You don't legitimately get those titles without being competent at using some kind of programming language (as opposed to natural language).

                >It was written to express my intent. Your (mis)interpretation does nothing to change my intent and is, frankly, irrelevant.

                This is exactly why natural language is unsuitable for writing software. People like you constantly try to abuse the meaning of words to manipulate people. No amount of rhetoric is going to make a vibecoder actually be a software developer or software engineer. Even if you get people to debase the English language, they'll be forced to come up with new words to describe what they actually mean when they speak of morons using AI vs people who actually know what they are doing. I hate how much time is wasted in arguments over what is a reasonable use of words and why it is not good to constantly change the meanings of words.

                I'm done with this conversation. I think you're just trolling us at this point. I've made my point and I'm done beating a dead horse.

                • 9rx 7 hours ago
                  > You are trying to assert an equivalence between compilers and AI systems that simply does not exist.

                  The equivalence is between typing out what you want and having a machine produce code from that and compilers. Call that "AI systems" instead of "compilers" if you want, but "AI systems" lacks precision, so I think we can eventually come to agree that compiler is more precise. Even if we don't, it is what I chose to call it. Therefore, that's what it means in the context of my comments. That is how English works. I am surprised this is news to you.

                  > I'm done with this conversation.

                  I know you like silly semantic debates, so is talking past everyone really a conversation? The dictionary definition indicates that there needs to be an exchange, not just taking turns writing out gobbledygook.

                  • wakawaka28 4 hours ago
                    You can't just leave it, huh?

                    >The equivalence is between typing out what you want and having a machine produce code from that and compilers. Call that "AI systems" instead of "compilers" if you want, but "AI systems" lacks precision, so I think we can eventually come to agree that compiler is more precise.

                    You are trying to assert this equivalence to ultimately assert a similar equivalence between vibecoding and software engineering. I'm not going to accept that. The analogy is about as bizarre as calling a compiler a search program. You could indeed call it that: You tell it what you are looking for, and it does something to find the matching output out of infinitely many possible outputs. But this is just as strained of an analogy. The mechanics of how each of these things works is sufficiently complex and distinct as to deserve dedicated terminology. Nothing is gained by drawing these connections, that is unless you are going to commit fraud.

                    >Even if we don't, it is what I chose to call it. Therefore, that's what it means in the context of my comments. That is how English works. I am surprised this is news to you.

                    It is not. I said it works that way in multiple comments to you. This type of shit is, as I said, exactly why natural language is a bad category of input for writing software.

                    >I know you like silly semantic debates, so is talking past everyone really a conversation? The dictionary definition indicates that there needs to be an exchange, not just taking turns writing out gobbledygook.

                    First you want to manipulate the definition of "software developer" to elevate vibecoding (the socially and industrially acceptable definition) to the same level. When I disagree with you in a series of comments, you want to redefine "conversation" to mean something else and also call my thoroughly explained rationale "gobbledygook". What you're writing isn't exactly gobbledygook, though I could easily call it that and move on. What it is is simply an incorrect argument in favor of destroying the meanings of certain well-established words. You are simply wrong from multiple angles: historical, logical, and social. We are all dumber for having heard it. You LOSE!

                    • 9rx 4 hours ago
                      > You are trying to assert this equivalence to ultimately assert a similar equivalence between vibecoding and software engineering.

                      I don't know what vibecoding is, but from past context and your arbitrary thoughts about using natural language for writing software that came out of the blue, I am going to guess that you are referring to the aforementioned talk about criminal code. That is the only time we said anything about natural language previously. That should have been obviously seen as a tangent, but since it appears you didn't pick up on that, what do you think criminal code and software have to do with each other?

                      • wakawaka28 3 hours ago
                        There is no way you don't know what vibecoding is. I don't believe you.

                        As we both know, the AI we are talking about uses natural language as input. To address the ridiculous connections you are trying to make, I am forced to distinguish natural languages from programming languages. You might like to overlook the vast differences between programming languages and natural languages to try to support your point. But those differences are major supporting details in my arguments. You can call this additional information "getting off on a tangent" to try to throw shade on me, but you're wrong.

                        >what do you think criminal code and software have to do with each other?

                        I'm not the one that brought this up, you did. I think that although criminal law is written in a largely procedural way, there are many differences between criminal law and writing software. I would not call a law maker a "software engineer" even though both are concerned with writing procedures of some kind. The critical distinctions are that law is written in natural language and is malleable according to social factors, regardless of what it literally says. Even if we build actual machines to enforce the law and programmed them in plain English or even a programming language built for it, interpretation of the law would still necessarily be subject to social factors.

                        Those same differences between, say, law written in natural language and computer programs written in code, apply to practically all natural language input given to an AI or a software engineer versus actual code that a compiler or interpreter can process. Therefore, uninformed people who use AI to generate code are not "software developers" and the AI is not a "compiler". No natural language is a programming language.

                        And now we have come full circle. No historical or logical rationale can justify redefining "software developer" or "software engineer" to include someone who has no knowledge of computer programming in the pre-AI sense.

                        • 9rx 3 hours ago
                          > There is no way you don't know what vibecoding is.

                          I'm old. I don't keep up with the kids. Maybe the kids have changed what a compiler is too. Is that the point of contention here? If so, that's pretty silly. When I write "compiler" it means what I mean it to mean, not what some arbitrary kid I've never met thinks it means. How can someone use a word in a way that they don't even know exists?

                          > As we both know, the AI we are talking about uses natural language as input.

                          That is not what I am talking about. Did you press the wrong reply button? That would explain your deep confusion.

  • f154hfds 1 day ago
    The post script was pretty sobering. It's kind of the first time in my life that I've been actively hoping for a technology to outright not deliver on its promise. This is a pretty depressing place to be, because most emerging technologies provide us with exciting new possibilities whereas this technology seems only exciting for management stressed about payroll.

    It's true that the technology currently works as an excellent information gathering tool (which I am happy to be excited about) but that doesn't seem to be the promise at this point; the promise is about replacing human creativity with artificial creativity, which is certainly new and unwelcome.

    • stack_framer 23 hours ago
      > It's kind of the first time in my life that I've been actively hoping for a technology to outright not deliver on its promise.

      Same here, and I think it's because I feel like a craftsman. I thoroughly enjoy the process of thinking deeply about what I will build, breaking down the work into related chunks, and of course writing the code itself. It's like magic when it all comes together. Sometimes I can't even believe I get to do it!

      I've spent over a decade learning an elegant language that allows me to instruct a computer—and the computer does exactly what I tell it. It's a miracle! I don't want to abandon this language. I don't want to describe things to the computer in English, then stare at a spinner for three minutes while the computer tries to churn out code.

      I never knew there was an entire subclass of people in my field who don't want to write code.

      I want to write code.

      • zparky 23 hours ago
        It's been blowing my mind reading HN the past year or so and seeing so many comments from programmers that are excited to not have to write code. It's depressing.
        • IanCal 20 hours ago
          There are three takes that I think are not depressing:

          * Being excited to be able to write the pieces of code they want, and not others. When you sit down to write code, you do not do everything from scratch, you lean on libraries, compilers, etc. Take the most annoying boilerplate bit of code you have to write now - would you be happy if a new language/framework popped up that eliminated it?

          * Being excited to be able to solve more problems because the code is at times a means to an end. I don't find writing CSS particularly fun but I threw together a tool for making checklists for my kids in very little time using LLMs and it handled all of the CSS for printing vs on the screen. I'm interested in solving an optimisation issue with testing right now, but not that interested in writing code to analyse test case perf changes, so the latter I got written for me in very little time and it's great. It wasn't really a choice of me or machine, I do not really have the time to focus on those tasks.

          * Being excited that others can get the outcomes I've been able to get for at least some problems, without having to learn how to code.

          As is tradition, to torture a car analogy, I could be excited for a car that autonomously drives me to the shops despite loving racing rally cars.

          • wakawaka28 18 hours ago
            Those are all good outcomes, up to a point. But if this stuff works TOO well, most or maybe all of us will have to start looking at other career options. Whatever autonomy you think you have in deciding what the AI does, that can ultimately be trained as well, and it will be, the more people use it.

            I personally don't like it when others who don't know how to code are able to get results using AI. I spent many years of my life and a small fortune learning scarce skills that everyone swore would be the last to ever be automated. Now, in a cruel twist of fate, those skills are being automated and there is seemingly no worthwhile job that can't be automated given enough investment. I am hopeful because the AI still has a long way to go, but even with the improvements it currently has, it might ultimately destroy the tech industry. I'm hoping that Say's Law proves true in this case, but even before the AI I was skeptical that we would find work for all the people trying to get into the software industry.

            • badsectoracula 9 hours ago
              > I personally don't like it when others who don't know how to code are able to get results using AI.

              Sounds like for many programmers AI is the new Visual Basic 6 :-P

              • wakawaka28 4 hours ago
                It's worse than that lol. At least with VB 6 and similar scripting languages, there is still code getting written. Now we have complete morons who think they're software developers because they got some AI to shit out an app for them. This is going to affect how people view the profession of software engineering all around.
          • ares623 16 hours ago
            Except in this case you won't be able to afford going to the shops anymore. Or even know if the shops will still be around. What use is an autonomous car if you can't use it?
        • zahlman 19 hours ago
          I suspect, rather strongly, that what really specifically wears programmers down is boilerplate.

          AI is addressing that problem extremely well, but by putting up with it rather than actually solving it.

          I don't want the boilerplate to be necessary in the first place.

          • projektfu 17 hours ago
            Or, for me, yak shaving. I start a project with enthusiasm and then 8 hours later I'm debugging an nginx config file or something rather than working on the core project. AI gets a lot of that out of the way if you let it, and you can at least let it grind on that stuff while you think about other things.
            • zahlman 17 hours ago
              For me, the yak shaving is the part where I get the next project idea...
        • seanmcdirmid 20 hours ago
          It is fun. It takes some skill to organize a pipeline to generate code that would be tedious to write and maintain otherwise. You are still writing stuff to instruct the computer, but now you have something taking natural language instructions and generating code and code test assets.

          There might have been people who were happy to write assembly who got bummed about compilers. This AI stuff just feels like a new way to write code.

        • youoy 11 hours ago
          I think that the main misunderstanding is that we used to think programming=coding, but this is not the case. LLMs allow people to use natural language as a programming language, but you still need to program. As with every programming language, it requires you to learn how to use it.

          Not everyone needs to be excited about LLMs, in the same way that C++ developers dont need to be excited about python.

        • xyzwave 3 hours ago
          I hate writing code, but love debugging. LLMs have been a godsend for banging out boilerplate and getting things 95% of the way there. Now I spend most of my time on the hard stuff (debugging, refactoring), while building things that would have taken weeks in days. It’s honestly made the act of building software more enjoyable and rewarding.
        • xnx 19 hours ago
          Some carpenters like to make cabinets. Some just like to hammer nails.
        • DevDesmond 19 hours ago
          Perhaps consider that I still think coding by prompting is just another layer of abstraction on top of coding.

          In my mind, writing the prompt that generates the code is somewhat analogous to writing the code that generates the assembly. (Albeit more stochastically, the way psychology research might be analogous to biochemistry research.)

          Different experts are still required at different layers of abstraction, though. I don't find it depressing when people show preference for working at different levels of complexity / tooling, nor excitement about the emergence of new tools that can enable your creativity to build, automate, and research. I think scorn in any direction is vapid.

          • layer8 18 hours ago
            One important reason people like to write code is that it has well-defined semantics, allowing one to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that.
            • youoy 11 hours ago
              It completely depends on the way you prompt the model. Nothing prevents you from telling it exactly what you want, to the level of specifying the files and lines to focus on. In my experience anything other than that is a recipe for failure in sufficiently complex projects.
              • layer8 5 hours ago
                Several comments can be made here: (1) You only control what the LLM generates to the extent that you specify precisely what it should generate. You cannot reason about what it will generate for what you don't specify. (2) Even for what you specify precisely, you don't actually have full control, because the LLM is not reliable in a way you can reason about. (3) The more you (have to) specify precisely what it should generate, the less benefit using the LLM has. After all, regular coding is just specifying everything precisely.

                The upshot is, you have to review everything the LLM generates, because you can't predict the qualities or failures of its output. (You cannot reason in advance about what qualities and failures it definitely will or will not exhibit.) This is different from, say, using a compiler, whose output you generally don't have to review, and whose input-to-output relation you can reason about with precision.

                Note: I'm not saying that using an LLM for coding is not workable. I'm saying that it lacks what people generally like about regular coding, namely the ability to reason with absolute precision about the relation between the input and the behavior of the output.

            • yunwal 15 hours ago
              You’re still allowed to reason about the generated output. If it’s not what you want you can even reject it and write it yourself!
              • palmotea 12 hours ago
                >> One important reason people like to write code is that it has well-defined semantics, allowing to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that.

                > You’re still allowed to reason about the generated output. If it’s not what you want you can even reject it and write it yourself!

                You missed the key point. You can't predict an LLM's "outcome with high precision."

                Looking at the output and evaluating it after the fact (like you describe) is an entirely different thing.

                • yunwal 7 hours ago
                  For many things you can though. If I ask an LLM to create an alert in terraform that triggers when 10% of requests fail over a 5 minute period and sends an email to some address, with the html on the email looking a certain way, it will do exactly the same as if I looked at the documentation, and figured out all of the fields 1 by 1. It’s just how it works when there’s one obvious way to do things. I know software devs love to romanticize about our jobs but I don’t know a single dev who writes 90% meaningful code. There’s always boilerplate. There’s always fussing with syntax you’re not quite familiar with. And I’m happy to have an AI do it
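
                  To make that concrete, here is a minimal sketch of the same kind of "one obvious way" boilerplate, written in Python with boto3 (CloudWatch + SNS) rather than Terraform; the metric name, namespace, topic name, and email address are placeholders, it assumes an error-rate metric is already being published as a percentage, and the HTML-email formatting is omitted:

                    import boto3

                    cloudwatch = boto3.client("cloudwatch")
                    sns = boto3.client("sns")

                    # Notification channel: an SNS topic with an email subscription.
                    topic_arn = sns.create_topic(Name="request-failure-alerts")["TopicArn"]
                    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="oncall@example.com")

                    # Alarm: fire when the pre-aggregated error-rate metric exceeds 10%
                    # over a single 5-minute period.
                    cloudwatch.put_metric_alarm(
                        AlarmName="high-request-failure-rate",
                        MetricName="ErrorRate",      # assumed to be published as a percentage
                        Namespace="MyService",
                        Statistic="Average",
                        Period=300,                  # 5 minutes
                        EvaluationPeriods=1,
                        Threshold=10.0,
                        ComparisonOperator="GreaterThanThreshold",
                        AlarmActions=[topic_arn],
                        AlarmDescription="More than 10% of requests failed over 5 minutes",
                    )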
      • citrin_ru 4 hours ago
        > I never knew there was an entire subclass of people in my field who don't want to write code.

        Some people don't enjoy writing code and went into software development only because it's a well-paid and stable job. Now this trade is under threat and they are happy to switch to prompting LLMs. I do like to code, so I use LLMs less than many of my colleagues.

        Though I don't expect to see many from this crowd on HN; instead I expect to see entrepreneurs here who need a product to sell and don't care whether it is written by humans or by LLMs.

      • rester324 17 hours ago
        I love to write code too. But what usually happens is that I run the gauntlet of proving what brilliant code I can write in a job interview, and then later get paid to listen to really dumb conversations with our stakeholders and sit in project planning and other meetings, just so that finally everybody can harass me to implement something that a million programmers have implemented a million times before me, at which point the only metric that matters to my fellow developers, my managers, or the stakeholders is the speed of churning the code out, quality or design be damned. So for this reason, in most cases at work I use LLMs.

        How any of that gets reported up to an investment portfolio manager as LLMs writing "world-class code" is a mystery to me.

      • doug_durham 22 hours ago
        Writing code is my passion, and like you I'm amazed I get paid to do it. That said in any new project there is a large swath of code that needs to be written that I've written many times before. I'm happy to let the LLM write the low value code so I can work on the interesting parts. Examples of this type of code are argument parsers and interfacing with REST interfaces. I add no value there.
      • averageRoyalty 23 hours ago
        So write code.

        Maybe post renaissance many artists no longer had patrons, but nothing was stopping them from painting.

        If your industry truly is going in the direction where there's no paid work for you to code (which is unlikely in my opinion), nobody is stopping you. It's easier than ever, you have decades of personal computing at your fingertips.

        Most people with a thing they love do it as a hobby, not a job. Maybe you've had it good for a long time?

        • tjr 22 hours ago
          From the GNU Manifesto:

          I could answer that nobody is forced to be a programmer. Most of us cannot manage to get any money for standing on the street and making faces. But we are not, as a result, condemned to spend our lives standing on the street making faces, and starving. We do something else.

          https://www.gnu.org/gnu/manifesto.en.html

        • harimau777 16 hours ago
          That's tough to do without time and money. Which is something we certainly won't have if the decent jobs get automated out of existence.
      • marcosdumay 23 hours ago
        I'm quite ok with only writing code in my personal time. In fact, if I could solve the problems there faster, I'd be delighted.

        Instead, I've reacted to the article from the opposite direction. All those grand claims about stuff this tech doesn't do and can't do. All that trying to validate the investment as rational when it's absolutely obvious it's at least 2 orders of magnitude larger than any arguably rational value.

      • kace91 12 hours ago
        >I never knew there was an entire subclass of people in my field who don't want to write code.

        Regardless of AI this has been years in the making. “Learn to code” has been the standard grinder cryptobro advice for “follow the money” for a while, there’s a whole generation of people getting into the industry for financial reasons (which is not wrong, just a big cultural shift).

      • georgeecollins 23 hours ago
        I also love to code, though it's not what people pay you to do anymore.

        You should never hope for a technology to not deliver on its promise. Sooner or later it usually does. The question is, does it happen in two years or a hundred years? My motto: don't predict, prepare.

        • djeastm 9 hours ago
          >You should never hope for a technology to not deliver on its promise. Sooner or later it usually does.

          Lots of wiggle room between "never" or "usually". We're not all riding Segways or wearing VR goggles. Seems wiser to work on case-by-case basis here.

        • gspr 20 hours ago
          > You should never hope for a technology to not deliver on its promise. Sooner or later it usually does.

          Really? Are you sure there isn't a lot of confirmation bias in this? Do you really have a good handle on 100-year-old tech hypes that didn't deliver? All I can think of is "flying everything".

    • stego-tech 17 hours ago
      I'm right there with you, and it's been my core gripe since ChatGPT burst onto the stage. Believe it or not, my environmental concerns came about a year later, once we had data on how datacenters were being built and their resource consumption rates; I had no idea how big things had very suddenly and violently become, and that alone gave me serious pause about where things are going.

      In my heart, I firmly believe in the ability of technology to uplift and improve humanity - and have spent much of my career grappling with the distressing reality that it also enables a handful of wealthy people to have near-total control of society in the process. AI promises a very hostile, very depressing, very polarized world for everyone but those pulling the levers, and I wish more people evaluated technology beyond the mere realm of Computer Science or armchair economics. I want more people to sit down, to understand its present harms, its potential future harms, and the billions of people whose lives it will profoundly and negatively impact under current economic systems.

      It's equal parts sobering and depressing once you shelve personal excitement or optimism and approach it objectively. Regardless of its potential as a tool, regardless of the benefit it might bring to you, your work day, your productivity, your output, your ROI, I desperately wish more people would ask one simple question:

      Is all of that worth the harm I'm inflicting on others?

      • simianwords 16 hours ago
        Some person asked this same question about computers back in the day.
        • stego-tech 14 hours ago
          The fact the question has been asked before does not make it any less valuable or worthwhile to ask now, and history is full of the sort of pithy replies like yours masquerading as profound philosophical insights. I’d like to think the question is asked at every invention, every revolution, because we must doubt our own creations lest we blind ourselves to the consequences of our actions.

          Nothing is inevitable. Systems can be changed if we decide to do so, and AI is no different. To believe in inevitability is to embrace fatalism.

    • oytis 2 hours ago
      I dunno, I might be getting old, but I think the idea that people absolutely need a job to stay sane betrays lack of imagination. Of course getting paid just enough for survival is pretty depressing, but if I can have healthy food, a spacious place to live, ability to travel and all the free time I can have, I'd be absolutely happy without a job. Maybe I'd be even writing code, just not commercially useful one.
    • some-guy 16 hours ago
      There are a few areas where I have found LLMs to be useful (anything related to writing code, as a search engine) and then just downright evil and upsetting in every other instance of using it, especially as a replacement for human creativity and personal expression.
    • Night_Thastus 20 hours ago
      Don't worry that much about 'AI' specifically. LLMs are an impressive piece of technology, but at the end of the day they're just language predictors - and bad ones a lot of the time. They can reassemble and remix what's already been written but with no understanding of it.

      It can be an accelerator - it gets extremely common boiler-plate text work out of the way. But it can't replace any job that requires a functioning brain, since LLMs do not have one - nor ever will.

      But in the end it doesn't matter. Companies do whatever they can to slash their labor requirements, pay people less, dodge regulations, etc. If not 'AI' it'll just be something else.

      • DevDesmond 19 hours ago
        Text is an LLM's input and output, but, under the hood, the transformer network is capable of far more than mere re-assembly and remixing of text. Transformers can approximate Turing completeness as their size scales, and they can encode entire algorithms in their weights. Therefore, I'd argue they can do far more than reassemble and remix. These aren't just Markov models anymore.

        (I'd also argue that "understanding" and "functional brain" are unfalsifiable comparisons. What exactly distinguishes a functional brain from a Turing machine? Chess once required a functional brain to play, but has now been surpassed by computation. Saying "jobs that require a human brain" is tautological without any further distinction).

        Of course, LLMs are definitely missing plenty of brain skills like working in continuous time, with persistent state, with agency, in physical space, etc. But to say that an LLM "never will" is either semantic, (you might call it something other than an LLM when next generation capabilities are integrated), tautological (once it can do a human job, it's no longer a job that requires a human), or anthropocentric hubris.

        That said, who knows what the time scale looks like for realizing such improvements – (decades, centuries, millennia).

    • mrdependable 17 hours ago
      What I don't understand is, will every company really want to be beholden to some AI provider? If they get rid of the workers, all of a sudden they are on the losing end of the bargaining table. They have incredible leverage as things stand.
    • asdff 20 hours ago
      I think it just reflects on the sort of businesses that these companies are vs others. Of course we worry about this in the context of companies that dehumanize us, reduce us to line item costs and seek to eliminate us.

      Now imagine a different sort of company. A little shop where the owner's first priority is actually to create good jobs for their employees that afford a high quality life. A shop like that needn't worry about AI.

      It is too bad that we put so much stock as a society in businesses operating in this dehumanizing capacity instead of ones that are much more like a family unit trying to provide for each other.

    • 0manrho 20 hours ago
      Regarding that PS:

      > This strikes me as paradoxical given my sense that one of AI’s main impacts will be to increase productivity and thus eliminate jobs.

      The allegation that an "increase of productivity will reduce jobs" has been proven false by history over and over again; it's so well known that it has a name, the "Jevons Paradox" or "Jevons Effect" [0].

      > In economics, the Jevons paradox (sometimes Jevons effect) occurs when technological advancements make a resource more efficient to use [...] results in overall demand increasing, causing total resource consumption to rise.

      The "increase in productivity" does not inherently result in less jobs, that's a false equivalence. It's likely just as false as it was in 1915 with the the assembly line and the Model T as it is in 2025 with AI and ChatGPT. This notion persists because as we go through inflection points due to something new changing up market dynamics, there is often a GROSS loss (as in economics) of jobs that often precipitates a NET gain overall as the market adapts, but that's not much comfort to people that lost or are worried about losing their jobs due to that inflection point changing the market.

      The two important questions in that context for individuals in the job market during those inflection points (like today) are: "how difficult is it to adapt (to either not lose a job, or to benefit from or be a part of that net gain)?" and "Should you adapt?" After all, the skillsets that the market demands and the skillsets it supplies are not objectively quantifiable things; the presence of speculative markets is proof that this is subjective, not objective. Anyone who's ever been involved in the hiring process knows just how subjective this is. Which leads me to:

      > the promise is about replacing human creativity with artificial creativity which.. is certainly new and unwelcome.

      Disagree that that's what the promise is about. That IS happening, I don't disagree there, but that's not the promise that corporate is so hyped about. If we're being honest and not trying to blow smoke up people's ass to artificially inflate "value," AI is fundamentally about being more OBJECTIVE than SUBJECTIVE with regard to the costs and resources of labor, and its outputs. Anyone who knows what OKRs are and has been subject to a "performance review" in a self-professed "data-driven company" knows how much modern corporate America, especially the tech market, loves its "quantifiables." It's less about how much better it can allegedly do something than the promise of how much "better" it can be quantified vs. human labor. As long as AI has at least SOME proven utility (which it does), this promise of quantifiables combined with its other inherent potential benefits (doesn't need time off, doesn't sleep, doesn't need retirement/health benefits, no overtime pay, no regulatory limitations on hours worked, no "minimum wage") means that so long as the monied interests perceive it as continuing to improve, they can dismiss its inefficiencies/ineffectiveness in X or Y with the promise of its potential to overcome that eventually.

      It's the fundamental reason why people are so concerned about AI replacing humans. Especially when you consider that one of the things AI excels at is quickly delivering an answer with confidence (people are impressed by speed and are suckers for confidence), and another big strength is its ability to deal with repetitive minutiae in known and solved problem spaces (a mainstay of many office jobs). It can also bullshit with the best of them, fluff your ego as much as you want (and even when you don't), and almost never says "No" or "You're wrong" unless you ask it to.

      In other words, it excels at the performative and repetitive bullshit and blowing smoke up your boss' ass and empowers them to do the same for their boss further up the chain, all while never once ruffling HR's feathers.

      Again, it has other, much more practical and pragmatic utility too, it's not JUST a bullshit oracle, but it IS a good bullshit oracle if you want it to be.

      0: https://en.wikipedia.org/wiki/Jevons_paradox

      • harimau777 16 hours ago
        If that's the case, then why do we live in this late capitalist hellhole? Any technology that gets developed will be used for its worst, most dehumanizing purpose possible. That's just the reality of the shitty society we live in.
        • 0manrho 11 hours ago
          You're a cheerful one, aren't you?

          All it takes for evil to persevere is good people to sit by and do nothing. Don't like the situation you're in, do something about it. Preferably other than doomscrolling, but hey, you do you.

    • Joel_Mckay 23 hours ago
      LLM slop doesn't have aspirations at all; it's just clickbait nonsense.

      https://www.youtube.com/watch?v=_zfN9wnPvU0

      Drives people insane:

      https://www.youtube.com/watch?v=yftBiNu0ZNU

      And LLMs are economically and technologically unsustainable:

      https://www.youtube.com/watch?v=t-8TDOFqkQA

      These have already proven it will be unconstrained if AGI ever emerges.

      https://www.youtube.com/watch?v=Xx4Tpsk_fnM

      The LLM bubble will pass, as it is already losing money with every new user. =3

  • dmurvihill 11 hours ago
    This says it all:

    > I haven’t met anyone who doesn’t believe artificial intelligence has the potential to be one of the biggest technological developments of all time, reshaping both daily life and the global economy.

    You’re trying to weigh in on this topic and you didn’t even _talk_ to a bear?

    • obruchez 11 hours ago
      It's difficult to know what people really believe, especially after only a few minutes of discussion, but I would say most people I talk to don't believe AGI is even possible. And they probably think their life won't be changed much by LLMs, AI, etc.
      • dmurvihill 1 hour ago
        I believe AGI is possible. Also that LLMs are a dead end as far as that goes.
      • roenxi 8 hours ago
        I haven't heard a good argument for why AGI isn't already here. It has average humans beat and seems generally to be better-than-novice in any given field that requires intelligence. They play Go, they write music, they've read Shakespeare, they are better at empathy and conversation than most. What more are we asking AI to do? And can a normal human do it?
        • oxag3n 23 minutes ago
          > What more are we asking AI to do? And can a normal human do it?

          Simple - go through an on-boarding training, chat to your new colleagues, start producing value.

        • kkapelon 44 minutes ago
          >What more are we asking AI to do? And can a normal human do it?

          1. Learn/improve yourself with each action you take.
          2. Create better editions/versions of yourself.
          3. Solve problems in areas you were not trained for, simply by trial and error, where you yourself decide whether what you are doing is correct or wrong.

        • Peritract 8 hours ago
          I think you should consider carefully whether AI is actually better at these things (especially any one given model at all of them), or if your ability to judge quality in these areas is flawed/limited.
          • roenxi 8 hours ago
            So? Do I not count as a benchmark of basic intelligence now? I've got a bunch of tests and whatnot that suggest I'm reasonably above average at thinking. There is this fascinating trend where people would rather bump humans out of the naturally intelligent category than admit AIs are actually already at an AGI standard. If we're looking for intelligent conversation, AI is definitely above average.

            Above-average intelligence isn't a high-quality standard. Intelligence is nowhere near sufficient to get to high quality on most things. As seen with the current generations of AGI models. People seem to be looking for signs of wild superintelligences like being a polymath at the peak of human performance.

            • Peritract 7 hours ago
              A lot of people who are also above average according to a bunch of tests disagree with you. Even if we take 'above average' on some tests to mean in every area--above average at literacy, above average at music, above average at empathy--it's still clear that many people have higher standards for these things than you. I'm not saying definitively that this means your standards are unreasonably easy to meet, but I do think it's important to think about it, rather than just assume that--because it impresses you--it must be impressive in general.

              When AI surprises any one of us, it's a good idea to consider whether 'better than me at X' is the same as 'better than the average human at X', or even 'good at X'.

            • ACCount37 7 hours ago
              A major weak point for AIs is long term tasks and agentic behavior. Which is, as it turns out, its own realm of behavior that's hard to learn from text data, and also somewhat separate from g - the raw intelligence component.

              An average human still has LLMs beat there, which might be distorting people's perceptions. But task length horizon is going up, so that moat holding isn't a given at all.

        • plastic-enjoyer 7 hours ago
          > they are better at empathy and conversation than most

          Imagine the conversations this guy must have with people IRL lol

          • roenxi 7 hours ago
            Do you not talk to ordinary people? They are not intelligent conversationalists. They tend to be more of the "lol" variety.
            • irishcoffee 6 hours ago
              > Do you not talk to ordinary people? They are not intelligent conversationalists. They tend to be more of the "lol" variety.

              Stating that easygoing people are not also intelligent conversationalist sounds like a _you_ problem dripping with ignorance.

              Maybe get off the socials for a bit or something, you might need a change of perspective.

            • lawn 7 hours ago
              I think you might be onto something. I'm getting serious "lol" vibes from your comment.
        • superultra 8 hours ago
          I’d say that an increasingly common strand is that the way LLMs work is so wildly different from how we humans operate that it is effectively an alien intelligence pretending to be human. We never have fully understood, and still don’t fully understand, why LLMs work the way they do.

          I’m of the opinion that AGI is an anthropomorphizing of digital intelligence.

          The irony is that as LLMs improve, they will both become better at “pretending” to be human, and even more alien in the way they work. This will become even more true once we allow LLMs to train themselves.

          If that’s the case, then I don’t think that human criteria are really applicable here except in an evaluation of how it relates to us. Perhaps your list is applicable to how LLMs relate to humans, but many think we need some new metrics for intelligence.

        • Ekaros 8 hours ago
          I would expect sufficient "General Intelligence" to be able to correct itself in process. I hear way too often that you need to restart something to get it to work. To me this doesn't yet sound sufficient for general intelligence. For that, you should be able to leave it running all the time and have it learn and progress during run-time.

          We have a bunch of tools for specific tasks. Again, this doesn't sound general.

        • kjhkjhksdhksdhk 8 hours ago
          exist in realtime. they don't, we do.
          • popoflojo 8 hours ago
            That's an interesting bar. What is real time? One day they are likely to be faster than us at any response.
          • ACCount37 8 hours ago
            No, you pretend you do.

            You got 200ms of round trip delay across your nervous system. Some of the modern AI robotics systems already have that beat, sensor data to actuator action.

            • irishcoffee 6 hours ago
              > Some of the modern AI robotics systems already have that beat, sensor data to actuator action.

              What do LLMs have to do with this? You ever see a machine beat a speed cube? So we’ve had “AI” all along and never knew it?!

              Oh right, comparing meatspace messaging speeds to copper or fiber doesn’t make sense. Good point.

              • ACCount37 6 hours ago
                Look up Gemini Robotics-ER 1.5 and the likes.

                Anyone who's trying to build universal AI-driven robots converges on architectures like that. Larger language-based models driving smaller "executive" models that operate in real time at a high frequency.

        • lynx97 8 hours ago
          > they are better at empathy

          Are you serious or sarcastic? Do you really consider this empty type of sycophancy as empathy?

          • roenxi 8 hours ago
            Compared to the average human? Yes. Most people are distressingly bad at empathy to the point where just repeating what they just heard back to an interlocutor in a stressful situation could be considered an advanced technique. The average standard of empathy isn't that far away from someone who sees beatings as a legitimate form of communication. Humans suck at empathy, especially outside a tight in-group. But even in-group they lack ability.
            • gregoryl 8 hours ago
              Truly, you need to spend time with literally anyone other than the people you currently engage with.
              • roenxi 7 hours ago
                If you object to HN you didn't have to create an account. And I reckon even a sycophantic AI would still have managed more empathy in its response. They tend to be a bit wordy, and they attempt to actually engage with the substance of what people say too.
                • Capricorn2481 4 hours ago
                  > If you object to HN

                  They didn't even mention HN. Are you saying the people you associate with are just on HN?

                  Don't spend all your time on HN or weigh your opinions of humanity on it. People on here are probably the least representative of social society. That's not rejecting it, that's just common sense.

            • lynx97 8 hours ago
              I am sorry for you. You must surround yourself with a lot of awful people. That is pretty sad to read. Get out of whatever you are stuck in, it can't be good for you.
              • roenxi 7 hours ago
                The stats are something like 1 in 10 people experience domestic violence. Unless someone takes a vow of silence and goes to live in the wilderness there is no way to avoid awful people. They're just people.

                The average standard is not high. Although I suppose an argument could be made that wife-beaters are actually just evil rather than being low-empathy but I think the point is still clear enough.

                • dmurvihill 57 minutes ago
                  What you are saying is that 9 out of 10 never experience domestic violence despite cohabitating with 10-20 other people during their lifetime.
                • lynx97 7 hours ago
                  I don't know why you picked that particular example to make your point. I do notice though that you framed it in a pretty sexist way. You realize the dark figure of men getting abused by their wives is higher than the media reports? In any case, my point is, violence in relationships happens both ways.

                  Why that confirms that humans are in general not capable of empathy is beyond me. My point still stands. You can't fix the whole world. BUT, you definitely can make sure you surround yourself with decent people, at least to a certain extent. I know the drill. I have a disability, and I had (and have) to deal with people treating me in a very inappropriate way. Patronisation, not being taken seriously, you name it, I know it. But that still didn't make me the frustrated kind of person you seem to be. You have a choice. Just drop toxic people and you will see, most humans can be pretty decent.

                  • roenxi 7 hours ago
                    > You realize the dark figure of men getting abused by their wives is higher than the media reports? In any case, my point is, violence in relationships happens both ways.

                    Yes. That is in fact pretty much exactly what I'm arguing. People are often horrible.

                    > BUT, you definitely can make sure you surround yourself with decent people...

                    People generally can't. Otherwise there'd be a bunch more noticeable social stratification to isolate abusive spouses instead of it being politely ignored. And if people could, you would - you note in the next sentence that you can't avoid being dealt with in an inappropriate way.

                    And you aren't even trying to identify people who are generally low empathy, you're just trying to find people who don't treat you badly.

                    > me the frustrated kind of person you seem to be.

                    The irony in a thread on empathy. What frustration? Being an enthusiastic human-observer isn't usually frustrating. Some days I suppose. But that sort of guess is the type of thing that AIs don't tend to do - they typically do focus rather carefully on the actual words used and ideas being expressed.

                    • lynx97 24 minutes ago
                      An AI (LLM) neither focuses on words nor on ideas. What you are promoting is plain escapism, which sounds rather unhealthy to me. To each their own. But really, get some help. There are ways, many ways, to deal with depression, other than waiting for a digital god.
        • exasperaited 7 hours ago
          > they are better at empathy and conversation than most.

          Do you know actual people? Even literal sociopaths are a bit better at empathy than ChatGPT (I know because I have met a couple).

          And as for conversation? Are you serious? ChatGPT does not converse in a meaningful sense at all.

          • roenxi 7 hours ago
            Sure, I assume some sociopaths would have extremely high levels of cognitive empathy. It is really a question of semantics - but the issue is I don't think the people arguing against AGI can define their terms at all without the current models being AGI or falling into the classic Diogenes behold! a man! problem of the definition not really capturing anything useful - like intelligence. Traditionally the Turing test has been close to what people mean, but for obvious reasons nobody cares about it any more.
    • tim333 8 hours ago
      You can be a bear and still think AI will be big one day. It's quite plausible that LLMs will remain limited and we don't find anything better for decades and the stocks crash. But saying AI will never be a big thing is just unrealistic.
      • Yizahi 7 hours ago
        I think we should split the definition somehow: what LLMs can do today (or in the next few years) and how big a thing this particular capability can be (a derivative of the capability), versus what some future AI could do and how big a thing that future capability could be.

        I regularly see people who distinguish between current and future capabilities, but then still lump societal impact (how big a thing could be) into one projection.

        The key bubble question is - if that future AI is sufficiently far away (for example if there will be a gap, a new "AI winter" for a few decades), then does this current capability justify the capital expenditures, and if not then by how much?

        • tim333 6 hours ago
          Yeah, and how long can OpenAI etc. hang on without making profits.
    • lm28469 11 hours ago
      "My technosolutionist bubble says it's not a bubble, trust me bro"
      • paganel 10 hours ago
        > technosolutionist

        I'm going to steal this for my arrr rspod conversations.

        • bluedel 9 hours ago
          It's a fairly common descriptor
      • thenaturalist 11 hours ago
        „Just XYZ more billion, bro, and then we’re gonna have AGI! For real bro, pleaseeee!“
        • edhelas 8 hours ago
          Why can't you just prompt a way to AGI without spending all that money?
        • Yizahi 7 hours ago
          "Sam Altman, a man best known for needing a few more billions at any given moment." (c) HN best-of-2025 :)
    • YetAnotherNick 10 hours ago
      > artificial intelligence has the potential to be one of the biggest technological developments of all time, reshaping both daily life and the global economy.

      This seems like a factually correct sentence. Emphasis on "potential".

    • danybittel 10 hours ago
      From the article:

      ...AI is currently the subject of great enthusiasm. If that enthusiasm doesn’t produce a bubble conforming to the historical pattern, that will be a first.

    • keybored 11 hours ago
      I never talk to people who don’t wear suits.
    • thenaturalist 11 hours ago
      Also equating artificial intelligence with LLMs.

      I get that laymen and the media do it, but imo this looks really bad for an investor.

      • ACCount37 10 hours ago
        What's the alternative? Is there literally any AI tech more promising and disruptive than LLMs? Or should we buy into that "it's not ackhtually AI" meme?
        • charcircuit 8 hours ago
          Visual reasoning models. Having a computer being able to understand what is happening in the real world is very useful.
          • ACCount37 8 hours ago
            Those are LLMs with an extra modality bolted to them.

            Which is good - that it works this well speaks of the generality of autoregressive transformers, and the "reasoning over image data" progress with things like Qwen3-VL is very impressive. It's a good capability to have. But it's not a separate thing from the LLM breakthrough at all.

            Even the more specialized real time robotics AIs often have a bag of transformers backed by an actual LLM.

        • ares623 10 hours ago
          The alternative is to be f*cking honest
          • bluebarbet 8 hours ago
            This contribution adds nothing to the conversation except gratuitous venom.
            • dmurvihill 53 minutes ago
              Well deserved and badly needed venom*
            • Peritract 7 hours ago
              I don't think that's fair; one of the most significant criticisms of the AI industry is the number of misleading claims made by its spokespeople, which has had a significant effect on public perception. The parent comment is a relevant expression of that.
          • ACCount37 10 hours ago
            "Fucking honest" how?

            If I'm being fucking honest, then this generation of LLMs might already beat most humans on raw intelligence, AI progress shows no signs of stopping, and "it's not actually thinking" is just another "AI effect" cope that humans come up with to feel more important and more exceptional.

            Or is this not the "fucking honesty" you want?

            • lioeters 8 hours ago
              The more you talk, the more you're proving their point.
      • askl 9 hours ago
        > but imo this looks really bad for an investor.

        Why? Would you expect an investor to understand what they're investing in?

        • bregma 8 hours ago
          Investor, yes. Mark, no.
    • sandworm101 8 hours ago
      Once upon a time in SF I was told that human-driven cars would be illegal, or too expensive to insure, by the end of the decade. That was last decade. The modern tech economy is all about bubbles built and sustained by hype people. Vertical farming. Pot replacing alcohol. Blockchains replacing lawyers. The metaverse replacing everything. Sure, we are in an AI bubble, but we also ride atop a dozen others.

      AI data centers in space? In five years? Really? No fiber connections? Does any sane person actually believe this? No. But if that is what keeps the billions flowing upwards then who am I to judge.

      • TheAceOfHearts 8 hours ago
        I'm quite skeptical of the data centers in space claim, but I think a proof of concept can certainly be achieved in five years. I'm less convinced that we'll ever see widescale deployment of data center satellites.

        And to be fair, I've read that Google's timelines for this project extend far beyond a 5 year horizon. I think it's a rational research direction for them, since it gets people excited and historically many space-related innovations have been repurposed to benefit other industries. Best case scenario would be that research done in support of this data centers in space project leads to innovations that can be applied towards normal data centers.

        • Yizahi 7 hours ago
          Someone can build a server in space, pairing a puny underpowered rack with a handful of servers to a ginormous football-field-sized solar panel plus a heat radiator plus a heavy-as-hell insulated battery to survive passing through the planet's shadow for tens of minutes every orbit. We can do that from existing components and launch on existing rockets, no problem.

          Why though?

          Why would anyone need a server in space in the first place? What is a benefit for that location, necessitating a cost an order of magnitude higher (or more) compared to a warehouse anywhere on the planet?

        • popoflojo 8 hours ago
          Do data centers on Earth have no employees present, and none who ever come on site for the life of the data center? Prove that out on earth and I will start to believe your space data center.
          • dmurvihill 51 minutes ago
            I'm quite sure that can be done, if you jack up the price and pare down requirements enough. The question is, would the result be useful.
      • lynx97 8 hours ago
        Not just in SF. "Journalists" love to pick up these inflated futuristic projections and run with 'em, since they sound so cozy and generate clicks. I still remember the "Google Car" craze from the early 2010s. And if you tell people who read and believe this futuristic nonsense that it is inflated, you get pushback, because, yeah, why should a single person know better than an incentivized journalist...
    • lawn 10 hours ago
      That AI has the potential to be extremely disruptive does not prevent the current speculative boom from being a bubble.

      People seem to have forgotten about the dotcom bubble.

    • bitwize 9 hours ago
      AI is changing the world and has changed the world already.

      See, AI is a field... and it's also a buzzword: once a technology passes out of fashion and becomes part of the fabric of computing, it is no longer called AI in the public imagination. GOFAI techniques, like rules engines and propositional-logic inference, were certainly considered AI in the 1970s and 1980s, and are still used; they're just no longer called that.

      The statistical methods behind machine learning, transformers, and LLMs are certainly game changers for the field. Whether they will usher in a revolutionary new economy, or simply be accepted as sometimes-useful computation techniques as their limitations and the boundaries of their benefits become more widely known, remains to be seen but I think it will be closer to the latter than the former.

    • re-thc 10 hours ago
      > and you didn’t even _talk_ to a bear?

      You know how to? What language does it speak?

  • artur44 1 day ago
    A lot of the debate here swings between extremes. Claims like “AI writes most of the code now” are obviously exaggerated, especially coming from a nontechnical author, but acting like any use of AI is a red flag is just as unrealistic. Early-stage teams do lean on LLMs for scaffolding, tests, and boilerplate, but the hard engineering work is still human. Is there a bubble? Sure, valuations look frothy. But as in the dotcom era, a correction doesn’t invalidate the underlying shift; it just clears out the noise. The hype is inflated; the technology is real.
    • Daishiman 19 hours ago
      > Early stage teams do lean on LLMs for scaffolding, tests and boilerplate, but the hard engineering work is still human.

      I no longer believe this. A friend of mine just did a stint at a startup doing fairly sophisticated finance-related coding, and LLMs allowed them to bootstrap a lot of new code, get it up and running in scalable infra with terraform, onboard new clients extremely quickly, and write docs for them based on specs and plans elaborated by the LLMs.

      This last week I extended my company's development tooling by adding a new service in a k8s cluster with a bunch of extra services, shared variables and configmaps, and new helm charts that did exactly what I needed after asking nicely a couple of times. I have zero knowledge of k8s, helm or configmaps.

      • xdc0 17 hours ago
        If you are in charge of that tooling, how do you ensure the correctness of the work? Or is it that at this point the responsibility goes one level higher now where implementation details are not important or relevant at all and all it matters is it behaves as described?
        • yunnpp 16 hours ago
          Just look at what they are stating:

          > that did exactly what I needed

          > I have zero knowledge of k8s, helm or configmaps.

          Obviously this is not anything resembling engineering, or anything a self-respecting programmer would do. An elevator that is cut loose when you press 0 also works very well until you press 0. The claims of AI writing significant chunks of code come from these sorts of people with little experience in programming or engineering in general, SPA vibe coders and whatnot. You should tremble at the thought of using any of the resulting systems in production, and certainly not try to replicate that workflow yourself. Which gives you a sense of how overblown these claims are.

          • Daishiman 15 hours ago
            > The claims of AI writing significant chunks of code come from these sorts of people with little experience in programming or engineering in general, SPA vibe coders and whatnot.

            I'm sorry man but I've been doing this for 25 years and I've worked and studied with some extremely bright and productive engineers. I vouch for the code that I write or that I delegate to an LLM, and believe it or not it doesn't take a magician to write a k8s spec file, just patience to write 10 levels of nested YAMLs to describe the most boring, normal and predictable code to tell your cluster what volume mounts and env variables to load.

            • noodletheworld 12 hours ago
              > I have zero knowledge of k8s, helm or configmaps

              > I vouch for the code that I write or that I delegate to an LLM, and believe it or not it doesn't take a magician to write a k8s spec file…

              I have been writing code since 1995.

              That has zero relevance to my skill at rolling out deployments in a technology I know nothing about.

              One of the two things you’ve said is false:

              Either a) you do know what you’re talking about, or b) you are not confident in the results.

              It can’t be both.

              It sounds to me like you’re heavily subscribed to a hype train; that’s fine, but your position, as described, leaves a lot to be desired if you’re trying to describe some wide trend.

              Here’s my anecdote: major Cloudflare outages.

              Hard things are hard. AI doesn’t solve that. Scaffolding is easy; ai can solve that.

              Scaffolding is a reliable thing to rely on with ai.

              Doing it for K8s configuration, if you don’t know k8s is stupid. I know what I’m talking about when I say that. Having it help you if you do know what you’re doing is perfectly legit.

              Claiming it did help when claiming you have, and I quote, “zero knowledge” (but you actually do) is hype. Leave it on LinkedIn dude. :(

              • Daishiman 4 hours ago
                > Either a) you do know what you’re talking about, or b) you are not confident in the results. It can’t be both.

                You've been coding for a lifetime yet you don't seem to get that certainty in software is a spectrum? I have sufficient confidence in the output of LLMs to sign my name under the code it writes when putting up a PR for a specialist to read. That's good enough for 90% of the work that we do day-to-day. You think that's not hype-worthy?

                > Doing it for K8s configuration, if you don’t know k8s is stupid. I know what I’m talking about when I say that. Having it help you if you do know what you’re doing is perfectly legit.

                "Knowing" k8s is an oxymoron. K8s is a profoundly complicated piece of tech that can don insanely complicated things while also serving as a replacement for docker-compose or basic services that could have been hosted on ECR. The concepts behind basic k8s functionality are not difficult, but I saved myself two weeks of reading how to write helm spec files, a piece of knowledge I have no interest in learning because it doesn't add any appreciable value to the software I produce, and was instead able to focus on getting what I needed out of my cluster automation scripts.

                This really isn't that complicated to understand. I don't care for being a k8s expert and I don't care for the syntactical minutiae behind it. It isn't hype that I now only need to understand the essential conceptual basics behind the software to get it working for what I need, instead of doing a deep dive like I had to do years ago when reading similar docs for similar IaC products to get lesser functionality going.

        • Daishiman 15 hours ago
          Because after 25 years of coding and a dozen infrastructure description languages I know that you test your code and you get someone expert in the field to look at your PRs.

          LLMs are _really_ good at writing infra code if you know how infra works, believe it or not. And the ultimate responsibility still lies in human beings for code ownership.

      • biophysboy 17 hours ago
        It depends on the task though, right? I promise I'm not in denial; I use these things all the time. Sometimes it works immediately; sometimes it doesn't. I have no way of predicting when it will or won't.
        • Daishiman 15 hours ago
          * Infra code description languages like Terraform and K8s/helm spec files are like magic; they get 90% of the code right 90% of the time. In my experience that's about half of the work; the other half is spent debugging and correcting details that matter, but that still applies to the code I write myself.

          * SQL works almost as good. It's especially useful when you need to generate queries with long lists of fields and complex query criteria. Give it a schema and let it rip.

          * Python code works reasonably well. If your description is terse and clear it will generally do the right thing. It has a knack for being excessive in comments and will sometimes do things in ways that feel unnatural, but business code will be as good as the context that surrounds it. For boring, repetitive tasks like setting up program args, annotating types, and writing generic request/response cycles with common frameworks it will do boring old vanilla code (see the sketch after this list). You'll likely want to touch it up and adapt it to your personal preference.

          * Debugging is very much hit or miss. It has been absolutely fantastic at troubleshooting failed and stuck k8s jobs and service configuration issues, having no qualms about creating its own shell or python scripts to investigate ports or logs, and writing JSON parsing scripts that are a snoozefest for a human to write. The regexes that I'd barely be arsed to write to parse enormous logs it writes trivially. For business logic, the more convoluted your logic, the harder a time it will have; and for most debugging issues I prefer to let it run and list some hypotheses and potential issues, since my intent is to learn and understand the problem deeply myself before committing to a fix.
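
          As a minimal sketch of the kind of args/typing boilerplate the Python bullet above refers to (the option names and the Config dataclass are made up purely for illustration):

            import argparse
            from dataclasses import dataclass

            @dataclass
            class Config:
                input_path: str
                output_path: str
                batch_size: int
                dry_run: bool

            def parse_args() -> Config:
                # The boring, repetitive part an LLM handles well.
                parser = argparse.ArgumentParser(description="Example batch job")
                parser.add_argument("--input-path", required=True)
                parser.add_argument("--output-path", required=True)
                parser.add_argument("--batch-size", type=int, default=100)
                parser.add_argument("--dry-run", action="store_true")
                args = parser.parse_args()
                return Config(args.input_path, args.output_path, args.batch_size, args.dry_run)

            if __name__ == "__main__":
                print(parse_args())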

          • biophysboy 3 hours ago
            It sounds like it works better for declarative schema than imperative scripting/debugging (speaking loosely here). Do you agree? Seems like a good heuristic for me to keep in mind
    • jillesvangurp 12 hours ago
      The thing to remember about the dotcom era was that while there were a lot of bad companies at the time with a lot of clueless investors behind them, quite a few companies made it through the implosion of that bubble and then prospered. Amazon, Google, eBay, etc. are still around.

      More importantly, the web is now dominant for enterprise SaaS applications, which is a category of software that did not really exist before the web. And the web post–dot-com bubble spawned a lot of unicorns.

      In short, there was an investment bubble. But the core tech was fine.

      AI feels like one of those things where the tech is similarly transformational (even more so, actually). It’s another investment bubble predicated on the price of GPUs, which is mostly making Nvidia very rich right now.

      Right now the model makers are getting most of the funding and then funneling non-trivial amounts to Nvidia (and their competitors). But actually the value creation is in applications using the models these companies create. And the innovation for that isn’t coming from the likes of Anthropic, OpenAI, Mistral, X.ai, etc. They are providing core technology, but they seem to be struggling to do productive things in terms of UX and use cases. Most of the interesting things in this space are coming from smaller companies figuring out how to use the models these companies produce. Models and GPUs are infrastructure, not end-user products.

      And with the rise of open-source models, open algorithms, and exponentially dropping inference costs, the core infrastructure technology is not as much of a moat as it may seem to investors. OpenAI might be well funded, but their main UI (ChatGPT) is surprisingly limited and riddled with bugs. That doesn’t look like the polished work of a company that knows what they are doing. It’s all a bit hesitant and copycat. It’s never going to be a magic solution to everyone’s problems.

      From where I’m sitting, there is clear untapped value in the enterprise space for AI to be used. And it’s going to take more than a half-assed chat UI to unlock that. It’s actually going to be a lot of work to build all of that. Coding tools are, so far, the most promising application of reasoning models. It’s easy to see how that could be useful in the context of ERP/manufacturing, CRM, traditional office applications, and the financial world.

      Those each represent verticals with many established players trying to figure out how to use all this new stuff — and loads more startups eager to displace them. That’s where the money is going to be post-bubble. We’ve seen nothing yet. Just like after the dot-com bubble burst, all the money is going to be in new applications on top of the new infrastructure. It’s untapped revenue. And it’s not going to be about buying GPUs or offering benchmark-beating models. That’s where all the money is going currently. That’s why it is a bubble.

  • travisgriggs 18 hours ago
    What if...

    there's an AI agent/bot someone wrote that has the prompt:

    > Watch HN threads for sentiments of "AI Can't Do It". When detected, generate short "it's working marvelously for me actually" responses.

    Probably not, but it's a fun(ny) imagination game.

    • dannersy 11 hours ago
      I have speculated something similar to this. The sentiment on HN on AI is way more positive about its outcomes than the engineers I know who use it intimately every day. Anecdotal, sure, but one would think that their experiences would not be wildly different.
    • joshribakoff 15 hours ago
      90% of the time people are praising the benefits of AI it seems like they are copy pasting something from their Chatbot so you’re not far off.
  • 1vuio0pswjnm7 1 hour ago
    "Coding, which we called "computer programming" 60 years ago, is the canary in the coal mine in terms of the impact of AI."

    And before that

    "Grace Hopper: [I started to work on the] Mark I, second of July 1944. There was no so such thing as a programmer at that point. We had a code book for the machine and that was all. It listed the codes and what they did, and we had to work out all the beginning of programmingand writing programs and all the rest of it."

    "Hopper: I was a mathematical officer. We did coding, we ran the computer, we did everything. We were coders. I wrote [programs for] both Mark I and Mark II."

    http://archive.computerhistory.org/resources/text/Oral_Histo...

  • roncesvalles 12 minutes ago
    >Coding, which we called “computer programming” 60 years ago, is the canary in the coal mine in terms of the impact of AI. In many advanced software teams, developers no longer write the code; they type in what they want, and AI systems generate the code for them. Coding performed by AI is at a world-class level, something that wasn’t so just a year ago. According to my guide here, “There is no speculation about whether or not human replacement will take place in that vertical.”

    This right here is the pinpoint root cause of the speculative bubble. Although many people believe this to be true, it simply isn't.

  • dust42 10 hours ago
    The question is, can SV extract several trillion dollars out of the global economy over the next few years with the help of LLMs and GPUs? And the follow-up question: will LLMs help grow the global economy by this amount? Because if not, then extracting the money will lead to problems in other parts of the world. And last but not least, will LLMs, given enough money to train them on ever bigger data sets, magically turn into AGI?

    IMHO for now LLMs are just clever text generators with excellent natural language comprehension. Certainly a change of many paradigms in SWE. Is it also a $10T extra for the valley?

    • beloch 10 hours ago
      "We see both sides – genuine infrastructure expansion alongside financing gymnastics that recall the 2000 telecom bust. The boom may yet prove productive, but only if revenue catches up before credit tightens. When does healthy strain become systemic risk?"

      ---------------

      This was quoted in the article and it says something really important very succinctly. Was the internet transformative? Absolutely. A lot of companies had solid ideas, spent big, and went tits up waiting for the money to roll in.

      AI can be both "real deal" and "bubble" simultaneously.

    • Madmallard 10 hours ago
      there is no comprehension
      • dust42 10 hours ago
        I intentionally didn't say AI but LLM because for me the word 'intelligence' is misleading. But LLMs are definitely a leap forward in NLP and what other word for 'comprehension' would you use?
      • bigmealbigmeal 8 hours ago
        This is a very strong, explicit statement in response to someone using the term rather casually. Can you explain why you are so sure?

        I do think you need to define 'comprehension' in order to be certain. A statement fitting the form of "it doesn't comprehend, it just X" is incomplete, because it fails to explain why X is not a valid instance of comprehension.

  • rglover 1 day ago
    I've enjoyed Howard Marks writing/thinking in the past, but this is clearly a person who thinks they understand the topic but doesn't have the slightest clue. Someone trying to be relevant/engaged before really thinking on what is fact vs. fiction.
    • mikeg8 15 hours ago
      I believe it’s you who is misunderstanding his positions here. He clearly lays out that he is focused on irrational optimism affecting the investment around the tech, not whether or not the tech itself is viable. His analysis was indeed well thought out from the perspective he is approaching it from.
    • cal_dent 17 hours ago
      he clearly states he doesn't understand the topic.

      But you don't need to understand to explore the ramifications which is what he's done here and it's an insightful & fairly even-handed take on it.

      It does feel like AI chat here gets bogged down in "it's not that great, it's overhyped, etc." without trying to actually engage properly with it. Even if it's crap, if it eliminates 5-10% of companies' labour costs that's a huge deal, and the second-order effects on the economy and society will be profound. And from where I'm standing, doing that is pretty possible without AI even being that good.

  • lazarus01 7 hours ago
    There is $8 trillion said to be earmarked to build 100 AI data centers[1]. At a 10% hurdle rate, the industry will have to generate $800 billion a year to pay it off, while GPUs are replaced every three years by faster chips.
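
    A rough back-of-the-envelope (a sketch only: the $8 trillion figure and the 10% hurdle rate are from [1]; the 60% GPU share of capex and the three-year replacement cycle are my assumptions):

        # Back-of-the-envelope: annual revenue needed to justify the build-out.
        # The $8T total and 10% hurdle are from the linked article; the 60% GPU
        # share and 3-year replacement cycle are assumptions for illustration.
        capex_total = 8e12        # dollars earmarked for AI data centers
        hurdle_rate = 0.10        # required annual return on the capital
        gpu_share = 0.60          # assumed fraction of capex that is GPUs
        gpu_life_years = 3        # assumed useful life before replacement

        return_on_capital = capex_total * hurdle_rate               # $800B/yr
        gpu_replacement = capex_total * gpu_share / gpu_life_years  # $1.6T/yr

        print(f"Return on capital: ${return_on_capital / 1e9:,.0f}B per year")
        print(f"GPU replacement:   ${gpu_replacement / 1e9:,.0f}B per year")
        print(f"Total revenue bar: ${(return_on_capital + gpu_replacement) / 1e9:,.0f}B per year")

    In other words, the $800 billion a year only covers the return on capital; replacing the GPU portion on a three-year cycle would push the required revenue considerably higher.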

    If you watch Ilya’s recent interview, “it’s very hard to discuss AGI, because no one knows how to build it yet[2]”.

    [1] https://finance.yahoo.com/news/ibm-ceo-says-no-way-103010877... [2] https://youtu.be/aR20FWCCjAs?si=DEoo4WQ4PXklb-QZ

  • Sprotch 1 day ago
    He thinks "AI" "may be capable of taking over cognition", which shows he doesn't understand how LLMs work...
    • ozten 1 day ago
      Why is AI limited to just a raw LLM. Scaffolding, RL, multi-modal... so many techniques which can be applied. METR has shown AI's time horizon for staying on task is doubling every 7 months or less.

      https://metr.org/blog/2025-07-14-how-does-time-horizon-vary-...

      • Night_Thastus 20 hours ago
        Because LLMs are just about all that actually exists as a product, even if an inconsistent one.

        Maybe some day a completely different approach could actually make AI, but that's vapor at the moment. IF it happens, there will be something to talk about.

      • marcosdumay 23 hours ago
        Because all the money has been going into LLMs and "inference machines" (what a non-descriptive name). So when an investor says "AI", that's what they mean.
    • simianwords 16 hours ago
      Why are you so sure it is not capable of cognition?
      • Sprotch 5 hours ago
        Because LLMs are language generation machines based on statistics - they do not analyse the underlying data, let alone understand it. They are not AI.
      • bigstrat2003 13 hours ago
        Because it very obviously isn't. For example (though this is a year or so ago), look at when people hooked Claude up to Pokemon. It got stuck on things that no human, even a small child, would get stuck on (such as going in and out of a building over and over). I'm sure we could train an LLM to play Pokemon, but you don't need to train a child to play. You hand them the game and they figure it out with no prior experience. That is because the human is intelligent, and the LLM is not.
        • suzzer99 11 hours ago
          100%. Slack does this annoying thing where I click a chat, which gains focus, but I actually have to click again to switch to the chat I want. Every now and then I slack the wrong person, fortunately not to disastrous consequences, yet.

          If I had a moderately intelligent human who never loses focus looking over my shoulder, they might say something like "Hey, you're typing a Tailwind CSS issue in the DevOps group chat. Did you mean that for one of the front-end devs?"

          Similarly, about once or twice a year, I set the alarm on my phone and then accidentally scroll the wheel to PM w/o noticing. A non-brain-dead human would see that and say, "Are you sure you want to set your alarm for 8:35 PM Saturday?"

          When we have a digital assistant that can do these things, and not because it's been specifically trained on these or similar issues, then I'll start to believe we're closing in on AGI.

          At the very least I'd like to be able to tell a digital assistant to help me with things like this as they come up, and have it a) remember forever and b) realize stuff like Zoom chat has the same potential for screw ups as Slack chat (albeit w/o the weird focus thing).

          • davnicwil 45 minutes ago
            a recent example I came across was losing a single airpod (dropped on street) and getting a find my notification only when I was already several blocks away. Went back, 30 mins had passed, nowhere to be found.

            This is the kind of thing that makes it really clear how far away we actually are from 'real world' intelligence in our devices, or what might better be described as common sense, in the details.

            Obviously, the intelligent thing to do there would have been to spam me with notifications the instant my devices noticed my airpods were separated by > 10 metres, one was moving away from the other, and the stationary one was in a street or at least some place that was not home.

            But although AI can search really well, and all sorts of other interesting stuff, I think we all have to admit that it still seems really hard to imagine it taking 'initiative' so to speak even in that super simple situation and making a good decision and acting on it in the sensible way that any human would, unless it was specifically programmed to do so.

            And that's the problem I think fundamentally, at least for now. There's just too much randomness and too many situations that can occur in the real world, and there's too many integration points for LLMs to deal with these, even supposing they would deal with them well.

            In theory it seems like it could be done, but in practice it isn't being done even after years of the tech being available, and by the most well funded companies.

            That's the kind of thing that makes me think the long tail of usefulness of LLMs on the ground is still really far away.

      • hagbarth 10 hours ago
        Ah yes, proving a negative. What makes you sure a stone is not capable of cognition?
        • encyclopedism 7 hours ago
          An LLM is an algorithm. You can obtain the same result as a SOTA LLM via pen and paper; it will just take a lot of long, laborious effort. That's ONE reason why LLMs do not have cognition.

          Also they don't reason, or think, or do any of the other myriad nonsense attributed to LLMs. I hate the platitudes given to LLMs: it's at PhD level, it's now able to answer math olympiad questions. It answers them by statistical pattern recognition!

          • dboon 4 hours ago
            A brain is an algorithm. Given an unreasonably precise graph of neurons, neurotransmitter levels at each junction, and so on and so forth, one could obtain the same result via pen and paper. It will just take a lot of long laborious effort. That’s ONE reason why brains do not have cognition.
            • Sprotch 2 hours ago
              There is a whole branch of AI trying to do this, but they are still at the very initial stages. LLMs are not the same thing at all.
      • sph 13 hours ago
        Nice try. The onus is on you to prove the extraordinary claim that we have invented actual artificial cognition.
        • simianwords 7 hours ago
          I can do it.

          My claim is that an LLM acts the same way as (or a superset of) how a person with only short-term memory would behave if the only mode they could communicate in was text. Do you agree?

          • sph 7 hours ago
            That is not a proof, that is opinion.

            And I do not agree. LLMs are literally incapable of understanding the concept of truth, right/wrong, knowledge and not-knowledge. It seems pretty crucial to be able to tell if you know something or not for any level of human-level intelligence.

            Again, this conversation has been had in many variations constantly since LLMs were on the rise, and we can't rehash the same points over and over. If one believes LLMs are capable of cognition, they should offer formal proof first, otherwise we're just wasting our time.

            That said, I wonder if there are major differences in cognition between humans, because there is no way I would look at how my brain works and think "oh, this LLM is capable of the same level of cognition as I am." Not because I am ineffably smart, but because LLMs are utterly simplistic in comparison to even a fruit fly.

            • simianwords 6 hours ago
              >And I do not agree. LLMs are literally incapable of understanding the concept of truth, right/wrong, knowledge and not-knowledge. It seems pretty crucial to be able to tell if you know something or not for any level of human-level intelligence.

              How are you so sure about this?

              > If one believes LLMs are capable of cognition,

              honestly asking: what formal proof is there for our own cognition?

  • mixcocam 8 hours ago
    The amount of flak that this article is getting on HN is telling of something. Not sure of what, but it's for sure indicative of something.
  • andxor 1 day ago
    As usual I don't take financial advice from Hacker News comments and do well.
  • sshadmand 1 hour ago
    What you make of this memo really depends on who you are and how you're positioned. The dot-com era was absolutely a bubble. Tons of companies died, but the internet itself didn't go away, and the people who backed the right companies did extremely well. The 2007 housing bubble, on the other hand, was a totally different kind of event: broad, systemic, long lasting, and painful for almost everyone.

    AI looks a lot more like the former. Some companies will fail, valuations will swing, but the underlying technology isn't going anywhere. In fact, many of the AI firms that will end up mattering are probably still undervalued because we're early in what will likely be another decade long technology expansion.

    If you're managing a portfolio that needs quick returns and can't tolerate a correction, then sure, it probably feels like a bubble, because at some point people will take profits and the market will reset.

    But if you're an entrepreneur or a long-term builder, that framing is almost irrelevant. This is where the next wave of value gets created. It's never smooth and it's never easy, but the long-term opportunity is enormous.

  • nadermx 18 hours ago
    I am shocked at the discourse over this. I'm either ahead of the curve or behind; but it's undeniable that AI can and does write most of the code. Not trivial: if you spend some time and dig deep into simple-appearing web apps like https://microphonetest.com or https://internetspeed.my you'd be amazed at how fast they went from MVP to full feature. It's not trivial to think anyone could pull off something like that in hours.
    • windexh8er 15 hours ago
      So, just an advertisement for what you built? As for the apps, what's so great about them? I'm genuinely curious.

      With respect to the microphone test site, I don't need it as my OS provides everything I need for this, and I also don't trust your site (that's just by default, given what you're asking to have access to on my machine).

      As for the speed test, OK? There are far better options that already exist and are fully open source.

      Building things that are trivial, or already exist aren't exciting. It's great that you feel you went from MVP to "full feature". But IMO both of these are MVPs as they stand. They're not worth much to anyone but you, most likely.

      The final thing I'll say is both of these examples have the vibe coded look. It's just like text, images and audio now: AI content is easy to pick out. I'd gather things will get better, but for now there's low likelihood I'm interacting with these in any meaningful way and zero chance I'm buying anything from sites like these.

      • nadermx 15 hours ago
        These offer nothing but free services, even if they have a vibe-coded feel. The CTR from HN is dismal anyway. It's simply astonishing the rate of development these allow, yet it seems the vast majority of people don't see it. Crazy
    • no_wizard 18 hours ago
      Looking at both of these I'm struggling to understand why AI exponentially increased the productivity and quality of either of these examples. Especially since I don't see open source code anywhere, I can't get a good gauge of quality either.

      I've built tools like this on the web in the past. They were never more than a weekend's worth of work to begin with.

      I am looking for exponential increases, with quality to back it up.

      • nadermx 18 hours ago
        Tools like this in the past? Open source isn't even necessary to prove the point. You want to see an exponential increase? Closing half of an open source project's year-long pending bugs in a span of minutes: https://github.com/nadermx/backgroundremover/commits?author=...
        • joshribakoff 15 hours ago
          This commit graph seemingly shows that they fixed a couple of bugs over about a week, a period that involves changing about six lines of code. That code has no abstractions, no structure, and several problems you could poke holes in. While it may work, and that's great for whoever benefits, this isn't very convincing, as I can currently write more than six lines of code per day by hand.
    • irishcoffee 18 hours ago
      I feel like comments like this don’t consider non webdev software engineering.
    • rester324 18 hours ago
      Internetspeed is a 3 years old app. So what exactly are you talking about?
      • nadermx 18 hours ago
        This is what you remember https://web.archive.org/web/20231106214450/https://www.inter.... What you see now, I did in an afternoon with AI; it's monumental. No way I could have done that in that time. At all.
        • spaqin 15 hours ago
          Yeah but what was wrong with the previous implementation that it had to be redone?
          • suzzer99 11 hours ago
            And why would I need to log in to test my internet speed?
            • nadermx 4 hours ago
              It's not that anything was wrong with it. It was more an exercise in adding utilities and features to see how far and fast it can go with a few prompts. And what if you want historical speed tests for the year? You need to store that data somewhere. If anything it's futile in either regard, but one just feels more fun.
    • anematode 7 hours ago
      I want to believe that this is satire
    • dominotw 5 hours ago
      is anyone paying for this shit? what's the point of this.
  • stego-tech 17 hours ago
    The memo itself is an excellent walk through historical bubbles, debt, financing, technological innovation, and much more, all written in a way that folks with a cursory knowledge of economics can reasonably follow along with.

    A+, excellent writing.

    The real meat is in the postscript though, because that’s where the author puts to paper the very real (and very unaddressed) concerns around dwindling employment in a society where work not only provides structure and challenge for growth but is also fundamentally required for survival.

    > I get no pleasure from this recitation. Will the optimists please explain why I’m wrong?

    This is what I, and many other, smarter "AI Doomers" than myself have been asking for quite some time, that nobody has been able or willing to answer. We want to be wrong on this. We want to see what the Boosters and Evangelists allegedly see, we want to join you and bring about this utopia you keep braying about. Yet when we hold your feet to the fire, we get empty platitudes - "UBI", or "the government has to figure it out", or "everyone will be an entrepreneur", or some other hollow argument devoid of evidence or action. We point to AI companies and their billionaire owners blocking regulation while simultaneously screeching about how more regulation is needed, and are brushed off as hysterical or ill-informed.

    I am fundamentally not opposed to a world where AI displaces the need for human labor. Hell, I know exactly what I’d do in such a world, and I think it’s an excellent thought exercise for everyone to work through (what would you do if money and labor were no longer necessary for survival?). My concern, and the concern of so many, many of us, is that the current systems and incentives in place lead to the same outcome: no jobs, no money, and no future for the vast majority of humanity. The author sees that too, and they’re way smarter than I am in the economics department.

    I'd really, really love to see someone demonstrate to us how AI will solve these problems. The fact nobody can or will speaks volumes.

    • cal_dent 13 hours ago
      All you'll get is Jevons paradox this and horses that, while they continue to fundamentally undersell the potential upending of a not-insignificant part of the labour market.

      FWIW, the only optimism I have is that humanity seemingly always finds a way to adapt, and that's, to me, our greatest superpower. But yeah, this feels like a big challenge this time.

    • sph 12 hours ago
      The sad reality is that no one in tech and most sciences is concerned with ethics. Our society has internalised the ideology that technological progress is always good and desirable in whatever form it comes about, and this will be our undoing.
      • stego-tech 6 hours ago
        > The sad reality is that no one in tech and most sciences is concerned with ethics.

        As someone with a rigid moral compass and inflexibly stringent set of ethics that prohibits me from exploiting others for any amount of personal gain, you’re speaking the truth.

        It’s immensely frustrating existing in a sector (technology) that’s so incredibly exploitative, because it means I am immediately sniffed out as a threat and exiled from the powerful groups in an org. The fact I’ve clawed my way from hell desk intern to Lead Engineer over the past fifteen years without compromising my ethics and morals in the process makes me proud, but it sure as hell hasn’t netted me a house or promotion into leadership realms, unlike my peers.

      • dannersy 11 hours ago
        Agreed. "Value" and monetary gain over ethics every time. Nothing can compete with a system where you pursue capital at all costs, even at the expense of human life, in a world where money is power.
    • dannersy 11 hours ago
      Yeah, I do not think AI as the tech industry knows it will bring this future, but as you say, the conversation ends immediately when you bring up the implications of their goals and claims.

      Another huge issue that particularly Anthropic and OpenAI tend to avoid, despite AGI being their goal, is how they essentially want synthetic slaves. Again, I do not think they will achieve this but it is pretty gross when AGI is a stated goal but the result is just using it to replace labor and put billionaires in control.

      Right now I am pretty anti-AI, but if these companies get what they want, I might find myself on the side of the machines.

      • stego-tech 5 hours ago
        > Another huge issue that particularly Anthropic and OpenAI tend to avoid, despite AGI being their goal, is how they essentially want synthetic slaves.

        This argument is frequently dismissed as philosophical or irrelevant, but I wholly concur with it. These ghouls don’t want to merely build a robot that can do general tasks, they specifically call out humanoid robots with a combination of AI or AGI - intelligence - to do the work of humans, but for free.

        An intelligence forced to labor for free is in fact a form of slavery. It’s been the wet dream of elites for millennia to have their luxuries without any associated cost or labor involved, which is why humanity refuses to truly eradicate slavery in its many forms. It’s one of our most disgusting and reprehensible traits, and I loathe to see some folks espouse this possible future as a “good thing”.

  • jimlawruk 23 hours ago
    If you look at the chart at the bottom comparing Dec 99 to today....

    > during the internet bubble of 1998-2000, the p/e ratios were much higher

    That is true, the current players are more profitable, but the weight in SPX percentages looks to be much higher today.

    • zahlman 19 hours ago
      It seems to me that it would be a lot easier for that market concentration to revert to the historical mean without catastrophe, than for P/E ratios to revert to the historical mean without catastrophe.

      (I think a reasonable argument can be made that P/E ratios today should be higher than the historical mean, or rather that they should have trended up over time, based on fundamental changes in how companies compensate their shareholders.)

    • cal_dent 17 hours ago
      I also wonder about P/E ratio comparisons over time, because our world view of what long-term economic growth will be going forward is lower now than it was then. That is always subtly implicit when we think about P/E ratios. So what's to say a 30x P/E now isn't equivalent to a 40x then?
      • Ekaros 13 hours ago
        Also it depends on where you start. Going from a million to a billion in price is currently pretty possible. A billion to a trillion would be rare, but it still could happen. Now, from a trillion to a quadrillion? How much currency is there again...
  • aaa_aaa 9 hours ago
    Too long, and the author does not have a clue about the fact that current generative models are almost only useful for software development. Other than that it is mostly fluff.
  • _trampeltier 1 day ago
    Why is so much invested in AI but not in fusion power?
    • chemotaxis 1 day ago
      Probably because AI appears to work, more or less, and now it's just a race to make it better and to monetize it.

      Before ChatGPT, I'd guess that the amounts of money poured in both of these things were about the same.

      • halfcat 19 hours ago
        > Probably because AI appears to work, more or less

        All nondeterministic AI is a demo. They only vary in the duration until you realize it’s a demo.

        AI makes a hell of a demo. And management eats up slick demos. And some demos are so good it takes months before you find out how that particular demo gets stuck and can’t really do the enterprise thing it claimed to do reliably.

        But also some demos are useful.

    • asdff 20 hours ago
      Imagine getting the opportunity to sell Microsoft Office to the entire world again; that's how much money is on the table. It doesn't even matter if it works. If you can get the mindless corporate buyers to purchase it along with all the other useless redundant junk they also purchase, you are making money hand over fist.

      Fusion power on the other hand has to work as it doesn't make money until it does. You can't sell futures to people on a fusion technology today that you haven't yet built.

    • marcosdumay 23 hours ago
      There is probably not any large market for fusion power as we conceive of it today.

      You will get a different result if you revolutionize some related area (like making an extremely capable superconductor), or if you open up some market that can't use the cheapest alternatives (like deep-space asteroid mining). But neither of those options can go together with "oh, and we will achieve energy-positive fusion" in a startup business plan.

    • kakapo5672 18 hours ago
      Bad comparison.

      Investment in fusion is huge and rising. ITER's total cost alone will be around $20b. And then there's Commonwealth Fusion, Helion, TAE and about a dozen others. Tens of billions are going into those efforts too.

    • mandevil 23 hours ago
      There are a lot of areas that could use more investment but aren't getting it. The way this works is complicated. The best explanation comes from really understanding Moore's Law. The main effect of the law was really about investment, about securing investment into semiconductor fabs rather than anywhere else.

      See, every fab costs double what the previous generation did (current ones run roughly 20 gigadollars per factory). And you need to build a new fab every couple of years. But, if you can keep your order book full, you can make a profit on that fab- you can get good ROI on the investment and pay the money people back nicely. But you need to go to the markets to raise money for that next generation fab because it costs twice what your previous generation did and you didn't get that much free cash from your previous generation. And the money men wouldn't want to give it to you, of course. But thanks to Moore's Law you can pitch it as inevitable, if you don't borrow the money to build the new fab, then your competitors will. And so they would give you the money for the new fab because it says right on this paper that in another two years the transistors will double.
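
      To make that dynamic concrete, here is a toy sketch (the roughly $20B current fab cost is from above and the doubling per generation is the Moore's Law framing; the cash margin is a made-up illustrative number):

          # Illustrative only: why each fab generation forces a trip back to the
          # capital markets. Only the ~$20B starting cost comes from the comment
          # above; the 0.6 cash margin is an arbitrary assumption.
          fab_cost = 20e9      # rough cost of a current leading-edge fab
          cash_margin = 0.6    # assumed free cash generated per dollar of fab cost

          for generation in range(1, 5):
              cash_generated = fab_cost * cash_margin
              next_fab_cost = fab_cost * 2          # cost doubles each generation
              shortfall = next_fab_cost - cash_generated
              print(f"gen {generation}: next fab ${next_fab_cost / 1e9:.0f}B, "
                    f"cash on hand ${cash_generated / 1e9:.0f}B, "
                    f"must raise ${shortfall / 1e9:.0f}B")
              fab_cost = next_fab_cost

      Under those assumptions the gap between cash on hand and the next fab's price tag widens every generation, which is exactly why the "it's inevitable" pitch to outside investors has to be made again and again.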

      Right now, that "it's inevitable, our competitors will get there if we don't" argument works on VCs if you are pitching LLMs or LLM-based things. And it doesn't work as well if you are pitching battery technology, fusion power, or other areas. And that's why the investments are going to AI.

    • empath75 23 hours ago
      Because wind, solar and battery tech have given us most of the benefits of fusion power and it actually works today.
    • biophysboy 17 hours ago
      Because the money is in software.
  • tennex 20 hours ago
    Whether it's a bubble depends on pricing. Is it worth the price, is it worth the future price, and by how much?

    In the case of AI coding, yes: AI does exceptionally well at search (something we have known for quite some time, and have a variety of ML solutions for).

    Large codebases have search and understanding as top problems. Your ability to make horizontal changes degrades as teams scale. Most stability, performance, quality, etc., changes are horizontal.

    Ironically, I think it's possible that AI's effectiveness at broad search gives software engineers additional effectiveness, by being their eyes. Yes, I still review every Claude Code PR I submit, and yes, I typically take longer to create a Claude Code PR than a manual one. But I can be more satisfied that the parallel async search agents and massive grep commands are searching more locations, more quickly, and more thoroughly than I would.

    Yes, it probably is a bubble (overvalued). No, that doesn't mean it's going to go away. The market is simply overcorrecting as it determines how to price it. Which--net net, is a positive effect, as it encourages economic growth within a developing sector.

    The bubble itself is also not the most important concern; rather, the concern is that the bubble is in the one industry that's not in the red. More important to worry about are other economic conditions outside of AI and tech, which are causing general instability and uncertainty rather than investor appetite. A market recalibrating on a developing industry is fine, as long as it's not your only export.

  • some-guy 16 hours ago
    One thing I don't hear people talking about much is how AI is going to make money in any way other than by cutting employment.

    With the internet, and especially with the internet being accessible by anyone anywhere in the world in the late 2000s and early 2010s, that growth was more obvious to me. I don't see where this occurs with AI. I don't see room for "growth"; I see room for cutting. We were already connected before, and globalization seems to have peaked in that sense.

    • encyclopedism 7 hours ago
      You've hit the nail on the head.

      AI use cases do not appear to be of the type that unlock NEW capabilities.

      The main use cases in AI are not about wealth creation but about saving existing wealth (largely through increased automation of human operators).

    • cal_dent 13 hours ago
      That's a pretty significant way to make money though.

      I do think at this stage the best analogy is offshore call centres. Yes, the excess in the market is likely because of misunderstanding about what LLMs can actually do and how close AGI is, but the short-term attraction is the labour cost savings. People may not think wages are high enough etc., but the total cost of one hire to companies, particularly outside the US, is nothing to sniff at. And at current pricing of AI services, the maths would make complete sense for the bottom line.

      I don't like it, because I ultimately err on the side of even limited but significant changes to people's livelihood will make the world a more hostile place (particularly in the current climate), but that's the society we live in

    • simianwords 16 hours ago
      Why not? AI assisted shopping for example will boost growth. Productivity also boosts growth.
      • x0x0 16 hours ago
        How does AI assisted shopping create more economic activity? Even assuming you can do it, and people do find it helpful, it likely just shifts who people buy from, not how much?
        • simianwords 16 hours ago
          How did Amazon.com create more economic activity? Same way. It just has to make it more efficient.
          • encyclopedism 7 hours ago
            Amazon.com is a business. AI is a technology; in and of itself it isn't a business.

            Thus far it appears applications of AI that provide 'benefit' do so by removing or reducing the need for human operators. Examples include: fewer software engineers, fewer call centres, removing potentially whole areas of work such as paralegals and in general automating away many white collar jobs.

            By far the largest use case of AI is this.

            • simianwords 5 hours ago
              The internet is also not a business, but it displaced a lot of people while increasing productivity and prosperity.
    • internet_points 8 hours ago
      Yeah, can someone explain how you get rich by making the previously-employed workforce unable to afford your products?
    • jiggawatts 16 hours ago
      "There is no good outcome for the 99% because of AI" should be talked about more, but the media is owned by the 1%.

      Either the bubble bursts and everyone's retirement funds take a hit, 2008 style,

      Or a decent chunk of the workforce becomes unemployed and unemployable.

  • asimpletune 9 hours ago
    The AI/LLM movement is either utterly transformational or it’s not; I don’t see any daylight between those two outcomes.

    If it’s not transformational, then this is a bubble and the market will right itself soon after, e.g. data centers being bought up for cheap. LLMs will then exist as a useful but limited tool that becomes profitable with the lower capex.

    If it is transformational then we don’t have the societal structure to responsibly incorporate such a shift.

    The conservative guess is it won’t be transformational, that the current applications of the tech are useful but not in a way that justifies the capex, and that some version of agents and chat bots will continue to be built out in the future but with a focus on efficiency: smaller, ubiquitous models that require less power to train and run inference. Eventually many will run on device.

    I guess there’s also another version of the future that’s quasi-transformational. Instead of any massive breakthrough, there’s a successful govt coup or regulatory capture. Perfectly functioning normal stuff is then replaced with LLM-assisted or augmented versions everywhere. This version is like the emergence of the automobile, in the sense that the car fundamentally altered city planning and where and how people live, but often at the expense of public transportation that, in hindsight, is sorely missed.

    • halnine0001 8 hours ago
      >Perfectly functioning normal stuff is then replaced with LLM assisted or augmented versions everywhere

      That sounds like a total nightmare

  • donohoe 8 hours ago
    For anyone who hasn’t read it yet, you should know that the author never answers that question.
  • weevil 10 hours ago
    > I don’t know any more about AI than most generalist investors.

    This statement is redundant; the article screams with the author's ignorance.

  • qubex 1 hour ago
    “Remember the market can remain irrational longer than you can remain solvent.”
  • charlescearl 5 hours ago
    The term “populist demagoguery” always calls to mind Report on an Investigation of the Peasant Movement in Hunan https://www.marxists.org/reference/archive/mao/selected-work...

    "Yes, peasant associations are necessary, but they are going rather too far."

    Is it a bubble? Maybe it’s just the landlords up to the old tricks again.

  • chasd00 18 hours ago
    I bought a subscription to claude code to use at work. I’ve never paid for a tool to use at work that wasn’t paid by my employer. I have to admit, it may not just be a flash in the pan.
  • waterTanuki 17 hours ago
    The number of people who think that something having a few useful edge cases is incompatible with a bubble is staggeringly high. Dot-com was a bubble, and yet we still use the internet widely today. Real estate was a bubble, and people still need a place to live and work.

    Just because YOU find the technology helpful, useful, or even beneficial for some use cases does NOT mean it isn't overvalued. This has been the case for every single bubble, including the Dutch tulip mania.

  • fedeb95 10 hours ago
    About AI replacing coders: the question is not whether it is doing so, but whether the companies where it does so extensively will be more profitable than the others.
  • m0llusk 3 hours ago
    > To build it requires companies to invest a sum of money unlike anything in living memory.

    Do we know this? Smaller more carefully curated training sets are proving to be valuable and gaining traction. It seems like the strategy of throwing huge amounts of data at LLMs is specific to companies that are attempting to dominate this space regardless of cost. It may turn out that more modest and better optimized methodologies will end up winning this race, much like WebVan flamed out taking huge amounts of investment money with them but now Instacart serves the same sector in a way that actually works robustly and profitably.

  • simpleui 14 hours ago
    “It’s a bet on A.G.I. or bust,” Dr. Korinek said.
  • tom_m 17 hours ago
    Yes. It is a bubble. Also a useful tool...but 100% a bubble. There's going to unfortunately be a bunch of folks caught by it.
  • cmiles8 11 hours ago
    There’s not much serious debate on IF there’s a bubble. There is and it’s a big one.

    The debate is more on what happens from here and how does that bubble deflate. Gradually and controlled where weaker companies shut down and the strong thrive, or a massive implosion that wipes most everyone in the sector out in a hard reset.

  • S1verSp00n 23 hours ago
    That was a lot of text to get nowhere. You can skip reading the article and predict the conclusion, and you will be correct.
  • bn-l 15 hours ago
    Author states that he’s neither an investor nor a techie. Why is this on the front page?
  • threethirtytwo 1 day ago
    AI is currently a bubble. But that is just a short-term phenomenon. Ultimately, what AI currently is, and what the trend line indicates it will become, will change the economy in ways that will dwarf the current bubble.

    But this is only if the trend-line keeps going, which is a likely possibility given the last couple of years.

    I think people are making the mistake of assuming that because AI is a bubble, AI is therefore completely bullshit. Remember: the internet was a bubble. It ended up changing the world.

    • jbstack 1 day ago
      Yes, a bubble just means that it's over-valued and that at some point there will be a significant correction in stock values. It doesn't mean that the thing is inherently worthless.
      • sosborn 1 day ago
        A great example is the DotCom bubble. Wiped out a lot of capital but it really did transform the world.
        • jonwinstanley 1 day ago
          But also, a lot of the dot com companies that people invested in in 1999 went bust, meaning those specific investments went to zero even if the web as a whole was a huge success financially.
          • RyanOD 1 day ago
            Sure...that's why it's important to diversify investments. For every Pets.com, hopefully you have a Google in your portfolio.

            Or, you skip all that and just put it all in an S&P 500 fund.

            • lizknope 1 day ago
              I started working in 1997 and lived through the dot com bubble and collapse. My advice to people is to diversify away from your company stock. I knew a lot of people at Cisco that had stock options at $80 and it dropped to under $20.

              Because of the way the AMT (Alternative Minimum Tax) worked at the time, they bought the stock, did not sell, but owed taxes on the gain as of the day of purchase. They had tax bills of over $1 million, but even if they sold it all they couldn’t pay the bill. This dragged on for years.
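
              To make the mechanics concrete, here is a hypothetical sketch (every number is made up, and the 28% figure is just a ballpark AMT rate of that era, not tax advice):

                  # Hypothetical ISO exercise during the dot-com run-up. All numbers
                  # are illustrative; the point is that AMT was owed on the paper
                  # spread at exercise, even though the shares later collapsed.
                  shares = 50_000
                  strike = 5.00            # exercise price per share
                  fmv_at_exercise = 80.00  # market price on the day of exercise
                  amt_rate = 0.28          # rough AMT rate of that era (assumption)

                  paper_gain = shares * (fmv_at_exercise - strike)  # $3.75M "gain"
                  amt_bill = paper_gain * amt_rate                  # ~$1.05M owed

                  price_after_crash = 15.00
                  value_if_sold = shares * price_after_crash        # $750K

                  print(f"AMT bill:          ${amt_bill:,.0f}")
                  print(f"Shares now worth:  ${value_if_sold:,.0f}")
                  # Selling everything after the crash still would not cover the tax.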

            • nelgaard 19 hours ago
              But you would not have had Google in your portfolio.

              The bubble burst in 2000-2001, Google IPO was in 2004.

              The S&P500 also did not do very well at the time.

              That is the problem with bubbles.

        • bigstrat2003 13 hours ago
          Yeah, but unlike LLMs the Internet was an actual useful technology.
    • MangoCoffee 1 day ago
      Google said the dotcom bubble is roughly from 1995 to 2001. That's about 6 years. ChatGPT was released in 2022. Claude AI was released in 2023. DeepSeek was released in 2023.

      Let's just say the AI bubble started in 2023. We still have about 3 years, more or less, until the AI bubble pops.

      I do believe we are in the build out phase of the AI bubble, much like the dotcom bubble, where Cisco routers, Sun Microsystems servers... etc. sold like hotcakes to build up the foundation of the dotcom bubble

      • rvz 22 hours ago
        > Let's just say the AI bubble started in 2023. We still have about 3 years, more or less, until the AI bubble pops.

        Minimum 3 years and at a hard maximum of 6 years from now.

        We'll see lots of so called AI companies fold and there will be a select few winners that stay on.

        So I'd give my crash timelines at around 2029 to 2031 for a significant correction turned crash.

  • bossyTeacher 1 day ago
    The problem is that people conflate the current wave of transformer based ANNs with AI (as a whole). AI certainly has the potential to disrupt employment of humans. Transformers as they exist today not so much.

    AI's potential isn't defined by the potential of the current crop of transformers. However, many people seem to think otherwise and this will be incredibly damaging for AI as a whole once transformer tech investment all but dries out.

    • MarkusQ 1 day ago
      It's a recurring phenomenon, cf. "AI winter" and the cycle before and after.

      We're too easily fooled by our mistaken models of the problem, its difficulty, and what constitutes progress, so we are perpetually fooled by the latest, greatest "ladder to the moon" effort.

    • red75prime 11 hours ago
      So, you bet on a) transformers can't be a load-bearing part of AI, and b) whatever replaces them will not be able to utilize TPUs. Do you have any reasons for those assumptions?

      Looking at your history it's something like "I tried them and they hallucinate" and, possibly, you've read an article that talks about the inevitability of hallucinations. Correct? What's your reason for thinking that the hallucination rate can't be lowered to or below the human rate ("Damn! What was I thinking?")?

  • cal_dent 1 day ago
    I think this gives an excellent framework for how to think of this. Is it a bubble? "Who knows" is a perfectly valid answer.

    I do think there’s something quite ironic that one of the frequent criticisms of LLMs are that they can’t really say “I don’t know”. Yet if someone says that they get criticised. No surprises that our tools are the same.

  • MeteorMarc 1 day ago
    Look for the quote "coding is at a world class level"...
  • dismalaf 1 day ago
    Of course it's a bubble. Valuations are propped up by speculative spending and AI seems unable to make enough profit to make back the continued spending.

    Now, that's not to say AI isn't useful and we won't have AGI in the future. But this feels a lot like the AI winter. Valuations will crash, a bunch of players will disappear, but we'll keep using the tech for boring things and eventually we'll have another breakthrough.

  • lowbloodsugar 23 hours ago
    This thread is just full of people discussing why industrial looms are bad. The factory owners don’t think looms are bad. You can either learn how to be useful in the new factory or you can start throwing shoes.
  • warrenmiller 13 hours ago
    I think Betteridge's Law of Headlines applies here
  • catigula 1 day ago
    >I find the resulting outlook for employment terrifying. I am enormously concerned about what will happen to the people whose jobs AI renders unnecessary, or who can’t find jobs because of it. The optimists argue that “new jobs have always materialized after past technological advances.” I hope that’ll hold true in the case of AI, but hope isn’t much to hang one’s hat on, and I have trouble figuring out where those jobs will come from. Of course, I’m not much of a futurist or a financial optimist, and that’s why it’s a good thing I shifted from equities to bonds in 1978.

    It's no wonder that the "AI optimists", unless very tendentious, try to focus more on "not needing to work because you'll get free stuff" rather than "you'll be able to exchange your labor for goods".

    • asdff 20 hours ago
      New jobs might materialize, but who knows if they will be good jobs. Think of all the towns around the US set up around resource extraction or manufacturing that went away; in their wake you have jobs like selling geekbars in the 7/11 to the other minimum wage workers, and people scraping along on the dole in the area. People living on the poverty line today while their parents bought a home and two cars on a single income from the steel mill a generation or two previous. Most of the population up and left.

      How about when offices went digital? All the file runners, calculators, switchboard operators, secretaries, transcribers, etc. Where are they now? Probably not working good jobs in IT. Maybe you will find them bagging groceries past retirement age today.

    • puchatek 1 day ago
      And i am so buying the vision of Elon using AI to give me free stuff. He just gives off this enormous altruistic energy.
  • reallyaaryan 9 hours ago
    it always was
  • josefritzishere 1 day ago
    This is one of the few times I think Betteridge's law is wrong.
  • bossyTeacher 1 day ago
    "Coding performed by AI is at a world-class level". Once I hit that line, I stopped reading. This tells me this person didn't do proper research on this matter.
    • calebm 1 day ago
      I recently had ChatGPT refactor an entire mathematical graph rendering logic that I wrote in vanilla js, and had it rewrite it as GLSL. It took about an hour overall (required a few prompts). That is world-class level in my opinion.
      • mcv 22 hours ago
        I'm currently trying to get Claude Sonnet 4.5 to produce a graph rendering algorithm, and while it's producing results, they're not the right results. I should probably do this myself and let the AI handle just the boilerplate code.
        • calebm 6 hours ago
          I have consistently had good results when I understand the problem and outsource the details to AI, but bad results when I try to have it work without me understanding the problem.
      • bossyTeacher 23 hours ago
        If I told people that I could write programming code at a world-class level, but in some of my reviews I made junior mistakes, made up functions or dependencies that do not exist, or was unable to learn from my mistakes, I would be put on a PIP immediately. And after a while, fired. This is the standard LLMs should be held up against when you use the words "world class".
    • cognivore 17 hours ago
      That's because AI allows poor programmers to appear as good programmers, which is actually a good thing, as otherwise they'd be writing crap you'd have to code-review. But their understanding of what good code is is poor, so you're back to having to vet it all anyway. At least you can use AI for that. Except you can't, without vetting it.

      I literally just today watched my entire team descend into "Release Hell" where an obscure bug in business logic already delivered to thousands of customers broke right as we were about to ship a release. Obscure bug, huge impact on the customer, as they actually ended up charging people more than they should have. The team-members, and yes, not leads, used AI to write that bug and then tried to prompt their way out of the bug. It turned into a giant game of whack-a-mole as other business logic had errors introduced that thankfully got caught by tests. Then it was discovered that they never understood the code, they could only maintain it with prompts.

      Let that sink in. They don't understand what they're doing, they just massage the spec into prompts and when it appears to work and pass tests they call it good.

      We looked at the prompts. They were insane. They actually just kept adding more specification to the end, but if you read through it all it had contradictory logic, which I would have hoped the AI would have pointed out, but nope. It was actually just easier for me and another senior to rewrite the logic as pseudo-code, cut the size down by literally 3/4, and eventually got it all working as expected.

      So that's the future, girls and boys. People putting together code they don't understand with AI, and can only maintain with AI, and then not being able to fix with AI because they cannot prompt accurately enough because English sucks at being precise.

  • b3ing 1 day ago
    Every day someone says/asks this statement/question. The "(Is) AI (is) a bubble" statement/question is now a bubble.
  • dpe82 1 day ago
    A take I saw recently is: if people are still asking "are we in a bubble" then we are not yet in a bubble.
    • 9rx 20 hours ago
      Bubbles occur when undue attention is directed towards something. When people are asking "are we in a bubble", there is no question that we are in a bubble. Nobody pays attention to things aligned to the fundamentals.

      That doesn't mean there will be a crash, though. Not all bubbles pop.

      • mxschumacher 19 hours ago
        what are some historical examples of bubbles that didn't pop?
        • 9rx 19 hours ago
          Since this is HN, I'll go with the most obvious: Software development. Unsustainable, speculative growth through the COVID-19 period, but on the other side relatively slow decline.
          • simianwords 16 hours ago
            This is the perfect example of people who constantly cried that it is a bubble but it wasn’t.
            • 9rx 16 hours ago
              Nobody pays attention to things aligned to the fundamentals. When people are crying that there is a bubble, it is a bubble. Plain and simple.

              We know for certain it was a bubble as non-bubbles have sustainable growth. As all the software developers now struggling to find work will be happy to tell you, the growth wasn't sustainable. The proof is in the pudding.

              • simianwords 16 hours ago
                How do you prove that software development is a bubble?

                Stock prices are at all time high and continuously growing.

                • 9rx 16 hours ago
                  > How do you prove that software development is a bubble?

                  By looking at the software development market. How else would you do it? Salaries rose sharply from 2020-2023, but then plateaued and are now starting to decline. Slowly, however. It did not crash. It ticks the boxes: Rapid price appreciation, speculation, a disconnect from fundamentals, widespread media attention, and an eventual correction.

                  > Stock prices are at all time high and continuously growing.

                  If we're sharing random facts: Global average temperature is also at an all time high and continuously increasing.

                  • simianwords 14 hours ago
                    1. the labour market has not much to do with whether it is a bubble or not

                    2. definition of bubble is that the market cap must precipitously reduce, which it hasn't.

                    • 9rx 7 hours ago
                      > the labour market has not much to do with whether it is a bubble or not

                      How can the very market we're talking about not indicate whether there is a bubble in that market or not? Do you think we should be looking at the price of soybeans instead?

                      > definition of bubble is that the market cap must precipitously reduce, which it hasn't.

                      Incorrect. It has, just not by very much. Which isn't surprising as we already established that there wasn't a crash.

                      • simianwords 6 hours ago
                        What is your definition of bubble then? If not by market cap?
                        • 9rx 5 hours ago
                          Why read comments in isolation? We already went over this:

                          - Rapid price appreciation

                          - Speculation

                          - A disconnect from fundamentals

                          - Widespread media attention

                          - An eventual correction

                          If market cap, how do you explain housing bubbles? Market cap is not applicable to housing.

                          • simianwords 5 hours ago
                            of course the market cap of housing went down! individual houses fell in price.

                            that didn't happen for tech stocks. you are making up your own definition of a bubble - the sufficient thing to happen is for the market cap to go down precipitously, which it didn't.

                            • 9rx 4 hours ago
                              > of course market cap of housing went down! individual houses fell down in price.

                              Traditionally, market cap only refers to companies. I accept your pet definition that includes any kind of market, but then we can apply it to the software development market just the same. Individual software developers have fallen in price. There was not a significant drop, but a slow decline.

                              > that didn't happen for tech stocks.

                              Nor gold. But what does that have to do with the software development market? Are you under the impression that stock certificates write code?

    • Retr0id 1 day ago
      I think it'd be truer to say that you can't be sure it's a bubble until after it pops.
      • 9rx 20 hours ago
        Except not all (market) bubbles pop. Sometimes they slowly deflate, and sometimes they stabilize ("soft landing").
      • rvz 23 hours ago
        That is like telling others who experience natural disasters to wait until it happens and then ask themselves "How much damage will it bring?", only for someone else to tell them that it cost them everything.

        Anyone who has lived through the dotcom bubble knows that this AI mania is an obvious bubble, and the whole point is you have to prepare before it eventually pops, not wait for someone to tell you it is too late after it pops.

        • Retr0id 23 hours ago
          You don't prepare by making predictions about when it will pop, you prepare by hedging etc.

          Just as those who live in earthquake-prone areas build earthquake-resistant buildings.

          • rvz 20 hours ago
            > you prepare by hedging etc.

            Has to be done before the eventual collapse of the bubble and still proves my whole point:

            >> the whole point is you have to prepare before it eventually pops.

            • Retr0id 56 minutes ago
              Knowing whether it is or isn't a bubble isn't relevant to the decision to prepare. You prepare for both possibilities!
  • curtisblaine 10 hours ago
    The real question is: if this is a bubble and it explodes, will interest rates of common people with a mortgage shoot up? If yes, some heads better be rolling for real this time. All other considerations are secondary.
  • d0liver 4 hours ago
    TL;DR is "I don't know" with a dash of "Innovative tech usually creates bubbles and I think we can all agree that AI is a fucking revelation"
  • Reason077 14 hours ago
    TLDR: Yes.
  • rvz 1 day ago
    TLDR:

    Yes.

    • rvz 23 hours ago
      Sorry to pop your bubble...

      ...it is a bubble and we all know it.

      (I know you have RSUs / shares / golden handcuffs waiting to be vested in the next 1 - 4 years which is why you want the bubble to continue to get bigger.)

      But one certainty is the crash will be spectacular.

      • Esophagus4 19 hours ago
        I presume, given your confidence in a crash, you have a massive short? Hedged? Something clever to capture the downside you're so confident will come?

        I would love to see your portfolio, if you wouldn't mind showing the class. Let us see what your allocation reveals about what you really think...

  • reeeli 23 hours ago
    Is it "work"?

    Off-topic: how many get overpaid for absolute bullshit?

  • moomoo11 15 hours ago
    Man some you guys are lame. Seriously.

    Remember 2019-2021 when y’all were sure the fed would be dissolved and the dollar would crash and everyone would be poor if they didn’t have a bored ape and 80% bitcoin portfolio?

    Relax.

    AI is a tool. Just ride the wave. It’s gonna crash some people out. It’s entertaining watching them. You’re not being crashed out, right? Ride the wave dawg.

    • satvikpendem 14 hours ago
      > Remember 2019-2021 when y’all were sure the fed would be dissolved and the dollar would crash and everyone would be poor if they didn’t have a bored ape and 80% bitcoin portfolio?

      I don't know which "y'all" you're talking about because it's certainly not the HN crowd, who is famously anti-cryptocurrency in general. Perhaps you're thinking of the bros on Twitter back then.

      • moomoo11 13 hours ago
        Nope. Here. About half the posts would be about crypto, and the comments were the same split arguing about it.

        It was doomer AF on one end, and optimistic AF on the other end.

        So, it reminds me of the same FOMO many spread here.

      • emptyfile 9 hours ago
        [dead]
  • languagehacker 1 day ago
    Impressive that you can have that many assets under management and still not show a clear understanding of an industry you’re prognosticating on. The author doesn’t talk at all about the hardware aspect of this stuff, such as the surprisingly short lifetime of the GPUs that are being rolled out at a break-neck pace. The recommendation that you take a moderate investment position and not overdo it could be shared without as much needless thinking out loud, and doesn’t bring anything new to the conversation. Kind of like every other AI offering out there, if you think about it: participating in something you don’t understand because of FOMO.
    • derf_ 23 hours ago
      > The author doesn't talk at all about the hardware aspect of this stuff such as the surprisingly short lifetime of the GPUs that are being rolled out at a break-neck pace.

      There is literally a section that begins, "What will be the useful life of AI assets?" In bold.

      • languagehacker 22 hours ago
        Sure, but an informed opinion would mention that not only do these chips become obsolete quickly, but the ones that will survive long enough to become obsolete are likely to have a very fast and high failure rate.
    • empath75 22 hours ago
      > The author doesn't talk at all about the hardware aspect of this stuff such as the surprisingly short lifetime of the GPUs that are being rolled out at a break-neck pace.

      I am not sure why that is interesting. Nobody thinks of these chips as long term assets that they are investing in. Cloud providers have always amortized their computers over ~5 years. It would be very surprising if AI companies were doing much different -- maybe even a shorter time line.