> The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t.
For me, however, there is one issue: how can I utilize AI without degenerating my own abilities? I use AI sparingly because, to be honest, every time I use AI, I feel like I'm getting a little dumber.
I fear that excessive use of AI will lead to the loss of important skills on the one hand and create dependencies on the other.
Who benefits if we end up with a generation of software developers who can no longer program without AI? Programming is not just writing code, but a process of organizing, understanding, and analyzing.
What I want above all is AI that helps me become better at my job and continue to build skills and knowledge, rather than making me dependent on it.
> I use AI sparingly because, to be honest, every time I use AI, I feel like I'm getting a little dumber. I fear that excessive use of AI will lead to the loss of important skills on the one hand and create dependencies on the other. Who benefits if we end up with a generation of software developers who can no longer program without AI?
The shareholders benefit this quarter. Look man, I know you probably have a high opinion of yourself and all, but your job now is to degrade your abilities in order to deliver faster results. The investors kindly demand that you get with the program, enthusiastically accept your new role as a depreciating asset (not human capital to be invested in), and stop thinking so much.
Do we think less because we use C++ vs assembly? Less because we use assembly over punching cards? Less because we use computers over pen and paper? And so on. You can put a strong local coding model on your local hardware today and no investor will be involved (unless you mean investors in the company you work for, but the truth is, those were never in any way interested in how you build things, only that you do).
> Do we think less because we use C++ vs assembly? Less because we use assembly over punching cards? ...
Apologists love to make such analogies. "From 30,000 feet, doesn't the new thing kinda look like some old thing? Then they're the same and you should accept the new thing!" But the analogies are never apt, and the "argument" is really only one of glossing over the differences.
The whole point of AI is for people to think less. It's basically the goddamned name. If people aren't thinking less, AI isn't doing its job. All of those things you listed are instances of mechanical translation, and aren't thinking.
> You can put a strong local coding model on your local hardware today and no investor will be involved (unless you mean investors in the company you work for, but the truth is, those were never in any way interested in how you build things, only that you do).
Don't pretend you can cosplay a capitalist with AI. You need money, and if you can build something with a local model, the people with money can do it too, so they don't have to pay you. We work for a living.
Also it's a fantasy that your local model will be anything but a dim candle to the ones the rich have. Real life is not a sci-fi novel.
Your employers are hoping to use you up making this current imperfect iteration of the technology work, because the hope is the next version won't need you. Don't be cheerful about it. It's a bad end for you.
You say that with such conviction "the whole point is to think less". Why do you think that? I think no less now that I use AI agents all day long, I just think about different things. I don't think about where I place certain bits of code, or what certain structures look like. Instead I think about data models, systems, what the ideal deliverable looks like, how we can plan its implementation, and how to let the willing agent execute it. I think about how I best automate flows so that I can parallelize work, within a harness that reduces the possibilities for mistakes. I think a whole lot more about different technologies and frameworks, as the cost of exploring and experimenting with them has come down tremendously.
Will what I do now be automated eventually or before long? Probably, we keep automating things, so one has to swim up the abstraction layers. Doesn't mean one has to think less.
>For me, however, there is one issue: how can I utilize AI without degenerating my own abilities?
My cynical view is you can't, and that's the point. How many times before have we seen the pattern of "company operates at staggering losses while eliminating competition or becoming entrenched in enough people's lives, and then clamps down to make massive profits"?
You can’t and that’s the new normal. We’re probably the only generation which was given an opportunity to get properly good at coding. No such luxury will be available in a few years optimistically; pessimistically it’s been taken away with GPT 5.2 and Opus 4.5.
If that's the case (and I'm not convinced it is), shouldn't retaining that skill be the priority for anyone who has already acquired it? I've yet to see any evidence AI can turn someone who can't code into a substitute for someone who can. If the supply of that skill is going to dry up, surely it will only become more valuable. If using AI erodes it, the logical thing would be not to use AI.
> If that's the case [...], shouldn't retaining that skill be the priority for anyone who has already acquired it?
Indeed I believe that, but in my experience these skills get more and more useless in the job market. In other words: retaining such skills (e.g. low-level coding) is an intensively practised hobby that is (currently) of "no use" in the job market.
That's the correct diagnosis IMHO, but getting good at software engineering is ~3 years of serious studying and ~5-10 years of serious work, and that's after you've learned to code, which comes easier to some and harder to others.
Compare ROI of that to being able to get kinda the software you need in a few hours of prompting; it's a new paradigm, progress is (still) exponential and we don't know where exactly things will settle.
Experts will get scarce and very sought after, but once they start to retire in 10-20-30 years... either dark ages or AI overlords await us.
i think cs students should force themselves to learn the real thing and write the code themselves, at least for their assignments. i have seen that a lot of recent cs grads who had gpt for most of their cs life basically cannot write proper code, with or without ai.
They can't. Universities will eventually catch up to the demand of companies, just like how the one I attended switched from C/C++ to only managed languages.
With that, the students were more directly a match for the in-demand roles, but the reality is that other roles will see a reduction of supply.
The question here is: Will there be a need in the future for people who can actually code?
I think so. I also believe the field is evolving and that the pendulum always swings to extremes. Right now we are just beginning to see the consequences of the impact of AI on stability & maintainability of software. And we have not seen the impact of when it catastrophically goes wrong.
If you, together with your AI buddy, cannot solve the problem on this giant AI codebase, pulling in a colleague probably isn't going to help anymore.
The amount of code that is now being generated with AI (and accepted because it looks good enough) is causing long-term stability to suffer. What we are seeing is that AI is very eager to make the fixes without any regard towards past behavior or future behavior.
Of course, this is partially prevented by better prompts and human reviews. But this is not where companies want us to go. They want us to prompt and move on.
AI will very eagerly create 10,000 pipes from a lake to 10,000 houses in need of water. And branch off of them. And again.
Until one day you realize the pipes have lead in them and you need to replace them.
Today this is already hard. With AI it's even harder because there is no unified implementation somewhere. It's all copy pasted for the sake of speed and shipping.
I have yet to see a Software Engineer who stands behind every line of code produced to be faster on net-new development using AI. In fact, most of the time they're slower because the AI doesn't know. And even when they use AI the outcome is worse because there is less learning. The kind of learning that eventually pushes the boundaries in 'how can we make things better'.
> how can I utilize AI without degenerating my own abilities?
Couldn't the same statement, to some extent, be applied to using a sorting lib instead of writing your own sorting algorithm? Or how about using a language like python instead of manually handling memory allocation and garbage collection in C?
> What I want above all is AI that helps me become better at my job and continue to build skills and knowledge
So far, in my experience, the quality of what AI outputs is directly related to the quality of the input. I've seen some AI projects made by junior devs with incredibly messy and confusing architecture, despite them using the same language and LLM model that I use. The main difference? My AI work was based on the patterns and architecture that I designed thanks to my knowledge, which also happens to ensure that the AI will produce less buggy software.
I think there is a huge difference between using a library and using Python instead of C/Rust etc. You use those because development is fundamentally more efficient, at the expense of efficient memory use. Robust programming is a trade-off, and the speed of development might be worth it, but it also could be so problematic that the project just never works. A sort library is an abstraction over sorting; it's an extension to your language pool: you now have the fundamental operator sort(A). Languages kind of transcend the operator difference.
I think the problem the OP is trying to get at is that if we only program at the level of libs, we lose the ability to build fundamentally cooler/better things. Not everyone does that of course, but AI is not generating fundamentally new code, it's copy-pasting. Copy-pasting has its limits, especially for people in the long term. Copy-paste coders don't build game engines. They don't write operating systems. These are esoteric to some people, given how few people actually write those things! But there is a craftsmanship lost in converting more people to copy-paste, albeit copy-paste with intelligence.
I personally lean on the side that this type of abstraction over thinking is problematic long term. There is a lot of damage being done to people, not necessarily in coding but in reading/writing, especially in grades 9-12 and college. When we ask people to write essays and read things, AI totally short-circuits the process, but the truth is no one gets any value from the finished product of an essay about "Why Columbus coming to the New World caused X, Y or Z". The value is in the process of thinking that used to be required to generate that essay.

This is similar to the OP's worry. You can say, well, we can do both and think about it as we review AI outputs. But humans are lazy. We don't mull over the calculator wondering how some value is computed; we take it and run. There is a lot more value/thinking in the application of the calculated results, so the calculator didn't destroy mathematical thinking, but the same is not necessarily true of how AI is being applied. Your observation of the junior devs' output lends support to my view. We are short-circuiting the thinking. If those juniors can learn the patterns then there is no issue, but it's not guaranteed. I think that uncertainty is the OP's worry, maybe just restated in a better way.
no, it's more like asking a junior dev to write the sorting algorithm instead of writing it yourself. using a library would be like using an already verified and proven algorithm. that's not what AI code provides.
This is part of the learning curve. When you vibe code you produce something that is as if someone else wrote it. It’s important to learn when that’s appropriate versus using it in a more limited way or not at all.
I know this is reductionist, and I believe that you are likely correct in your concerns, but this type of thing has been happening for thousands of years. Writing itself was controversial!
> They go on to discuss what is good or bad in writing. Socrates tells a brief legend, critically commenting on the gift of writing from the Egyptian god Theuth to King Thamus, who was to disperse Theuth's gifts to the people of Egypt. After Theuth remarks on his discovery of writing as a remedy for the memory, Thamus responds that its true effects are likely to be the opposite; it is a remedy for reminding, not remembering, he says, with the appearance but not the reality of wisdom. Future generations will hear much without being properly taught, and will appear wise but not be so, making them difficult to get along with.
My answer is: use AI exactly for the tasks that you as a tech lead on a project would be ok delegating to someone else. I.e. you still own the project and probably want to devote your attention to all of the aspects that you HAVE to be on top of, but there are probably a lot of tasks where you have a clear definition of the task and its boundaries, and you should be ok to delegate and then review.
This gets particularly tricky when the task requires a competency that you yourself lack. But here too the question is - would you be willing to delegate it to another human whom you don't fully trust (e.g. a contractor)? The answer for me is in many cases "yes, but I need to learn about this enough so that I can evaluate their work" - so that's what I do, I learn what I need to know at the level of the tech lead managing them, but not at the level of the expert implementing it.
If we see ourselves less as programmers and more as software builders, then it doesn't really matter if our programming skills atrophy in the process of adopting this tool, because it allows us to build at a higher abstraction level, kind of like how a PM does it. This up-leveling in abstraction has happened over and over in software engineering as our tooling improves over time. I'm sure some excellent software engineers here couldn't write assembly code to save their lives, but are wildly productive and respected for what they do - building excellent software.
That said, as long as there’s the potential for AI to hallucinate, we’ll always need to be vigilant - for that reason I would want to keep my programming skills sharp.
AI assisted software building by day, artisanal coder by night perhaps.
I think this question can be answered in so many ways - first of all, piling on abstraction doesn't automatically imply bloat - with proper compile-time optimizations you can achieve zero-cost abstractions, e.g. in C++ compilers.
Secondly, bloat comes in so many forms and they all have different reasons. Did you mean bloated as in huge dependency installs like those node modules? Or did you mean an electron app where a browser is bundled? Or perhaps you mean the insane number of FactoryFactoryFactoryBuilder classes that Java programmers have to bear with because of misguided overarchitecting? The 7 layers of network protocols - is that bloat?
These are human decisions - trade-offs between delivering value fast and performance. Foundational layers are usually built with care, and the right abstractions help with correctness and performance. At the app layers, requirements change more quickly and people are more accepting of performance hits, so they pick tech stacks that you would describe as bloated for faster iteration and delivery of value.
So even if I used abstraction as an analogy, I don’t think that automatically implies AI assisted coding will lead to more bloat. If anything it can help guide people to proper engineering principles and fit the code to the task at hand instead of overarchitecting. It’s still early days and we need to learn to work well with it so it can give us what we want.
You'd have to define bloat first. Is internationalization bloat? How about screen reader support for the blind? I mean, okay, Excel didn't need a whole flight simulator in it, but just because you don't use a particular feature doesn't mean it's necessarily bloat. So first: define bloat.
Some termite mounds in Botswana already reach over two meters high, but these traditional engineering termites will be left behind in their careers if they don't start using AI and redefine themselves as mound builders.
Do you save time by using a calculator / spreadsheet, or try to do all calculations in your head because your ability to do quick calculations degrades the more you rely on tools to do it?
I'm not too worried about degrading abilities since my fundamentals are sound and if I get rusty due to lack of practice, I'm only a prompt away from asking my expert assistant to throw down some knowledge to bring me back up to speed.
Whilst my hands-on programming has reduced, the variety of software I create has increased. I used to avoid writing complex automation scripts in bash because I kept getting blocked trying to remember its archaic syntax, so I'd typically use bun/node for complex scripts, but with AI I've switched back to writing most of my scripts in bash (it's surprising what's possible in bash), and have automated a lot more of my manual workflows since it's so easy to do.
I also avoided Python because the lack of typing and API discovery slowed me down a lot, but with AI autocomplete, whenever I need to know how to do something I'll just write a method stub with comments and AI will complete it for me. I'm now spending lots of time writing Python, to create AI Tools and Agents, ComfyUI Custom Nodes, Image and Audio Classifiers, PIL/ffmpeg transformations, etc. Things I'd never consider before AI.
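To make the stub-with-comments workflow concrete, here's a rough sketch of the kind of thing I mean (the function name and behaviour are made up for illustration; the point is that the signature plus docstring is usually enough for the completion to fill in a working body):

    from PIL import Image

    def make_thumbnail(src_path: str, dst_path: str, max_size: int = 256) -> None:
        """Resize the image at src_path so its longest edge is at most
        max_size pixels, keep the aspect ratio, save as JPEG to dst_path."""
        # The stub above is what I type; a completion along these lines
        # is what typically comes back:
        img = Image.open(src_path)
        img.thumbnail((max_size, max_size))
        img.convert("RGB").save(dst_path, "JPEG")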
I also don't worry about its effects as I view it as inevitable, with the pendulum having swung towards code now being dispensable/cheap to create, what's more important is velocity and being able to execute your ideas quickly, for me that's using AI where I can.
I haven't driven a car regularly for ages, but a few months ago when I got behind the wheel for a bit, pretty much everything came rushing back, and my primary issue was adjusting to driving on the left side of the road in a right-hand-drive car. This is how it is with skills; they never really go away, though it may feel like it. And so it is with programming skills (knowledge is a different thing, since the field is constantly changing).
You can always ask it to nudge you in the right direction instead of giving the solution right away. I suspect this way of using it is not very popular though.
This is not a new problem I think. How do you use Google, translator, (even dictionaries!), etc without "degenerating" your own abilities?
If you're not careful and always rely on them as a crutch, they'll remain just that; without actually "incrementing" you.
I think this is a very good question. How should we actually be using our tools such that we're not degenerating, but growing instead?
As humans we have developed tools to ease our physical needs (we don't need to run, walk or lift things), and now we have a tool that thinks and solves problems for us.
> how can I utilize AI without degenerating my own abilities?
Personally I think my skill lies in solving the problem by designing and implementing the solution, but not how I code day-to-day. After you write the 100th getter/setter you're not really adding value, you're just performing a chore because of language/programming patterns.
Using AI and being productive with it is an ability and I can use my time more efficiently than if I were not to use it. I'm a systems engineer and have done some coding in various languages, can read pretty much anything, but am nowhere near mastery in any of the languages I like.
Setting up a project, setting up all the tools and boilerplate, writing the main() function, etc are all tasks that if you're not 100% into the language take some searching and time to fiddle. With AI it's a 2-line prompt.
Introducing plumbing for yet another feature is another chore: search for the right libraries/packages, add dependencies, learn to use the deps, create a bunch of files, sketch the structs/classes, sketch the methods, but not everything is perfectly clear yet, so the first iteration is "add a bunch of stuff, get a ton of compiler warnings, and then refine the resulting mess". With AI it's a small paragraph of text describing what I want and how I'd like it done, asking for a plan, and then simply saying "yes" if it makes sense. Then wait 5-15m. Meanwhile I'm free to look at what it's doing and catch it doing something stupid or wrong, or think about the next logical step.
Normally the result for me has been 90% good, I may need to fix a couple things I don't like, but then syntax and warnings have already been worked out, so I can focus on actually reading, understanding and modifying the logic and catching actual logic issues. I don't need to spend 5+ days learning how to use an entire library, only to find out that the specific one I selected is missing feature X that I couldn't foresee using last week. That part takes now 10m and I don't have to do it myself, I just bring the finishing touches where AI cannot get to (yet?).
I've found that giving the tool (I personally love Copilot/Claude) all the context you have (e.g. .github/copilot-instructions.md) makes a ton of difference with the quality of the results.
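For what it's worth, the file doesn't need to be elaborate. A short, hypothetical example of the kind of thing I put in .github/copilot-instructions.md (the paths, tools and rules here are placeholders; adapt them to your project):

    # Project conventions
    - Python 3.11, fully typed; run ruff and mypy before proposing changes.
    - Business logic lives in app/services/, HTTP handlers in app/api/; don't mix the two.
    - Tests use pytest, one test module per service module; no network calls in unit tests.
    - Prefer existing helpers in app/lib/ over adding new dependencies.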
Yeah, what about the degeneration of the skill of writing assembly? Or of punching cards, even? When compilers came onto the scene, there were similar objections. Instead of focusing on one skill atrophying, look at the new skills being developed. They may not be the ones you necessarily want to be good at, but it turns out developing social engineering skills to get an LLM to do something it's been prompted not to might actually be a transferable skill to real life.
What is value, even? A dollar bill is worth a dollar, but even that’s made up too. A crappy crayon drawing of stick people and a house is utterly priceless if your kid made it, worthless if it's some other kid. AI is forcing us to confront how squishy valuation is in the first place.
Prices are not fundamental truths. They’re numbers that happen to work. Ideally price > cost, but that’s not even reliably true once you factor in fixed costs, subsidies, taxes, rebates, etc. Boeing famously came out and said they couldn't figure out how much it actually cost to make a 747, back when they were still flying.
Here's a concrete example:
You have a factory with $50k/month in fixed costs. Running it costs $5 per widget in materials and labor. You make 5,000 widgets.
Originally you sell them for $20. Revenue $100k, costs $75k, pocket a cool $25k every month. Awesome.
Then, a competitor shows up and drives the price down to $10. Now revenue is $50k. On paper you “lose money” vs your original model.
But if you shut the factory down, you still eat the full $50k fixed cost and make $0. If you keep running, each widget covers its $5 marginal cost and contributes $5 toward fixed costs. You lose $25k instead of the full $50k.
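A quick back-of-the-envelope version of the same numbers, just to make the contribution-margin point concrete:

    fixed_costs = 50_000   # $ per month
    unit_cost = 5          # marginal cost per widget
    units = 5_000

    def monthly_profit(price):
        revenue = price * units
        total_costs = fixed_costs + unit_cost * units
        return revenue - total_costs

    print(monthly_profit(20))  #  25000: the original business
    print(monthly_profit(10))  # -25000: after the price war
    print(-fixed_costs)        # -50000: what you eat if you shut down anyway

Selling at $10 is "unprofitable" on paper, yet it's still $25k/month better than not selling at all.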
That’s the key mistake in "AI output is worth zero."
Zero marginal value does not imply zero economic value. The question is whether it covers marginal cost and contributes to something else you care about: fixed costs, distribution, lock-in, differentiation, complements, optionality.
We've faced this many times before so AI isn't special in this regard. It just makes the gap between marginal cost and perceived value impossible to ignore.
you on another comment in here lol you’re blinded by the hate! let it flow through you! insult the users of the bad neural nets until the world heals! don’t back down!
In my experience, people who bombard threads with insults based on the technology people use (a specific set of neural networks in this case) are… well, people who don't have much better to do in life. you openly advocated for insulting and bullying people in your other comments. don't back down with this "it's just an observation" bs, own it! be you!
…or change your behavior and be a better person, whatever works
btw I’ve been doing “the AI” stuff since 2018 in industry and before in academia. I find your worldview displayed in your comments to be incredibly small and easily dismissible
You want to market to engineers, stick to provable statements. And address some of their concerns. With something other than "AI is evolving constantly, all your problems will be solved in 6 months, just keep paying us."
Oh by the way, what is the OP trying to sell with these FOMO tactics? Yet another ChatGPT frontend?
Perhaps in that case the critics should direct their ire at the marketing departments, rather than trashing the tech?
Really though, the potential in this tech is unknown at this point. The measures we have suggest there's no slowdown in progress, and it isn't unreasonable for any enthusiast or policy maker to speculate about where it could go, or how we might need to adjust our societies around it.
At least AI can write grammatically correct sentences and use punctuation and proper capitalization.
As a 0.1x low effort Hacker News user who can't lift a pinky to press a shift or punctuation key, you should consider using AI to improve the quality of your repetitive off-topic hostile vibe postings and performative opinions.
Or put down the phone and step away from the toilet.
And you just unwittingly proved my point, so I'm downgrading you to an 0.01x low effort Hacker News user.
If there are no other effects of AI than driving people like you out of the industry, then it's proven itself quite useful.
Edit: ok I will concede that point to you that I was mistaken about 0.01x, for candidly admitting (and continuously providing incontrovertible proof) that you're only a 0.001x low effort Hacker News user. I shouldn't have overestimated you, given all the evidence.
I'll take that, but don't see how it's so different from the intent I've always had of "automating myself out of the job". When I want to do "engineering", I can always spin up Factorio or Turing Complete. But for the rest of the time, I care about the result rather than the process. For example, before starting to implement a tool, I'll always first search online for whether there is already a good tool that would address my need, and if so, I'll generally utilize that.
You download a tool written by a human, you can reasonably expect that it does what the author claims it does. And more, you can reasonably expect that if it fails it will fail in the same way in the same conditions.
I wrote some Turing Machine programs back in my Philosophy of Computer Science class during the 80's, but since then my Turing Machine programming skills have atrophied, and I let LLMs write them for me now.
Is it possible you're not the target audience if you are aware that LLMs are impressive and useful? Regardless of the inane hype and bubble around them.
It does require excluding things yourself, but I've had a lot of success with uBlacklist. Some of the spam sites share common characteristics that you can specifically search for, turning up a whole page of results that are all spam domains to block.
That opening caricature was so off-putting and dismissive (and extremely wrong, that's not at all what the discourse is about generally speaking), I failed to conclude the reading session.
One does not need to embrace a tool to recognize its horrendous effects and side-effects. I can critique assault rifles without ever having handled one. I can critique street narcotics without taking drugs, and I can critique nuclear weapons without suffering from a blast personally. The idea that if you don't use a tool you can't form conclusions about why it's bad is devoid of any factual or historical grounding. It's an empty rhetorical device which can be used endlessly.
Literally 100% of the inventions which come out now or in the future can be handled in the exact same way. Somebody invents a robot arm that spoon feeds you so you never need to feed yourself with your own hand ever again? Oh this is revolutionary, everybody's going to install this in their home! What, you haven't? And you think it's a bad idea? Gosh, you're so backwards and foolish. The world is moving on, don't be left behind with your manual hand feeding techniques.
This article is like 1.5 years out of date. The discourse around genAI as a tech movement and its nearly uniformly terrible outcomes has moved on. OP hasn't. Seems the gap is widening between writers who are talking about these tools soberly and seriously, and writers who aren't.
But, to be fair, that wasn't the kind of critique it was talking about. If your critique of guns is moral, strategic, etc., then yes, you can make it without actually trying out guns. If your critique is that guns physically don't work, don't actually do the thing they are claimed to do, then some hands-on testing would quickly dispel that notion.
The article is talking about those kinds of critiques, ones of the "AI doesn't work" variety, not "AI is harmful".
I don't know any engineers, any reporters, or any public community voices who claim GenAI is bad because "AI doesn't work because I tried ChatGPT in 2022 and it was dumb." So it's a critique of a fictional movement which doesn't exist vs. an attempt at critiquing an actual movement.
"AI coding is so much better now that any skepticism from 6 months ago is invalid" has been the refrain for the last 3 years. After the first few cycles of checking it out and realizing that it's still not meeting your quality bar, it's pretty reasonable to dismiss the AI hype crowd.
It's gotten ok now. Just spent a day with Claude for the first time in a while. Demanded strict TDD and implemented one test at a time. Might have been faster, hard to say for sure. Result was good.
I think we have a real inflection point now. I try it a bit every year and was always underwhelmed. Halfway through this year was the first time it really impressed me. I now use Claude Code.
But Claude Code costs money. You really want to introduce a critical dependency into your workflow that will simultaneously atrophy your skills and charge you subscription fees?
It's also proprietary software running on someone else's machine. All other arguments for or against aside, I am surprised that so many people are okay with this. Not in a one-time use sense, necessarily, but to have long-term plans that this is what programming will be from here on out.
Another issue with it is IP protection. It reminds me of stories where, the moment physical manufacturing was outsourced to China, exact clones appeared shortly after.
Imagine investing tons of efforts and money into a startup, just to get a clone a week after launch, or worse - before your launch.
Right, we the workers are giving away control over the future of general-purpose computation to the power elite, unless we reject the institutionalization of remote-access proprietary tooling like this.
A year ago I could get o1-mini to write tests some of the time that I would then need to fix. Now I can get Opus 4.5 to do fairly complicated refactors with no mistakes.
These tools are seriously starting to become actually useful, and I’m sorry but people aren’t lying when they say things have changed a lot over the last year.
It might even be true this time, but there is no real mystery why many aren't inclined to invest more time figuring it out for themselves every few months. No need for the author of the original article to reach for "they are protecting their fragile egos" style of explanation.
The productivity improvements speak for themselves. Over time, those who can use ai well and those who cannot will be rewarded or penalized by the free market accordingly.
If there’s evidence of productivity improvements through AI use, please provide more information. From what I’ve seen, the actual data shows that AI use slows developers down.
The sheer number of projects I've completed that I truly would never have been able to even make a dent in is evidence enough for me. I don't think research will convince you. You need to either watch someone do it, or experiment with it yourself. Get your hands dirty on an audacious project with Claude code.
It sounds like you're building a lot of prototypes or small projects, which yes LLMs can be amazingly helpful at. But that is very much not what many/most professional engineers spend their time on, and generalizing from that former case often doesn't hold up in my experience.
We use both Claude and Codex on a fairly large ~10-years old Java project (~1900 Java files, 180K lines of code). Both tools are able to implement changes across several files, refactor the code, add unit tests for the modified areas.
Sometimes the result is not great, sometimes it requires manual updates, sometimes it just goes in a wrong direction and we discard the proposal. The good thing is you can initiate such a large change, go get a coffee, and when you're back you can take a look at the changes.
Anyway, overall those tools are pretty useful already.
That's what it really all comes down to, isn't it?
It doesn't matter if you're using AI or not, just like it never mattered if you were using C or Java or Lisp, or using Emacs or Visual Studio, or using a debugger or printf's, or using Git or SVN or Rational ClearCase.
What really matters in the end is what you bring to market, and what your audience thinks of your product.
So use all the AI you want. Or don't use it. Or use it half the time. Or use it for the hard stuff, but not the easy stuff. Or use it for the easy stuff, but not the hard stuff. Whatever! You can succeed in the market with AI-generated product; you can fail in the market with AI-generated product. You can succeed in the market with human-generated product; you can fail in the market with human-generated product.
Meanwhile I'm getting a 5000 lines PR with code that's all clearly AI generated.
It's full of bloat: unused HTTP endpoints, lots of small utility functions that could have been inlined (but now come with unit tests!), missing translations, only somewhat correct design...
The quality wasn't perfect before, now it has taken a noticeable dip. And new code is being added faster than ever. There is no way to keep up.
I feel that I can either just give in and stop caring about quality, or I'll be fixing everyone else's AI code all of my time.
I'm sure that all my particular colleagues are just "holding it wrong", but this IS a real experience that I'm having, and it's been getting worse for a couple of years now.
I am also using AI myself, just in a much more controlled way, and I'm sure there's a sweet spot somewhere between "hand-coding" and vibing.
I just feel that as you inch in on that sweet spot, the advertised gains slowly wash away, and you are left with a tangible, but not as mindblowing improvement.
Agreed, I’m so exhausted while reviewing AI generated MRs.
In my line of work, I keep seeing it generate sloppy state machines with unreachable or superfluous states, bad floating-point arithmetic, and especially trying to do everything in the innermost loop of a nested iteration.
It also seems to love hallucinating Qt features that should exist but don’t, which I find mildly amusing.
Some of us actually enjoy writing code, and wish to preserve the skill, so we have no motivation to offload the task to an LLM. Do LLM coding evangelists also badger illustrators, asking why they don't just embrace machine learning image generation? Do they tell people they shouldn't have human friends because chatbots exist? Do they insist that musicians slough off their creativity like a molting snake, and just let a computer generate their songs?
There is also a big, uncomfortable truth regarding "AI" coding tools: They are trained on open-source code, yet they ignore the licenses attached to that code. If it's unethical for me to copy-and-paste MIT licensed code without including the license text, then it's unethical to let an LLM do it on my behalf.
LLMs are paving the way to a dystopia where there's no motivation for humans to create, and that world sounds miserable.
This story ends up being relevant in a metaphorical way.
My aunt was born in the 1940s, and was something of an old-fashioned feminist. She didn't know why she wasn't allowed to wear pants, or why she had to wait for the man to make the first move, etc. She tells a story about a man who ditched her at a dance once because she didn't know the "latest dance." Apparently in the 1950s, some idiot was always inventing a new dance that everyone _just had to follow_. The young man was so embarrassed that he left her at the dance.
I still think about this story, and think about how awful it would have been to live in the 40s. There always has been social pressure and change, but the "everyone's got to learn new stupid dances all the time" sort of pressure feels especially awful.
This really reminds me of the last 10-20 years in technology. "Hey, some dumb assholes have built some new technology, and you don't really have the choice to ignore it. You either adopt it too, or are left behind."
The more things change, the more they stay the same.
Predicting the future, I can tell you with certainty that This Too Shall Pass.
Just don't ask me what 'This' is: A fad, or a sea-change?
My main Luddite objection to the current passion for LLM coding assistance is the new dependencies created/fostered within the software engineering diaspora. Not only are we now dependent on cloud access for so much of our SE, but we'll also depend on the ongoing build-out, as LLM infrastructure bulldozes our real-world landscape and we experience everything from DRAM shortages to unwanted (NIMBY) construction of data centers. All based on the idea (yet again) that this is it.
The question becomes, is it (or will it be) worth it?
My personal prism of past experience does not lend itself to an easy answer. The move from assembly language to C was a no-brainer, but I remember resisting a transition from C to C++ ("I can do that in C with structs and function pointers") and the surge in OOP and COM and... the list goes on.
I remember objecting to Rational Rose and UML because I just didn't trust code generated algorithmically. Boilerplate with artifacts. I don't think I was wrong to hesitate there.
But I might be wrong now, to let others push the leading bleeding edge. Maybe it's time to get into it.
Can I just download a trained LLM and host it myself, without a dependency on internet/cloud corporate/overlord/rented-infrastructure?
I am willing to try, but I must declare that-- where I am, the Personal Computing revolution is not over. We still haven't won. And the rebellion: against auto-updates, telemetry, subscription models, any usage or dependency of/on your internet against your will. The fight for freedom goes on.
Can I get a Claude Code to live in my home with me, air-gapped and all mine?
As I see it, this is an inherent part of the tech industry. Unless you expressly choose to focus your career on maintaining legacy code, your value as a dev depends on your ability and willingness to continuously learn new tech.
> "The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t. The first group is shipping faster, taking on bigger challenges. The second group is… not."
Honest question for the engineers here. Have you seen this happening at your company? Are strong engineers falling behind when refusing to integrate AI into their workflow?
In my experience it's weak and inexperienced developers who gravitate to AI tools. Unfortunately they lack the domain knowledge to correctly evaluate the outcome of the AI tools they use. AI weaponizes them against their colleagues by enabling them to open more PRs and generate more text which may look reasonable at first but falls apart under serious review. Any gains I may get from AI myself is eaten away in this way.
As before, the big gap I still see is between engineers who set something up the right way and engineers who push code up without considering the bigger picture.
One nice change however is that you can guide the latter towards a total refactor during code review and it takes them a ~day instead of a ~week.
No. The opposite. The people who “move faster” are literally just producing tech debt that they get a quick high five for, then months later we limp along still dealing with it.
A guy will proudly deploy something he vibe coded, or “write the documentation” for some app that a contractor wrote, and then we get someone in the business telling us there’s a bug because it doesn’t do what the documentation says, and now I’m spending half a day in meetings to explain and now we have a project to overhaul the documentation (meaning we aren’t working on other things), all because someone spent 90 seconds to have AI generate “documentation” and gave themselves a pat on the back.
I look at what was produced and just lay my head down on the desk. It’s all crap. I just see a stream of things to fix, convention not followed, 20 extra libraries included when 2 would have done. Code not organized, where this new function should have gone in a different module, because where it is now creates tight coupling between two modules that were intentionally built to not be coupled before.
It’s a meme at this point to say, ”all code is tech debt”, but that’s all I’ve seen it produce: crap that I have to clean up, and it can produce it way faster than I can clean it up, so we literally have more tech debt and more non-working crap than we would have had if we just wrote it by hand.
We have a ton of internal apps that were working, then someone took a shortcut and 6 months later we’re still paying for the shortcut.
It’s not about moving faster today. It’s about keeping the ship pointed in the right direction. AI is a guy on a jet ski doing backflips, telling us we’re falling behind because our cargo ship hasn’t adopted jet skis.
AI is a guy on his high horse, telling everyone how much faster they could go if they also had a horse. Except the horse takes a dump in the middle of the office and the whole office spends half their day shoveling crap because this one guy thinks he’s going faster.
This is exactly what I've seen. A perfect description. I am the tech lead for one part of a project. I review all PRs and don't let slop through, and there is a lot trying to get through. The other part of the project is getting worse by the day. Sometimes I peek into their PRs and feel a great sadness. There are daily issues and crashes. My repo has not had a bug in over a year.
> The gap is widening between engineers who’ve integrated these tools and engineers who haven’t.
This is kind of the fundamental disagreement in the whole discourse isn't it? If you could prove this is true, a lot of arguments stop making sense from the anti AI people, though not all of them. But nobody has proved this. And what is the gap? If the gap is in skill, the AI users are falling behind. If it's productivity, 1. Prove it, 2. is it more in my self interest to be highly productive or to be highly skilled?
Personally, I am already able to work 5 hours a week and convince my boss it's 40, with glowing performance reviews; I am just that productive. And I don't want to use AI. So if you lads can go ahead and gain 8x productivity and make me work a full job to compete, oh well, I should do that anyway.
I just don't find it interesting. The only thing less interesting is the constant evangelism about it.
I also find that the actual coding is important. The typing may not be the most interesting bit, but it's one of the steps that helps refine the architecture I had in my head.
100% agree. My only super power is weaponized “trying to understand”, spending a Saturday night in an obsessive fever dream of trying to wrap my head around some random idea.
That happens to produce good code as a side effect. And a chat bot is perfect for this.
But my obsession is not with output. Every time I use AI agents, even if it does exactly what I wanted, it’s unsatisfying. It’s not something I’m ever going to obsess over in my spare time.
What worries me is how AI impacts neurodivergent programmers. I have ADHD and it simply doesn't work for me to constantly be switching context between the code I'm writing and the AI chat. I am terrified that I will be forced out of the industry if I can't keep up with people who are able to use AI.
Fellow diagnosed ADHD here. And I know every ADHD is different and people are different.
What helps me is:
- Prefer faster models like VSCode's Copilot Raptor Mini which, despite the name, is maybe 80% as capable as Sonnet 4.5, and is much faster. It is a fine-tuned GPT-5 mini.
- Start writing the next prompt while the LLM works, or keep pondering the current problem at hand. This helps our chaotic brains stay focused.
I find that any additional overhead caused by the separate AI chat is saved 20x over by basically never having to use a browser to look at documentation and S/O while coding.
That makes sense. I do use AI for questions like "what's the best way to flatten a list of lists in Python" or "what is the interface for this library function". I just don't use it the way I see some people do where they have it write the rough draft of their code or identify where a bug is.
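(For that first kind of question, the answer I'm after is a one- or two-liner I can verify at a glance, something like:)

    nested = [[1, 2], [3], [4, 5, 6]]

    flat = [x for sub in nested for x in sub]            # [1, 2, 3, 4, 5, 6]

    # or, equivalently, when there are lots of sublists:
    import itertools
    flat = list(itertools.chain.from_iterable(nested))   # [1, 2, 3, 4, 5, 6]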
I'm not an AI fanatic, but I do use ChatGPT often. In my experience, ChatGPT now is only marginally better than it was in 2022. The only real improvement is due to "thinking" abilities, i.e. searching the web and spending more tokens (basically prompting itself). The underlying model still feels largely the same to me.
I feel like I'm living in a different world when every time a new model comes out, everyone is in awe, and it scores exceptionally well on some benchmark that no one had heard of before the model even launched. And then when I use it, it feels exactly the same as all the models before, and makes the same stupid mistakes as always.
Let me know when AI can create functions for the secp256k1 library that add a point in jacobian coordinates to another point in jacobian coordinates, both in variable time and in constant time, i.e. add functions.
Sure, I get that most humans aren't programmers, but the thrust of the article here is defending the position that "AI Can Write Your Code. [It Can’t Do Your Job.]" However, this task is literally the sort of code that I write. So if AI cannot do the above task, then AI cannot (yet) write my code.
I don't know what other programmers are doing, but a lot of my time is spent on tasks like this.
Here's another random task: write an analytic ray - cubic Bézier patch intersection routine based on the "Ray Tracing Parametric Patches" paper from SIGGRAPH '82. This is a task I did as part of my final project for my undergraduate graphics class.
These are both straightforward tasks to take well-described existing algorithms from literature and implement them concretely. Very few design choices to consider. In theory it ought to be right up the alley for what AI is supposedly good for.
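For a sense of what the first task involves (and why "constant time" is the hard part), here is a rough, variable-time sketch of Jacobian point addition over the secp256k1 field in Python. It's only the textbook formulas: it skips the doubling and point-at-infinity cases and branches on secret-dependent data, which is exactly what the real constant-time C version must not do.

    # secp256k1 prime field modulus
    P = 2**256 - 2**32 - 977

    def jacobian_add_var(p1, p2):
        """Variable-time addition of two distinct Jacobian points (X, Y, Z)
        on y^2 = x^3 + 7 over F_P. Illustrative only."""
        X1, Y1, Z1 = p1
        X2, Y2, Z2 = p2
        Z1Z1 = Z1 * Z1 % P
        Z2Z2 = Z2 * Z2 % P
        U1 = X1 * Z2Z2 % P
        U2 = X2 * Z1Z1 % P
        S1 = Y1 * Z2 * Z2Z2 % P
        S2 = Y2 * Z1 * Z1Z1 % P
        H = (U2 - U1) % P
        R = (S2 - S1) % P
        if H == 0:
            raise ValueError("same x-coordinate: needs the doubling or infinity path")
        HH = H * H % P
        HHH = H * HH % P
        V = U1 * HH % P
        X3 = (R * R - HHH - 2 * V) % P
        Y3 = (R * (V - X3) - S1 * HHH) % P
        Z3 = Z1 * Z2 * H % P
        return (X3, Y3, Z3)

The constant-time variant is where the real work is: fixed formulas covering every case, no secret-dependent branches, and field arithmetic that doesn't leak timing - none of which falls out of the happy-path version above.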
Write a blog post with the intent to shame software engineers who dismiss AI, but never do it openly, pretending instead to be a direction for engineers who are on a wrong path. Propose a series of titles that make it look even worse for such engineers, but not openly offensive. Start with a caricature of a bad engineer, from a first-person POV, who doesn't even want to try AI, with various reasons, but not disclosing real LLM problems. Then switch to a normal engineer, who admits the above was an act, but again shows his colleagues' statements about AI, like real ones, which would show how bad and lazy they are, how they don't adapt to the new era, real examples, but again not going into real problems like brain rot etc. Make an impression of a leading-edge engineer who tries to save his colleagues, kind of agrees with them about AI issues, but in the end shows it's their attitude and not AI that is the problem.
Here's the list of titles it proposed:
* The Engineer Who Refused to Look Up
* When Experience Becomes a Blindfold
* A Gentle Note to Engineers Who’ve Already Made Up Their Minds
* On Confident Opinions Formed a Little Too Early
* The Curious Case of Engineers Who Are Certain They’re Right
AI is a tool. Like every other tool under the sun, it has strengths and weaknesses; it's our job, as software engineers, to try it out and understand when/how to use it in our workflows, or whether it fits our use cases at all.
If you disagree with the above statement, try replacing "AI" with "Docker", "Kubernetes", "Microservices architecture", "NoSQL", or any other tool/language/paradigm that was widely adopted in the software development industry until people realized it's awesome for some scenarios but not a be-all and end-all solution.
> If you’ve actually tried modern tools and they didn’t work for you, that’s a conversation worth having. But “I tried ChatGPT in 2022” isn’t that conversation.
How many people are actually saying this? Also how does one use modern coding tools in heavily regulated contexts, especially in Europe?
I can't disagree with the article and say that AI has gotten worse because it truly hasn't, but it still requires a lot of hand holding. This is especially true when you're 'not allowed' to send the full context of a specific task (like in health care). For now at least.
If you feel the need to hype up AI to this degree, you should provide some data proving that AI use actually increases productivity. This type of fact-free polemic isn’t interesting or useful.
My reason for initially dismissing it is that, to me, it felt like it was taking away the fun part of the job. We have all these tasks, and writing the code is this creative act, designed to be read by other humans. Just like how I don’t want AI to write music for me.
But I see where things are going. I tried some of the newer tooling over the past few weeks. They’re too useful to ignore now. It feels like we’re entering into an industrial age for software.
I don't have a beard, but if I did I'm sure it would be white, beyond grey.
It's okay. It's okay to feel annoyed, you have a tough battle ahead of you, you poor things.
I may be labelled a grey beard but at least I get to program computers. By the time you have a grey beard maybe you are only allowed to talk to them. If you are lucky and the billionaires that own everything let you...
Sorry :) I couldn't resist. I think I'm the oldest person in the department and I think also that I am probably one of the ones that have been using AI in software development the most.
Don't be so quick to point at old people and make assumptions. Sometimes all those years actually translate into useful experience :)
Possibly. The focus of a lot of young people should be to try and effect political change that stops billionaires' wealth from growing unchecked. AI is going to accelerate all of this very rapidly now. Just look at what kind of world some of those with the most wealth are wanting to impose on the others now. It's frightening.
Just normal Luddite things, which attracts those most threatened in their personal identity by the new technology.
You see it obviously with the artists and image/video generators too.
We went through this with Dadaism and Impressionism and photography before, too, in art.
Ultimately, it's just more abstraction that we have to get used to -- art is stuff people create with their human expression.
It is funny to see everyone argue so vehemently without any interest in the same arguments that happened in the past.
Exit Through the Gift Shop is a good movie that explores that topic too, though with near-plagiarized mass production, not LLMs, but I guess that's pretty similar too!
Since you're a real established artist, I want to make my point more clear: I am not an artist and while AI image tools let me make fun pictures and not be reliant on artists for projects, it doesn't imbue me with the creativity to create artistic works that _move_ people or comment on our society. AI doesn't give or take that from you, and I argue that is what truly separates art and artists from doodles and doodlers.
I mean, luddites have consistently been correct. Technological advancements have consistently been used to benefit the rich at the expense of regular people.
The early Industrial Revolution that the original Luddites objected to resulted in horrible working conditions and a power shift from artisans to factory owners.
Dadaism was a reaction to WWI, where the aristocracy's greed and petty squabbling led to 17 million deaths.
I don't disagree with that, just that there's anything that can be done about it. Which technology did we successfully roll back? Nukes are the closest I think you can get and those are very hard to make and still exist in abundance, we just somewhat controlled who can have them
Quite a few come to mind: chemical and biological weapons, beanie babies, NFTs, garbage pail kids... Some take real effort to eradicate, some die out when people get bored and move on.
Today's version of "AI," i.e. large language models for emitting code, is on the level of fast fashion. It's novel and surprising that you can get a shirt for $5, then you realize that it's made in a sweatshop, and it falls apart after a few washings. There will always be a market for low-quality clothes, but they aren't "disrupting non-nudity."
So are beanie babies, NFTs and garbage pail kids -- falling out of fashion isn't the same thing as eradicating a technology. I think that's part of the difficulty: how could you roll back knowledge without some Khmer Rouge generational trauma?
I think about the original use of steam engines and the industrial revolution -- steam engines were so inefficient, their use didn't make sense outside of pulling their own fuel out of the ground -- many people said haha look how silly and inefficient this robot labor is. We can see how that all turned out.[2]
> Things that have fallen out of fashion isn't the same thing as eradicating a technology.
That's true. Ruby still exists, for example, though it's sitting down below COBOL on the Tiobe index. There's probably a community trading garbage pail kids on Facebook Marketplace as well. Ideas rarely die completely.
Burning fossil fuels to turn heat into kinetic energy is genuinely better than using draft animals or human slaves. Creating worse code (or worse clothing) for less money is a tradeoff that only works for some situations.
> Malcolm L. Thomas argued in his 1970 history The Luddites that machine-breaking was one of the very few tactics that workers could use to increase pressure on employers, undermine lower-paid competing workers, and create solidarity among workers. "These attacks on machines did not imply any necessary hostility to machinery as such; machinery was just a conveniently exposed target against which an attack could be made." [emph. added] Historian Eric Hobsbawm has called their machine wrecking "collective bargaining by riot", which had been a tactic used in Britain since the Restoration because manufactories were scattered throughout the country, and that made it impractical to hold large-scale strikes. An agricultural variant of Luddism occurred during the widespread Swing Riots of 1830 in southern and eastern England, centring on breaking threshing machines.
Luddites were closer to “class struggle by other means” than “identity politics.”
As one of those on the skeptical side, one train of thought I have not seen people even mention is, the way we’re using LLMs to code now is largely to use a less precise language (mostly English) to specify what’s often a very precise problem and solution. Why would we think that spoken language is the best interface for doing this?
I'm not so sure I buy the premise that engineers are really dismissing AI because it's still not good enough. At the very least, this framing does not get to the heart of why certain engineers dislike AI.
Many of the people I've encountered who are most staunchly anti-AI are hobbyists. They enjoy programming in their spare time and they got into software as a career because of that. If AI can now adequately perform the enjoyable part of the job in 90% of cases, then what's left for them?
It's good to be skeptical of new ideas as long as you don't box yourself in with dogmatism. If you're young you do this by looking at the world with fresh eyes. If you are experienced you do it by identifying assumptions and testing them.
I wonder how many of us are like me: just waiting for AI to get Good Enough (TM). The skill required to use AI is probably decreasing, and the AI is getting better, so why not just wait? Time will tell.
Exactly. If these tools are going to be so revolutionary and different within the next 6 months, and even more so beyond that, there's no advantage to being an early adopter since your progress becomes invalid; may as well wait until it's good enough.
Maybe I've always been a terrible engineer, but I'm humble enough to admit the way I code has always been exactly like the LLM. If it's something brand new, I'm googling it and pattern matching how to write it. If it's based on existing functionality, I'm doing ctrl + f and pattern matching on that to insert the minimal code changes that accomplish the task.
The catch-22 I run into with AI coding help is always: it helps the most with problems I know how to solve. I feel like most engineers run into problems where we can't fully articulate the problem we're having (otherwise we would be able to fix it). In which case AI can be helpful, but more in a google way.
I think that's true, but it massively underrates how much faster you can solve the problems you already know how to solve. It also helps me learn how to solve new problems more quickly, but I must still learn how to solve the new problems I have it solve, or things go off the rails.
I like learning, and I like programming, primarily because it lets me create whatever app I want. I'm continually choosing the languages, IDEs and tooling that let me be the most productive. I view AI in the same regard: it lets me achieve whatever I want to create, but much faster.
Sure, if you want to learn programming languages for programming's sake, then yeah, don't Vibe Code (i.e. text prompting AI to code); use AI as a knowledgeable companion that's readily on hand to help you whenever you get stuck. But if your goal is to create software that achieves your objectives, then you're doing yourself a disservice if you're not using AI to its maximum potential.
Given my time on this earth is finite, I'm in the camp of using AI to be as productive as possible. But that's still not everything yet: I'm not using it for backend code, as I need to verify every change. I'm more than happy to vibe code UIs, though (after I spend time laying down a foundation to make it intuitive where new components/pages go and how API integration works).
Other than that I'll use AI where I can (UIs, automation & deployment scripts, etc). I've even switched over to using React/Next.js for new apps because AI is more proficient with it. Even old apps that I wouldn't normally touch because they used legacy, deprecated tech, I'll just rewrite the entire UI in React/Next.js to get them to a place where I can use text prompts to add new features. It took about 20 mins for Claude Code to get the initial rewrite implemented (using the old code base as a guide), then a few hours on top of that to walk through every feature and prompt it to add features it missed or fix broken functionality [1]. I ended up spending more time migrating it from AWS/ECS/RDS to Hetzner w/ automated backups than on the actual rewrite.
Ha! I just saw one of these this morning on LinkedIn, an engineer complaining about AI / Vibecoding and thought exactly the same. I find these overreactions amusing.
I don't know why this is so controversial. It's just a tool; you should learn to use it, otherwise, as the author of this post said, you will get left behind. But don't cut yourself on the new tool (lots of people are doing this).
I personally love it because it allows me to create personal tools on the side that I just wouldn’t have had time for in the past. The quality doesn’t matter so much for my personal projects and I am so much more effective with the additional tools I’m able to create.
> I don't know why this is so controversial. It's just a tool
Do you really "don't know why"? Are you sure?
I believe that ignoring the consequences that commercial LLMs are having on the general public today is just as radical as being totally opposed to them. I can at least understand the ethical concerns, but being completely unaware of the debate on artificial intelligence at this stage is really something that leaves me speechless, let me tell you.
An army of straw men is worse than a single straw man sewn into the fabric of a good argument. It's just so obvious, which means there was never really a point being argued.
I'm not going to speculate about the intent and goals of this post and blog, just note that a small sample of posts I've read trigger disagreement for every statement at least for me.
Top comments here also express strong disagreement with multiple statements in the post.
This fits my experience: programmers who are very vocal in their hatred of using AI for programming work have, in my opinion, traits that make them great programmers (though I have to admit that such people often don't score very high on the Agreeableness personality trait :-) ).
it does seem like the skepticism is fading. I do think engineers that outright refuse to use AI (typically on some odd moral principle) are in for a bad time
Many have the attitude of finding one edge case where it doesn't work well and dismissing AI as a useful tool.
I’m an early adopter and nowadays all I do is to co-write context documents so that my assistant can generate the code I need
AI gives you an approximate answer; it's up to you to steer it to a good enough answer, and that takes time and a learning curve … and it evolves really fast.
Some people are just not good at constantly learning things
> Many have the attitude of finding one edge case where it doesn't work well and dismissing AI as a useful tool
Many programmers work on problems (nearly) *all day* where AI does not work well.
> AI gives you an approximate answer; it's up to you to steer it to a good enough answer
Many programmers work on problems where correctness is of essential importance, i.e. if a code block is "semi-right" it is of no use - and even having to deal with code blocks where you cannot trust that the respective programmer thought deeply about such questions is a huge time sink.
> Some people are just not good at constantly learning things
Rather: some people are just not good at constantly looking beyond their programming bubble where AI might have some use.
Jenny, please try to conduct yourself with some sense of decorum here -- These are real people you're bullying. This isn't a hatemonger platform like some of the others. Please try to do better
they called me an idiot in the other thread for pointing out AI is broader than just LLMs (after they called everyone that uses AI an idiot) lol they’re clearly very angry and bitter, and I believe this is not the first account they’ve made to bombard threads with insults. in another comment they advocate for insulting the “AI idiots”
it’s not bullying in that it’s more entertaining than insulting, but still
ah in another comment (I am enjoying reading these):
> Ruthlessly bully LLM idiots
quite openly advocating for “bullying” people that use the bad scary neural nets!
I work in crypto (L1 chain) as a DevOps engineer (LOTS of baremetal, LOTS of CI/CD etc) and it's been amazing to see what Claude can do in this space too.
e.g. had an issue with connecting to AWS S3, gave Claude some of the code to connect and it diagnosed a CREDENTIALS issue without seeing the credentials file nor seeing the error itself. It can even find issues like "oh, you have an extra space in front of the build parameter that the user passed into a Jenkins job". Something that a human might have found in 30+ minutes of grepping, checking etc it found in <30 seconds.
It also makes it trivial to do things like "hey, convert all of the print statements in this python script to log messages with ISO 8601 time format".
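For concreteness, that kind of prompt lands on something like this (a minimal sketch using the standard `logging` module; the format string is just one reasonable way to get an ISO 8601-style timestamp, not necessarily what Claude would emit):

```python
import logging

# Before: print("processing", item)
# After:  logging.info("processing %s", item)
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S%z",  # ISO 8601-style, e.g. 2025-01-01T12:34:56+0000
)

logging.info("processing %s", "item-42")
```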
Folks talk about "but it adds bugs" but I'm going to make the opposite argument:
The excuse of "we don't have time to make this better" is effectively gone. Quality code that is well instrumented, has good metrics and easy to parse logs is only a few prompts away. Now, one could argue that was the case BEFORE we had AI/LLMs and it STILL didn't happen so I'm going to assume folks that can do clean up (SRE/DevOps/code refactor specialists) are still going to be around.
> gave Claude some of the code to connect and it diagnosed a CREDENTIALS issue without seeing the credentials file nor seeing the error itself
10 years ago google would have had a forum post describing your exact problem with solutions within the first 5 results.
Today google delivers 3 pages of content farm spam with basic tutorials, 30% of them vaguely related to your problem, 70% just containing "aws" somewhere, then stops delivering results.
The LLM is just fixing search for you.
Edit: and by the way, it can fix search for you just because somewhere out there there are forum posts describing your exact problem.
What's a "Code Refactor Specialist"?
Are you implying that in the future we'll have programmers who will just write code using AI and a specialist role whose job it would be to clean up that code? That isn't going to work, you'll need a superhuman for that role. People who write the code using AI have to be the ones who review it and they have to be responsible for the quality of that code.
Yes, I remember a while ago it fixed a pipeline problem where I had managed to copy and paste an IP with one of the digits missing at the end. I spent about an hour before that looking at everything else (all the other steps succeeded, but the last one 'timed out', because of the botched paste). As you said, it took <30 secs to instantly diagnose the problem.
What you suggested here is trivial with existing tools—linters in the first case, search-and-replace functions in editors for the second.
I have yet to see any evidence of the third case. I'm close to banning AI for my junior devs. Their code quality is atrocious. I don't have time for all that cleanup. Write it good the first time around.
'How do I do [X thing I need to accomplish in codebase]?'
"Here is here how you would do that."
Then I apply the code and it's broken and doesn't work.
'The code you supplied is broken because of [xyz], can you fix it?'
"Of course, you're so right! My apologies. Here is the corrected code."
It still doesn't work. Repeat this about 2-3 more times, and throw in me deleting the entire chat window and context and trying again in a new chat with a slightly different phrasing of the question to maybe get it to output the right answer this time, which it often doesn't.
I hate it. It's awful and terrible. Fundamentally, this technology will never not be terrible, because it is just predicting text tokens based off of probabilistic statistics.
We are moving up an abstraction layer. From the perspective of the business, my job is not to write code, my job is to ship products. The language you use to ship products is your tool of choice. Sure, it could be Python or Typescript, but my tool of choice is natural language.
The fact that I hear this mantra over and over again:
"She wrote a thing in a day that would have taken me a month"
This scares me. A lot.
I never found the coding part to be a bottleneck; the issues arise after the damn thing is in prod. If I work on something big (that will take me a month), that's going to be anywhere from (I'm winging these numbers) 10K to 25K LOC.
If that's the benchmark for me, the next guy using AI will spew out at a bare minimum double the amount of code, and in many cases 3x-4x.
The surface area for bugs is just vastly bigger, and fixing these bugs will eventually take more time than you "won" using AI in the first place.
It really depends on how you use it. I really like using AI for prototyping new ideas (it can run in the background while I work on the main project) and for getting the boring grunt work (such as creating CRUD endpoints on a RESTful API) out of the way, leaving me more time to focus on the code that really is challenging and needs a deeper understanding of the business or the system as a whole.
The boring stuff like CRUD always needs design. Otherwise you end up with a 2006-era PHP-like "this is a REST API" spaghetti monster. The fact that AI can't do this (and probably never will) is just another showstopper.
I tried AI, but the code it produces (on a higher level) is of really poor quality. Refactoring this is a HUGE PITA.
I keep seeing this over and over from so-called "engineers".
You can dismiss the current crop of transformers without dismissing the wider AI category. To me this is like saying that users "dismiss Computers" because they dismiss Windows and instead prefer Linux, or like saying someone rejects modern practices for not getting on the microservice hype train or not using React.
Intellisense pre-GPT is a good example of AI that wasn't using transformers.
And of course, you can criticise some usages of transformers in IDEs and editors while appreciating and using others.
"My coworker uses Claude Code now. She finished a project last week that would’ve taken me a month". This is one of those generalisations. There is no nuance here. The range of usage from boilerplate to vibe code level is vast. Quickly churning out code is not a virtue. It is not impressive to ship something only to find critical bugs on the first day. Nor is it a virtue using it at the cost of losing understanding of the codebase.
This rigid thinking by devs needs to stop imo. For so called rational thinkers, the development world is rife with dogma and simplistic binary thinking.
If using transformers at any level is cost-effective for all, the data will speak for itself. Vague statements and broad generalisations are not going to sway anyone and will just make this kind of article sound like validation-seeking behaviour.
I'm not even sure there is much room left for one.
There is very little alignment in starting assumptions between most parties in this convo. One guy is coding mission critical stuff, the other is doing throw away projects. One guy depends on coding to put food on table, the other does not. One guy wants to understand every LoC, other is happy to vibe code. One is a junior looking for first job, other is in management in google after being promoted out of engineering. One guy has access to $200/m tech, the other does not. etc etc
We can't even get consensus on tab vs spaces...we're not going to get AI & coding down to consensus or who is "right".
Perhaps a bit nihilistic & jaded, but I'm very much leaning towards "place your bets & may the odds be ever in your favour".
One has to wonder: why even bother writing a post like this? I’m guessing insecurity.
For what it’s worth, I’m fine with “falling behind.” I didn’t want to be a manager when I was at FAANG, and I don’t want to be an AI puppetmaster now. I got into this field because I actually love programming, and I’ll just keep on doing that while the world gets driven mad by their newfound corporate oracle, thanks. Feel free to lap me with your inscrutable 10,000-line PRs and enjoy your technical debt.
The author doesn't consider the possibility that engineers dismiss AI after having tried it, repeatedly. Not once, not twice, but consistently.
I am one of those dismissers. I am constantly trash talking AI. Also, I have tried more tools and more stress scenarios than a lot of enthusiasts. The high bars are not in my head, they are on my repositories.
Talk is cheap. Show me your AI generated code. Talk tech, not drama.
If you don't see the limitations of vibe coding, I shudder at the idea of maintaining your code even pre-AI.
Do I use it? Yes, a lot, actually. But I also spend a lot of time pruning its overly verbose and byzantine code; my Esc key is fading from the number of times I've interrupted it to steer it towards a non-idiotic direction.
It is useful, but if you trust it too much, you're creating a mountain of technical debt.
I've seen some of the dreck that bubbles up on those 'copilot' responses on searches, with glaring inconsistencies on subjects I'm very confident with. If I didn't know better, and used such stuff, debugging would ensue. Frankly, I'd much rather debug my own bugs than those hallucinated by some overly enthusiastic statistical algorithms.
It's probably not unrealistic that a programmer who learns Vim well could be, say, 2x more productive in Vim than in, say, Nano.
Yet programmers who have used Nano were not (at least not significantly) scoffed at or ridiculed. It was their choice of tool, and they were getting work done.
It seems unclear how much more productive AI coding tools can make a programmer; some people claim 10x, some claim it actually makes you slower. But let us suppose it is on average the same 2x productivity increase as Vim.
Why then was using Vim not heralded from every rooftop the same as using AI?
the language in this is entirely smarmy ai boosterisms, all the anti ai arguments it uses are things no real person has ever said and no real person ever would.
We’ve been “losing skills” to better tools forever, and it’s usually been a net positive. Nobody hand-writes a sorting algorithm in production to “stay sharp”, most of us don’t do long division because calculators exist, and plenty of great engineers today couldn’t write assembly (or even manage memory in C) comfortably. That didn’t make the industry worse; it let us build bigger things by working at higher abstraction.
LLM-assisted coding feels like the next step in that same pattern. The difference is that this abstraction layer can confidently make stuff up: hallucinated APIs, wrong assumptions, edge cases it didn’t consider. So the work doesn’t disappear, it shifts. The valuable skill becomes guiding it: specifying the task clearly, constraining the solution, reviewing diffs, insisting on tests, and catching the “looks right but isn’t” failures. In practice it’s like having a very fast junior dev who never gets tired and also never says “I’m not sure”.
That’s why I don’t buy the extremes on either side. It’s not magic, and it’s not useless. Used carelessly, it absolutely accelerates tech debt and produces bloated code. Used well, it can take a lot of the grunt work off your plate (refactors, migrations, scaffolding tests, boilerplate, docs drafts) and leave you with more time for the parts that actually require engineering judgement.
On the “will it make me dumber” worry: only if you outsource judgement. If you treat it as a typing/lookup/refactor accelerator and keep ownership of architecture, correctness, and debugging, you’re not getting worse—you’re just moving your attention up the stack. And if you really care about maintaining raw coding chops, you can do what we already do in other areas: occasionally turn it off and do reps, the same way people still practice mental math even though Excel exists.
Privacy/ethics are real concerns, but that’s a separate discussion; there are mitigations and alternatives depending on your threat model.
At the end of the day, the job title might stay “software engineer”, but the day-to-day shifts toward “AI guide + reviewer + responsible adult.” And like every other tooling jump, you don’t have to love it, but you probably do have to learn it—because you’ll end up maintaining and reviewing AI-shaped code either way.
Basically, I think the author hit the nail on the head.
As someone whose stance is to be extremely skeptical of AI, I threw Claude at a complex feature request in a codebase I wasn't very familiar with, and it managed to come up with a solution that was 99% acceptable. I was very impressed, so I started using it more.
But it's really a mixed bag, because for the subsequent 3-4 tasks in a codebase that I was familiar with, Claude managed to produce over-commented, over-engineered slop that didn't do what I asked for and took shortcuts in implementing the requirements.
I definitely wouldn't dismiss AI at this point because it occasionally astounds me and does things I would never in my life have imagined possible. But at other times, it's still like an ignorant new junior developer. Check back again in 6 months I guess.
I am neither pro- nor anti-AI. I just don't like the manipulative and blackmailish tactics its proponents use to get me to use it. I will use it whenever I find it useful, not because you tell me I'm getting "left behind" by not adopting it.
> The gap is widening between engineers who’ve integrated these tools and engineers who haven’t.
Let's hold off on the evaluation until the honeymoon phase is over.
At the moment there are plenty of companies that offer cheap AI tools. It will not stay that way.
At the moment most of their training data is man-made rather than AI-made, and training on AI-made data makes the models worse.
It will not stay that way.
IMO those screencasts work because they are painstakingly planned toy projects built from scratch.
Even without AI you cannot do a tight 10-minute video on legacy code unless you have done a lot of work ahead of time to map it out - and then what's the point?
That would be fantastic. I’ve seen so many claims like the author’s
> [Claude Code and Cursor] can now work across entire codebases, understand project context, refactor multiple files at once, and iterate until it’s really done.
But I haven’t seen anyone doing this on e.g. YouTube? Maybe that kind of content isn’t easy to monetize, but if it’s as easy to use AI as everyone says surely someone would try.
> if it’s as easy as everyone says surely someone would try.
Yeah, 18 months ago we were apparently going to have personal SaaSes and all sorts of new software - I don't see anything but an even more unstable web than ever before
I would never have had a working LoongArch emulator in 2 weeks at the kind of quality that I desire without it. Not because it writes perfect code, but because it sets everything up according to my will, does some things badly, and then I can take over and do the rest. The first week I was just amending a single commit that set everything up right and got a few programs working. A week after that it runs on multiple platforms with JIT-compilation. I'm not sure what to say, really. I obviously understand the subject matter deeply in this case. I probably wouldn't have had this result if I ventured into the unknown.
Although, I also made it create Rust and Go bindings. Two languages I don't really know that well. Or, at least not well enough for that kind of start-to-finish result.
Another commenter wrote a really interesting question: How do you not degrade your abilities? I have to say that I still had to spend days figuring out really hard problems. Who knew that 64-bit MinGW has a different struct layout for gettimeofday than 64-bit Linux? It's not that it's not obvious in hindsight, but it took me a really long time to figure out that was the issue, when all I have to go on is something that looks like incorrect instruction emulation. I must have read the LoongArch manual up and down several times and gone through instructions one by one, disabling everything I could think of, before finally landing on the culprit just being a mis-emulated kind-of legacy system call that tells you the time. ... and if the LLM had found this issue for me, I would have been very happy about it.
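To make the mismatch concrete, here's a rough sketch of the two layouts (the field widths are my assumption of the usual ABIs: LP64 Linux uses 64-bit longs in struct timeval, while LLP64 Windows/MinGW keeps long at 32 bits; this is not code from the emulator):

```python
import ctypes

class TimevalLinuxX86_64(ctypes.Structure):
    # struct timeval { time_t tv_sec; suseconds_t tv_usec; } -- both 64-bit on LP64
    _fields_ = [("tv_sec", ctypes.c_int64), ("tv_usec", ctypes.c_int64)]

class TimevalMinGW64(ctypes.Structure):
    # struct timeval { long tv_sec; long tv_usec; } -- long stays 32-bit on LLP64
    _fields_ = [("tv_sec", ctypes.c_int32), ("tv_usec", ctypes.c_int32)]

print(ctypes.sizeof(TimevalLinuxX86_64))  # 16 bytes
print(ctypes.sizeof(TimevalMinGW64))      # 8 bytes: same call, half the layout
```

A guest built against one layout reading a buffer filled for the other silently gets garbage times, which is exactly the kind of bug that only surfaces as "mysteriously wrong emulation".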
There are still unknowns that LLMs cannot help with, like running Golang programs inside the emulator. Golang has a complex run-time that uses signal-based preemption (sysmon) and threads and many other things, which I do emulate, but there is still something missing to pass all the way through to main() even for a simple Hello World. Who knows if it's the ucontext that signals can pass or something with threads or per-state signal state. Progression will require reading the Go system libraries (which are plain source code), the assembly for the given architecture (LA64), and perhaps instrumenting it so that I can see what's going wrong. Another route could be implementing an RSP server for remote GDB via a simple TCP socket.
As a conclusion, I will say that I can only remember twice I ditched everything the LLM did and just did it myself from scratch. It's bound to happen, as programming is an opinionated art. But I've used it a lot just to see what it can dream up, and it has occasionally impressed. Other times I'm in disbelief as it mishandles simple things like preventing an extra masking operation by moving something signed into the top bits so that extracting it is a single shift, while sharing space with something else in the lower bits.
Overall, I feel like I've spent more time thinking about more high-level things (and occasionally low-level optimizations).
> if you haven’t tried modern AI coding tools recently, try one this week.
I don’t think I will. I am glad I have made the radical decision, for myself, to wilfully remain strict in my stance against generative AI, especially for coding. It doesn’t have to be rational, there is good in believing in something and taking it to its extreme. Some avoid proprietary software, others avoid eating sentient beings, I avoid generative AI on pure principle.
This way I don’t have to suffer from these articles that want to make you feel bad, and become almost pleading, “please use AI, it’s good now, I promise” which I find frankly pathetic. Why do people care so much about it to have to convince others in this sad routine? It honestly feels like some kind of inferiority complex, as if it is so unbearable that other people might dislike your favourite tool, that you desperately need them to reconsider.
It's been somewhat disheartening to see many techie spaces (including Hacker News) become so skeptical and anti-AI. It's as if the luddites are at it again, rejecting progress because of a bad impression or because they fear the consequences.
AI is a tool and it should be treated as such.
Also, beware of snake oil salesmen.
Is AI going to integrate widely into the world? Yes.
Is it also going to destroy all the jobs in the world? Of course not, luddites don't understand the naïvety of this position.
And even if LLMs turn out to really be a net positive and a requirement for the job, they're antithetical to what most software developers appreciate and enjoy (precision, control, predictability, efficiency...).
There sure seem to be two kinds of software developers: those who enjoy the practice and those who're mostly in it for the pay. If LLMs win, it will be the second group who'll stay on the job, and that's fine; it won't mean that the first group was made of luddites, but that the job has turned into crap that others will take over.
The two categories of software developers you mention already existed pre ChatGPT and will likely continue to exist. If anything, AI's going to make those who're in it just for the money much less relevant.
Do you really think that Software Engineering is going to be less about precision, control, predictability, and efficiency? These are fundamental skills regardless of AI.
Yeah it boggles my mind all the people on here constantly dismissing LLMs.
It's very clearly getting better and better rapidly. I don't think this train is stopping even if this bubble bursts.
The cold ass reality is: we're going to need a lot fewer software engineers moving forward. Just like agriculture now needs way fewer humans than it did in the past to do the same work.
I hate to be blunt but if you're in the bottom half of the developer skill bell curve, you're cooked.
If you hate reading other people's code, then you'll hate reading llm generated code, then all you'll ever be with ai at best is yet another vibe coder who produces piles of code they never intend to read, so you should have found another career even before llms were a thing.
Responsible use of ai means reading lots and lots of generated code, understanding it, reviewing and auditing it, not "vibe coding" for the purpose of avoiding ever reading any code.
> If you hate reading other people's code, then you'll hate reading llm generated code, then all you'll ever be with ai at best is yet another vibe coder who produces piles of code they never intend to read, so you should have found another career even before llms were a thing.
I do like to read other people's code if it is of an exceptionally high standard. But otherwise I am very vocal in criticizing it.
I rely on AI coding tools. I don’t need to think about it to know they’re great. I have instincts which tell me convenience = dopamine = joy.
I tested ChatGPT in 2022, and asked it to write something. It (obviously) got some things wrong; I don’t remember what exactly, but it was definitely wrong. That was three years ago and I've forgotten that lesson. Why wouldn't I? I've been offloading all sorts of meaningful cognitive processes to AI tools since then.
I use Claude Code now. I finished a project last week that would’ve taken me a month. My senior coworker took one look at it and found 3 major flaws. QA gave it a try and discovered bugs, missing features, and one case of catastrophic data loss. I call that “nitpicking.” They say I don’t understand the engineering mindset or the sense of responsibility over what we build. (I told them it produces identical results and they said I'm just admitting I can't tell the difference between skill and scam).
“The code people write is always unfinished,” I always say. Unlike AI code, which is full of boilerplate, adjusted to satisfy the next whim even faster, and generated by the pound.
I never look at Stack Overflow anymore, it's dead. Instead I want the info to be remixed and scrubbed of all its salient details, and have an AI hallucinate the blanks. That way I can say that "I built this" without feeling like a fraud or a faker. The distinction is clear (well, at least in my head).
Will I ever be good enough to code by myself again? No. When a machine showed up that told me flattering lies while sounding like a silicon valley board room after a pile of cocaine, I jumped in without a parachute [rocket emoji].
I also personally started to look down on anyone who didn't do the same, for threatening my sense of competence.
From some of the engineers I've debated this with, I think some of them have just dug their heels in at this point and have decided they're never going to use LLM tools, period, and are just clinging to the original arguments without really examining the reality of the situation. In particular this "The LLM is going to hallucinate subtle bugs I can't catch" one. The idea that LLMs make subtle mistakes that are somehow more subtle, insidious and uncatchable compared to any random 25 pull requests you get from humans is simply ridiculous. The LLM makes mistakes that stick out to you like a sore thumb, because they're not your mistakes. The hardest mistakes to catch are your own, because your thinking patterns are what made them in the first place.
The biggest ongoing problem with LLMs for code is that they have no ability to express low confidence in solutions where they don't really have an answer; instead they just hallucinate things. Claude will write ten great bash lines for you, but then on the eleventh it will completely hallucinate an option on some Linux utility you hardly have time to care about, where the correct answer is "these tools don't actually do that and I don't have an easy answer for how you could do that". At this point, when Claude gets itself into an endless loop of thought, I am very quick to notice that I'm going about something the wrong way. Someone less experienced would have a very hard time recognizing the difference.
> The idea that LLMs make subtle mistakes that are somehow more subtle, insidious and uncatchable compared to any random 25 pull requests you get from humans is simply ridiculous.
This is plainly true, and you are just angry that you don't have a rebuttal
I didn't say the LLM does not make mistakes; I said the idea that a reviewer is going to miss them at some rate that is any different from mistakes a human would make is ridiculous.
Missing in these discussions is what kinds of code people are talking about. Clearly if we're talking about a dense, highly mathematical algorithm, I would not have an LLM anywhere near that. We are talking about day-to-day boilerplate / plumbing stuff. The vast majority of boring grunt work that is not intellectually stimulating. If your job is all Carnegie-Mellon level PHD algorithm work, then good for you.
edit: It looks like you made this account four days ago to troll HN on AI stuff. I get it; I have a bit of a mission here myself, to pointedly oppose the entrenched culture (namely the extreme right wing elements of it). But your trolling is careless and repetitive enough that I have to ask: is this an LLM account instructed to troll HN users about LLM use? Funny.
Will what I do now be automated eventually or before long? Probably, we keep automating things, so one has to swim up the abstraction layers. Doesn't mean one has to think less.
To use them well you still need to know everything - whenever you prompt lazily you're opening yourself up to a fuckton of technical debt.
That might be acceptable to some, but is generally a bad idea
If it was AGI, you'd be right though...
My cynical view is you can't, and that's the point. How many times before have we seen the pattern of "company operates at staggering losses while eliminating competition or becoming entrenched in enough people's lives, and then clamps down to make massive profits"?
Indeed I believe that, but in my experience these skills get more and more useless in the job market. In other words: retaining such skills (e.g. low-level coding) is an intensively practised hobby for such people, one that is (currently) of "no use" in the job market.
Compare the ROI of that to being able to get roughly the software you need in a few hours of prompting; it's a new paradigm, progress is (still) exponential, and we don't know where exactly things will settle.
Experts will get scarce and very sought after, but once they start to retire in 10-20-30 years... either dark ages or AI overlords await us.
With that, the students were more directly a match for the in-demand roles, but the reality is that other roles will see a reduction in supply.
The question here is: Will there be a need in the future for people who can actually code?
I think so. I also believe the field is evolving and that the pendulum always swings to extremes. Right now we are just beginning to see the consequences of the impact of AI on stability & maintainability of software. And we have not seen the impact of when it catastrophically goes wrong.
If you, together with your AI buddy, cannot solve the problem on this giant AI codebase, pulling in a colleague probably isn't going to help anymore.
The amount of code that is now being generated with AI (and accepted because it looks good enough) is causing long-term stability to suffer. What we are seeing is that AI is very eager to make the fixes without any regard towards past behavior or future behavior.
Of course, this is partially prevented by having better prompts and human reviews. But this is not the direction companies want us to go in. They want us to prompt and move on.
AI will very eagerly create 10,000 pipes from a lake to 10,000 houses in need of water. And branch off of them. And again.
Until one day you realize the pipes have lead in them and you need to replace them.
Today this is already hard. With AI it's even harder because there is no unified implementation somewhere. It's all copy pasted for the sake of speed and shipping.
I have yet to see a software engineer who stands behind every line of code produced actually be faster on net-new development using AI. In fact, most of the time they're slower, because the AI doesn't know. And even when they use AI, the outcome is worse because there is less learning - the kind of learning that eventually pushes the boundaries of 'how can we make things better'.
Couldn't the same statement, to some extent, be applied to using a sorting lib instead of writing your own sorting algorithm? Or how about using a language like python instead of manually handling memory allocation and garbage collection in C?
> What I want above all is AI that helps me become better at my job and continue to build skills and knowledge
So far, in my experience, the quality of what AI outputs is directly related to the quality of the input. I've seen some AI projects made by junior devs that have an incredibly messy and confusing architecture, despite them using the same language and LLM model that I use. The main difference? In my case, the AI work was based on the patterns and architecture that I designed thanks to my knowledge, which also happens to ensure that the AI will produce less buggy software.
I think the problem the OP is trying to get at is that if we only program at the level of libs, we lose the ability to build fundamentally cooler/better things. Not everyone does that, of course, but AI is not generating fundamentally new code; it's copy-pasting. Copy-pasting has its limits, especially for people in the long term. Copy-paste coders don't build game engines. They don't write operating systems. These are esoteric to some people - how many people actually write those things! - but there is a craftsmanship lost in converting more people to copy-paste, albeit with intelligence.
I personally lean towards the view that this type of abstraction over thinking is problematic long term. There is a lot of damage being done to people, not necessarily in coding but in reading/writing, especially in grades 9-12 and college. When we ask people to write essays and read things, AI totally short-circuits the process, but the truth is no one gets any value from the finished product of an essay about "Why Columbus coming to the new world caused X, Y or Z". The value comes from the process of thinking that used to be required to generate that essay. This is similar to the OP's worry. You can say, well, we can do both and think about it as we review AI outputs. But humans are lazy. We don't mull over the calculator thinking about how some value is computed; we just take it and run. I think there is a lot more value/thinking in the application of the calculated results, so the calculator didn't destroy mathematical thinking, but the same is not necessarily true of how AI is being applied. Your observation of the junior devs' output lends support to my view: we are short-circuiting the thinking. If those juniors can learn the patterns then there is no issue, but that's not guaranteed. I think that uncertainty is the OP's worry, maybe restated in a better way.
Love to hear your thoughts!
But at this point, it's like refusing to use vehicles to travel long distances for fear of becoming physically unfit. We go to the gym.
> They go on to discuss what is good or bad in writing. Socrates tells a brief legend, critically commenting on the gift of writing from the Egyptian god Theuth to King Thamus, who was to disperse Theuth's gifts to the people of Egypt. After Theuth remarks on his discovery of writing as a remedy for the memory, Thamus responds that its true effects are likely to be the opposite; it is a remedy for reminding, not remembering, he says, with the appearance but not the reality of wisdom. Future generations will hear much without being properly taught, and will appear wise but not be so, making them difficult to get along with.
https://en.wikipedia.org/wiki/Phaedrus_(dialogue)
This gets particularly tricky when the task requires a competency that you yourself lack. But here too the question is - would you be willing to delegate it to another human whom you don't fully trust (e.g. a contractor)? The answer for me is in many cases "yes, but I need to learn about this enough so that I can evaluate their work" - so that's what I do, I learn what I need to know at the level of the tech lead managing them, but not at the level of the expert implementing it.
That said, as long as there’s the potential for AI to hallucinate, we’ll always need to be vigilant - for that reason I would want to keep my programming skills sharp.
AI assisted software building by day, artisanal coder by night perhaps.
Secondly, bloat comes in so many forms, and they all have different reasons. Did you mean bloated as in huge dependency installs like those node modules? Or did you mean an Electron app where a browser is bundled? Or perhaps you mean the insane number of FactoryFactoryFactoryBuilder classes that Java programmers have to bear with because of misguided overarchitecting? The 7 layers of network protocols - is that bloat?
These are human decisions - trade-offs between delivering values fast and performance. Foundational layers are usually built with care, and the right abstractions help with correctness and performance. At the app layers, requirements change more quickly and people are more accepting of performance hits, so they pick tech stacks that you would describe as bloated for faster iteration and delivery of value.
So even if I used abstraction as an analogy, I don’t think that automatically implies AI assisted coding will lead to more bloat. If anything it can help guide people to proper engineering principles and fit the code to the task at hand instead of overarchitecting. It’s still early days and we need to learn to work well with it so it can give us what we want.
I'm not too worried about degrading abilities since my fundamentals are sound and if I get rusty due to lack of practice, I'm only a prompt away from asking my expert assistant to throw down some knowledge to bring me back up to speed.
Whilst my hands-on programming has reduced, the variety of software I create has increased. I used to avoid writing complex automation scripts in bash because I kept getting blocked trying to remember its archaic syntax, so I'd typically use bun/node for complex scripts, but with AI I've switched back to writing most of my scripts in bash (it's surprising what's possible in bash), and have automated a lot more of my manual workflows since it's so easy to do.
I also avoided Python because the lack of typing and API discovery slowed me down a lot, but with AI autocomplete, whenever I need to know how to do something I'll just write a method stub with comments and AI will complete it for me. I'm now spending lots of time writing Python, to create AI tools and agents, ComfyUI custom nodes, image and audio classifiers, PIL/ffmpeg transformations, etc. Things I'd never consider before AI.
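The stub-and-complete pattern looks roughly like this (a hypothetical example; the signature and docstring are the part I write, the body is the sort of thing I expect the autocomplete to propose and me to review):

```python
from pathlib import Path

def find_large_images(root: str, min_bytes: int = 1_000_000) -> list[Path]:
    """Return all .png/.jpg/.jpeg files under `root` larger than `min_bytes`,
    sorted by size, largest first."""
    # The body below is what I'd let the assistant fill in, then check over.
    exts = {".png", ".jpg", ".jpeg"}
    files = [
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in exts and p.stat().st_size >= min_bytes
    ]
    return sorted(files, key=lambda p: p.stat().st_size, reverse=True)
```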
I also don't worry about its effects as I view it as inevitable, with the pendulum having swung towards code now being dispensable/cheap to create, what's more important is velocity and being able to execute your ideas quickly, for me that's using AI where I can.
This is not a new problem I think. How do you use Google, translator, (even dictionaries!), etc without "degenerating" your own abilities?
If you're not careful and always rely on them as a crutch, they'll remain just that; without actually "incrementing" you.
I think this is a very good question. How should we actually be using our tools such that we're not degenerating, but growing instead?
By writing down every foreign word/phrase that I don't know, and adding a card for it to my cramming card box.
"Coding in the Red-Queen Era" https://corecursive.com/red-queen-coding/ (2025)
As humans we have developed tools to ease our physical needs (we don't need to run, walk or lift things) and now we have a tool that thinks and solves problems for us.
Personally, I think my skill lies in solving the problem by designing and implementing the solution, not in how I code day-to-day. After you write the 100th getter/setter you're not really adding value, you're just performing a chore because of language/programming patterns.
Using AI and being productive with it is an ability and I can use my time more efficiently than if I were not to use it. I'm a systems engineer and have done some coding in various languages, can read pretty much anything, but am nowhere near mastery in any of the languages I like.
Setting up a project, setting up all the tools and boilerplate, writing the main() function, etc are all tasks that if you're not 100% into the language take some searching and time to fiddle. With AI it's a 2-line prompt.
Introducing plumbing for yet another feature is another chore: search for the right libraries/packages, add dependencies, learn to use the deps, create a bunch of files, sketch the structs/classes, sketch the methods, but not everything is perfectly clear yet, so the first iteration is "add a bunch of stuff, get a ton of compiler warnings, and then refine the resulting mess". With AI it's a small paragraph of text describing what I want and how I'd like it done, asking for a plan, and then simply saying "yes" if it makes sense. Then wait 5-15m. Meanwhile I'm free to watch what it's doing and catch it doing something stupid or wrong, or think about the next logical step.
Normally the result for me has been 90% good, I may need to fix a couple things I don't like, but then syntax and warnings have already been worked out, so I can focus on actually reading, understanding and modifying the logic and catching actual logic issues. I don't need to spend 5+ days learning how to use an entire library, only to find out that the specific one I selected is missing feature X that I couldn't foresee using last week. That part takes now 10m and I don't have to do it myself, I just bring the finishing touches where AI cannot get to (yet?).
I've found that giving the tool (I personally love Copilot/Claude) all the context you have (e.g. .github/copilot-instructions.md) makes a ton of difference with the quality of the results.
So what's my cut of something basically worthless? Doesn't seem lucrative in the long run.
Prices are not fundamental truths. They’re numbers that happen to work. Ideally price > cost, but that’s not even reliably true once you factor in fixed costs, subsidies, taxes, rebates, etc. Boeing famously came out and said they couldn't figure out how much it actually cost to make a 747, back when they were still flying.
Here's a concrete example: You have a factory with $50k/month in fixed costs. Running it costs $5 per widget in materials and labor. You make 5,000 widgets.
Originally you sell them for $20. Revenue $100k, costs $75k, pocket a cool $25k every month. Awesome.
Then, a competitor shows up and drives the price down to $10. Now revenue is $50k. On paper you “lose money” vs your original model.
But if you shut the factory down, you still eat the full $50k fixed cost and make $0, so you lose $50k. If you keep running, each widget covers its $5 marginal cost and contributes $5 toward fixed costs - $25k in total - so you lose $25k instead of $50k. Keeping the factory running is still the less bad option.
That’s the key mistake in "AI output is worth zero." Zero marginal value does not imply zero economic value. The question is whether it covers marginal cost and contributes to something else you care about: fixed costs, distribution, lock-in, differentiation, complements, optionality.
We've faced this many times before so AI isn't special in this regard. It just makes the gap between marginal cost and perceived value impossible to ignore.
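To make the break-even logic explicit, here's the same arithmetic in code (all figures are the ones from the widget example above; the helper function is mine):

```python
FIXED_COST = 50_000   # $ per month
MARGINAL_COST = 5     # $ per widget, materials and labor
VOLUME = 5_000        # widgets per month

def monthly_profit(price: float) -> float:
    # contribution margin per widget, times volume, minus fixed costs
    return VOLUME * (price - MARGINAL_COST) - FIXED_COST

print(monthly_profit(20))  #  25000: the original $25k/month
print(monthly_profit(10))  # -25000: keep running at the new price, lose $25k
print(-FIXED_COST)         # -50000: shut down and eat the full fixed cost anyway
```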
you on another comment in here lol you’re blinded by the hate! let it flow through you! insult the users of the bad neural nets until the world heals! don’t back down!
In my experience, AI users are meaningfully stupider and more gullible than the rest of the population.
…or change your behavior and be a better person, whatever works
btw I’ve been doing “the AI” stuff since 2018 in industry and before in academia. I find your worldview displayed in your comments to be incredibly small and easily dismissible
They dismiss the religion-like hype machine.
You want to market to engineers, stick to provable statements. And address some of their concerns. With something other than "AI is evolving constantly, all your problems will be solved in 6 months, just keep paying us."
Oh by the way, what is the OP trying to sell with these FOMO tactics? Yet another ChatGPT frontend?
Really though, the potential in this tech is unknown at this point. The measures we have suggest there's no slowdown in progress, and it isn't unreasonable for any enthusiast or policy maker to speculate about where it could go, or how we might need to adjust our societies around it.
What is posted to HN daily is beyond speculation. I suppose a psychologist has a term for what it is, I don't.
Edit: well, guess what? I asked an "AI":
Psychological Drivers of AI Hype:
By the way, the text formatting is done by the "AI" as well. Asked it to make the table look like a table on HN specifically.
As a 0.1x low effort Hacker News user who can't lift a pinky to press a shift or punctuation key, you should consider using AI to improve the quality of your repetitive off-topic hostile vibe postings and performative opinions.
Or put down the phone and step away from the toilet.
AI is for idiots
And you just unwittingly proved my point, so I'm downgrading you to an 0.01x low effort Hacker News user.
If there are no other effects of AI than driving people like you out of the industry, then it's proven itself quite useful.
Edit: ok I will concede that point to you that I was mistaken about 0.01x, for candidly admitting (and continuously providing incontrovertible proof) that you're only a 0.001x low effort Hacker News user. I shouldn't have overestimated you, given all the evidence.
You download a tool written by a human, and you can reasonably expect that it does what the author claims it does. What's more, you can reasonably expect that if it fails, it will fail in the same way under the same conditions.
I wrote some Turing Machine programs back in my Philosophy of Computer Science class during the 80's, but since then my Turing Machine programming skills have atrophied, and I let LLMs write them for me now.
And that's where the "AI" is lacking.
"AI can write a testcase". Can it write a _correct_ test case (i.e. one that i only have to review, like i review my colleague work) ?
"AI can write requirements". Now, that i'm still waiting to see.
And is the test case useful for something? On non-textbook code?
AI developers are 0.1x "engineers"
Allow me to repeat myself: AI is for idiots.
Fully expect them to include youtube levels of advertising in 1-2 years though. Just to compensate for the results being somewhat not spammy.
Or perhaps Kagi, but apparently they only give you the tools to exclude the content farms, they don't exclude them themselves.
One does not need to embrace a tool to recognize its horrendous effects and side-effects. I can critique assault rifles without ever having handled one. I can critique street narcotics without taking drugs, and I can critique nuclear weapons without suffering from a blast personally. The idea that if you don't use a tool you can't form conclusions about why it's bad is devoid of any factual or historical grounding. It's an empty rhetorical device which can be used endlessly.
Literally 100% of the inventions which come out now or in the future can be handled in the exact same way. Somebody invents a robot arm that spoon feeds you so you never need to feed yourself with your own hand ever again? Oh this is revolutionary, everybody's going to install this in their home! What, you haven't? And you think it's a bad idea? Gosh, you're so backwards and foolish. The world is moving on, don't be left behind with your manual hand feeding techniques.
This article is like 1.5 years out of date. The discourse around genAI as a tech movement and its nearly uniformly terrible outcomes has moved on. OP hasn't. Seems the gap is widening between writers who are talking about these tools soberly and seriously, and writers who aren't.
But, to be fair, that wasn't the kind of critique it was talking about. If your critique of guns is moral, strategic, etc., then yes, you can make it without actually trying out guns. If your critique is that guns physically don't work, that they don't actually do the thing they are claimed to do, then some hands-on testing would quickly dispel that notion.
The article is talking about those kinds of critiques, ones of the "AI doesn't work" variety, not "AI is harmful".
The OP doesn't even deserve your sincere and to-the-point response.
Especially the part about somebody completing a project quickly being depicted as "cheating". Absurd.
Imagine investing tons of efforts and money into a startup, just to get a clone a week after launch, or worse - before your launch.
- google maps
- power tools
- complex js frameworks
- ORMs
- the electrical grid (outages are a thing)
- and so on…
This isn’t a new problem unique to LLMs.
Practice using the tool intelligently and responsibly, and also work to maintain your ability to function without when needed.
These tools are seriously starting to become actually useful, and I’m sorry but people aren’t lying when they say things have changed a lot over the last year.
Sometimes the result is not great, sometimes it requires manual updates, sometimes it just goes in the wrong direction and we discard the proposal. The good thing is you can initiate such a large change, go get a coffee, and when you're back you can take a look at the changes.
Anyway, overall those tools are pretty useful already.
It doesn't matter if you're using AI or not, just like it never mattered if you were using C or Java or Lisp, or using Emacs or Visual Studio, or using a debugger or printf's, or using Git or SVN or Rational ClearCase.
What really matters in the end is what you bring to market, and what your audience thinks of your product.
So use all the AI you want. Or don't use it. Or use it half the time. Or use it for the hard stuff, but not the easy stuff. Or use it for the easy stuff, but not the hard stuff. Whatever! You can succeed in the market with AI-generated product; you can fail in the market with AI-generated product. You can succeed in the market with human-generated product; you can fail in the market with human-generated product.
It's full of bloat: unused HTTP endpoints, lots of small utility functions that could have been inlined (but now come with unit tests!), missing translations, only somewhat correct design...
The quality wasn't perfect before, now it has taken a noticeable dip. And new code is being added faster than ever. There is no way to keep up.
I feel that I can either just give in and stop caring about quality, or I'll be fixing everyone else's AI code all of my time.
I'm sure that all my particular colleagues are just "holding it wrong", but this IS a real experience that I'm having, and it's been getting worse for a couple of years now.
I am also using AI myself, just in a much more controlled way, and I'm sure there's a sweet spot somewhere between "hand-coding" and vibing.
I just feel that as you inch in on that sweet spot, the advertised gains slowly wash away, and you are left with a tangible, but not as mind-blowing, improvement.
In my line of work, I keep seeing it generate sloppy state machines with unreachable or superfluous states, bad floating-point arithmetic, and especially trying to do everything in the innermost loop of a nested iteration.
It also seems to love hallucinating Qt features that should exist but don’t, which I find mildly amusing.
There is also a big, uncomfortable truth regarding "AI" coding tools: They are trained on open-source code, yet they ignore the licenses attached to that code. If it's unethical for me to copy-and-paste MIT licensed code without including the license text, then it's unethical to let an LLM do it on my behalf.
LLMs are paving the way to a dystopia where there's no motivation for humans to create, and that world sounds miserable.
My aunt was born in the 1940s, and was something of an old-fashioned feminist. She didn't know why she wasn't allowed to wear pants, or why she had to wait for the man to make the first move, etc. She tells a story about a man who ditched her at a dance once because she didn't know the "latest dance." Apparently in the 1950s, some idiot was always inventing a new dance that everyone _just had to follow_. The young man was so embarrassed that he left her at the dance.
I still think about this story, and think about how awful it would have been to live in the 40s. There always has been social pressure and change, but the "everyone's got to learn new stupid dances all the time" sort of pressure feels especially awful.
This really reminds me of the last 10-20 years in technology. "Hey, some dumb assholes have built some new technology, and you don't really have the choice to ignore it. You either adopt it too, or are left behind."
Predicting the future, I can tell you with certainty that This Too Shall Pass.
Just don't ask me what 'This' is: A fad, or a sea-change?
My main Luddite objection to the current passion for LLM coding assistance is the new dependencies created/fostered within the software engineering diaspora. Not only are we now dependent on cloud access for so much of our SE, but we'll also depend on the ongoing build-out, as LLM infrastructure bulldozes our real-world landscape and we experience everything from DRAM shortages to unwanted (NIMBY) construction of data centers. All based on the idea (yet again) that this is it.
The question becomes, is it (or will it be) worth it?
My personal prism of past experience does not lend itself to an easy answer. The move from assembly language to C was a no-brainer, but I remember resisting a transition from C to C++ ("I can do that in C with structs and function pointers") and the surge in OOP and COM and... the list goes on.
I remember objecting to Rational Rose and UML because I just didn't trust code generated algorithmically. Boilerplate with artifacts. I don't think I was wrong to hesitate there.
But I might be wrong now, to let others push the leading bleeding edge. Maybe it's time to get into it.
Can I just download a trained LLM and host it myself, without a dependency on internet/cloud corporate/overlord/rented-infrastructure?
I am willing to try, but I must declare that-- where I am, the Personal Computing revolution is not over. We still haven't won. And the rebellion: against auto-updates, telemetry, subscription models, any usage or dependency of/on your internet against your will. The fight for freedom goes on.
Can I get a Claude Code to live in my home with me, air-gapped and all mine?
Will I spend all my time spanking the agent?
Honest question for the engineers here. Have you seen this happening at your company? Are strong engineers falling behind when refusing to integrate AI into their workflow?
One nice change however is that you can guide the latter towards a total refactor during code review and it takes them a ~day instead of a ~week.
A guy will proudly deploy something he vibe coded, or “write the documentation” for some app that a contractor wrote, and then we get someone in the business telling us there’s a bug because it doesn’t do what the documentation says, and now I’m spending half a day in meetings to explain and now we have a project to overhaul the documentation (meaning we aren’t working on other things), all because someone spent 90 seconds to have AI generate “documentation” and gave themselves a pat on the back.
I look at what was produced and just lay my head down on the desk. It’s all crap. I just see a stream of things to fix, convention not followed, 20 extra libraries included when 2 would have done. Code not organized, where this new function should have gone in a different module, because where it is now creates tight coupling between two modules that were intentionally built to not be coupled before.
It’s a meme at this point to say, ”all code is tech debt”, but that’s all I’ve seen it produce: crap that I have to clean up, and it can produce it way faster than I can clean it up, so we literally have more tech debt and more non-working crap than we would have had if we just wrote it by hand.
We have a ton of internal apps that were working, then someone took a shortcut and 6 months later we’re still paying for the shortcut.
It’s not about moving faster today. It’s about keeping the ship pointed in the right direction. AI is a guy on a jet ski doing backflips, telling us we’re falling behind because our cargo ship hasn’t adopted jet skis.
AI is a guy on his high horse, telling everyone how much faster they could go if they also had a horse. Except the horse takes a dump in the middle of the office and the whole office spends half their day shoveling crap because this one guy thinks he’s going faster.
This is kind of the fundamental disagreement in the whole discourse isn't it? If you could prove this is true, a lot of arguments stop making sense from the anti AI people, though not all of them. But nobody has proved this. And what is the gap? If the gap is in skill, the AI users are falling behind. If it's productivity, 1. Prove it, 2. is it more in my self interest to be highly productive or to be highly skilled?
Personally, I am already able to work 5 hours a week and convince my boss it's 40, with glowing performance reviews; I am just that productive. And I don't want to use AI. So if you lads can go ahead and gain 8x productivity and make me work a full job to compete, oh well, I should do that anyway.
I also find that the actual coding is important. The typing may not be the most interesting bit, but it's one of the steps that helps refine the architecture I had in my head.
That happens to produce good code as a side effect. And a chat bot is perfect for this.
But my obsession is not with output. Every time I use AI agents, even if it does exactly what I wanted, it’s unsatisfying. It’s not something I’m ever going to obsess over in my spare time.
What helps me is:
- Prefer faster models like VSCode's Copilot Raptor Mini which, despite the name, is maybe 80% as capable as Sonnet 4.5, and much faster. It is a fine-tuned GPT-5 mini.
- Start writing the next prompt while LLMs work, or keep pondering the current problem at hand. This helps our chaotic brains stay focused.
I feel like I'm living in a different world: every time a new model comes out, everyone is in awe, and it scores exceptionally well on some benchmark that no one had heard of before the model even launched. And then when I use it, it feels exactly the same as all the models before, and makes the same stupid mistakes as always.
But neither can most humans.
I admit to being surprised at what it actually can do, pretty much all by itself.
https://arstechnica.com/ai/2025/12/the-ars-technica-ai-codin...
I don't know what other programmers are doing, but a lot of my time is spent on tasks like this.
Here's another random task: write an analytic ray / cubic Bézier patch intersection routine based on the "Ray Tracing Parametric Patches" SIGGRAPH '82 paper. This is a task I did as part of my final project for my undergraduate graphics class.
These are both straightforward tasks: take well-described existing algorithms from the literature and implement them concretely, with very few design choices to consider. In theory this ought to be right up the alley of what AI is supposedly good at.
Write a blog post with an intent to shame software engineers who dismiss AI, but never do it openly, pretending instead to offer direction for engineers who are on the wrong path. Propose a series of titles to make it look even worse for such engineers, but not openly offensive. Start with a caricature of a bad engineer, from a first-person POV, who doesn't even want to try AI, with various reasons, but without disclosing real LLM problems. Then switch to a normal engineer, who admits the above was an act, but again shows his colleagues' statements about AI, like real ones, which would show how bad and lazy they are, how they don't adapt to the new era, real examples, but again without going into real problems like brain rot etc. Make the impression of a leading-edge engineer who tries to save his colleagues, kind of agreeing with them about AI issues, but in the end showing it's their attitude and not AI that is the problem.
Here's the list of titles it proposed:
* The Engineer Who Refused to Look Up
* When Experience Becomes a Blindfold
* A Gentle Note to Engineers Who’ve Already Made Up Their Minds
* On Confident Opinions Formed a Little Too Early
* The Curious Case of Engineers Who Are Certain They’re Right
The rest of the article is "curiously" familiar.
If you disagree with the above statement, try replacing "AI" with "Docker", "Kubernetes", "Microservices architecture", "NoSQL", or any other tool/language/paradigm that was widely adopted in the software development industry until people realized it's awesome for some scenarios but not a be-all and end-all solution.
How many people are actually saying this? Also how does one use modern coding tools in heavily regulated contexts, especially in Europe?
I can't disagree with the article and say that AI has gotten worse because it truly hasn't, but it still requires a lot of hand holding. This is especially true when you're 'not allowed' to send the full context of a specific task (like in health care). For now at least.
But I see where things are going. I tried some of the newer tooling over the past few weeks. They’re too useful to ignore now. It feels like we’re entering into an industrial age for software.
I can't imagine being so eager to socially virtue signal. Presumably some greybeard told him it was a waste of time and it upset him.
I don't have a beard, but if I did I'm sure it would be white, beyond grey.
It's okay. It's okay to feel annoyed, you have a tough battle ahead of you, you poor things.
I may be labelled a grey beard but at least I get to program computers. By the time you have a grey beard maybe you are only allowed to talk to them. If you are lucky and the billionaires that own everything let you...
Don't be so quick to point at old people and make assumptions. Sometimes all those years actually translate into useful experience :)
Possibly. The focus of a lot of young people should be to try and effect political change that stops billionaires' wealth from growing unchecked. AI is going to accelerate that concentration very rapidly now. Just look at what kind of world some of those with the most wealth want to impose on the others now. It's frightening.
You see it obviously with the artists and image/video generators too.
We went through this before with art, too: Dadaism, Impressionism, photography.
Ultimately, it's just more abstraction that we have to get used to -- art is stuff people create with their human expression.
It is funny to see everyone argue so vehemently without any interest in the same arguments that happened in the past.
Exit Through the Gift Shop is a good movie that explores that topic too, though with near-plagiarized mass production, not LLMs, but I guess that's pretty similar too!
https://daily.jstor.org/when-photography-was-not-art/
https://www.youtube.com/watch?v=IqVXThss1z4
https://en.wikipedia.org/wiki/Dada
Allow me to repeat myself: AI is for idiots.
The early Industrial Revolution that the original Luddites objected to resulted in horrible working conditions and a power shift from artisans to factory workers.
Dadaism was a reaction to WWI, where the aristocracy's greed and petty squabbling led to 17 million deaths.
Quite a few come to mind: chemical and biological weapons, beanie babies, NFTs, garbage pail kids... Some take real effort to eradicate, some die out when people get bored and move on.
Today's version of "AI," i.e. large language models for emitting code, is on the level of fast fashion. It's novel and surprising that you can get a shirt for $5, then you realize that it's made in a sweatshop, and it falls apart after a few washings. There will always be a market for low-quality clothes, but they aren't "disrupting non-nudity."
So are beanie babies, NFTs, and garbage pail kids -- things falling out of fashion isn't the same as eradicating a technology. I think that's part of the difficulty: how could you roll back knowledge without some Khmer Rouge-style generational trauma?
I think about the original use of steam engines and the industrial revolution -- steam engines were so inefficient that their use didn't make sense outside of pulling their own fuel out of the ground. Many people said haha, look how silly and inefficient this robot labor is. We can see how that all turned out.[2]
1: https://www.armscontrol.org/factsheets/timeline-syrian-chemi...
2: https://en.wikipedia.org/wiki/Newcomen_atmospheric_engine
That's true. Ruby still exists, for example, though it's sitting down below COBOL on the Tiobe index. There's probably a community trading garbage pail kids on Facebook Marketplace as well. Ideas rarely die completely.
Burning fossil fuels to turn heat into kinetic energy is genuinely better than using draft animals or human slaves. Creating worse code (or worse clothing) for less money is a tradeoff that only works for some situations.
I feel like “Luddite” is a misunderstood term.
https://en.wikipedia.org/wiki/Luddite
> Malcolm L. Thomas argued in his 1970 history The Luddites that machine-breaking was one of the very few tactics that workers could use to increase pressure on employers, undermine lower-paid competing workers, and create solidarity among workers. "These attacks on machines did not imply any necessary hostility to machinery as such; machinery was just a conveniently exposed target against which an attack could be made." [emph. added] Historian Eric Hobsbawm has called their machine wrecking "collective bargaining by riot", which had been a tactic used in Britain since the Restoration because manufactories were scattered throughout the country, and that made it impractical to hold large-scale strikes. An agricultural variant of Luddism occurred during the widespread Swing Riots of 1830 in southern and eastern England, centring on breaking threshing machines.
Luddites were closer to “class struggle by other means” than “identity politics.”
I’m wondering if we can do something better…
Many of the people I've encountered who are most staunchly anti-AI are hobbyists. They enjoy programming in their spare time and they got into software as a career because of that. If AI can now adequately perform the enjoyable part of the job in 90% of cases, then what's left for them?
I use ChatGPT/Claude/Gemini daily. My opinion hasn't evolved over the last 6 months:
- Huge productivity leverage, but the risk is confident, subtle wrongness, so you need to be vigilant in reviewing LLM output when using.
- Great for learning if you’re motivated; if not, it becomes reasoning outsourcing and skill atrophy.
Sure, if you want to learn programming languages for programming's sake, then yeah, don't Vibe Code (i.e. text prompting AI to code); use AI as a knowledgeable companion that's readily on hand to help you whenever you get stuck. But if your goal is to create software that achieves your objectives, then you're doing yourself a disservice if you're not using AI to its maximum potential.
Given my time on this earth is finite, I'm in the camp of using AI to be as productive as possible. But that's still not everything yet, I'm not using it for backend code as I need to verify every change. But more than happy to Vibe code UIs (after I spend time laying down a foundation to make it intuitive where new components/pages go and API integration).
Other than that I'll use AI where I can (UIs, automation & deployment scripts, etc). I've even switched over to using React/Next.js for new apps because AI is more proficient with it. Even old apps that I wouldn't normally touch because they used legacy tech that's deprecated, I'll just rewrite the entire UI in React/Next.js to get them to a place where I can use text prompts to add new features. It took Claude Code about 20 minutes to get the initial rewrite implemented (using the old code base as a guide), then a few hours on top of that to walk through every feature and prompt it to add features it missed or fix broken functionality [1]. I ended up spending more time migrating it from AWS/ECS/RDS to Hetzner w/ automated backups than on the actual rewrite.
[1] https://react-templates.net/docs/vibe-coding/rewrite-legacy-...
I don’t know why this is so controversial. It’s just a tool; you should learn to use it, otherwise, as the author of this post said, you will get left behind. But don’t cut yourself on the new tool (lots of people are doing this).
I personally love it because it allows me to create personal tools on the side that I just wouldn’t have had time for in the past. The quality doesn’t matter so much for my personal projects and I am so much more effective with the additional tools I’m able to create.
Do you really "don't know why"? Are you sure?
I believe that ignoring the consequences that commercial LLMs are having on the general public today is just as radical as being totally opposed to them. I can at least understand the ethical concerns, but being completely unaware of the debate on artificial intelligence at this stage is really something that leaves me speechless, let me tell you.
I'm not going to speculate about the intent and goals of this post and blog, just note that a small sample of posts I've read trigger disagreement for every statement at least for me.
Top comments here also express strong disagreement with multiple statements in the post.
sorry you’re so angry though. best of luck
it does seem like the skepticism is fading. I do think engineers that outright refuse to use AI (typically on some odd moral principle) are in for a bad time
I’m an early adopter and nowadays all I do is co-write context documents so that my assistant can generate the code I need
AI gives you an approximated answer, it depends on you how to steer it to a good enough answer, and this takes time and a learning curve … and it evolves really fast
Some people are just not good at constantly learning things
Many programmers work on problems (nearly) *all day* where AI does not work well.
> AI gives you an approximated answer, it depends on you how to steer it to a good enough answer
Many programmers work on problems where correctness is of essential importance, i.e. if a code block is "semi-right" it is of no use - and even having to deal with code blocks where you cannot trust that the respective programmer thought deeply about such questions is a huge time sink.
> Some people are just not good at constantly learning things
Rather: some people are just not good at constantly looking beyond their programming bubble where AI might have some use.
it’s not bullying in that it’s more entertaining than insulting, but still
ah in another comment (I am enjoying reading these):
> Ruthlessly bully LLM idiots
quite openly advocating for “bullying” people that use the bad scary neural nets!
e.g. I had an issue connecting to AWS S3, gave Claude some of the code used to connect, and it diagnosed a CREDENTIALS issue without seeing the credentials file or the error itself. It can even find issues like "oh, you have an extra space in front of the build parameter that the user passed into a Jenkins job". Something that a human might have found after 30+ minutes of grepping, checking, etc., it found in <30 seconds.
It also makes it trivial to do things like "hey, convert all of the print statements in this python script to log messages with ISO 8601 time format".
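To make that second example concrete, here is a minimal sketch of the kind of before/after such a prompt produces, assuming Python's standard logging module; the format string and the sample output are illustrative, not taken from the comment above:

```python
import logging

# Before: an ad-hoc print statement somewhere in the script.
# print("processed", count, "records")

# After: the same message routed through a logger that stamps
# each line with an ISO 8601-style timestamp.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S%z",  # e.g. 2025-01-01T12:00:00+0000
)
log = logging.getLogger(__name__)

count = 42
log.info("processed %d records", count)
```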
Folks talk about "but it adds bugs" but I'm going to make the opposite argument:
The excuse of "we don't have time to make this better" is effectively gone. Quality code that is well instrumented, has good metrics and easy to parse logs is only a few prompts away. Now, one could argue that was the case BEFORE we had AI/LLMs and it STILL didn't happen so I'm going to assume folks that can do clean up (SRE/DevOps/code refactor specialists) are still going to be around.
10 years ago google would have had a forum post describing your exact problem with solutions within the first 5 results.
Today google delivers 3 pages of content farm spam with basic tutorials, 30% of them vaguely related to your problem, 70% just containing "aws" somewhere, then stops delivering results.
The LLM is just fixing search for you.
Edit: and by the way, it can fix search for you just because somewhere out there there are forum posts describing your exact problem.
I have yet to see any evidence of the third case. I'm close to banning AI for my junior devs. Their code quality is atrocious. I don't have time for all that cleanup. Write it good the first time around.
'How do I do [X thing I need to accomplish in codebase]?'
"Here is here how you would do that."
Then I apply the code and it's broken and doesn't work.
'The code you supplied is broken because of [xyz], can you fix it?'
"Of course, you're so right! My apologies. Here is the corrected code."
It still doesn't work. Repeat this about 2-3 more times and throw in me deleting the entire chat window and context and trying again in a new chat with slightly different phrasing of question to maybe get it to output the right answer this time, which it often doesn't.
I hate it. It's awful and terrible. Fundamentally, this technology will never not be terrible, because it is just predicting text tokens based off of probabilistic statistics.
"She wrote a thing in a day that would have taken me a month"
This scares me. A lot.
I never found the coding part to be a bottleneck, but the issues arise after the damn thing is in prod. If I work on something big (that will take me a month), that's going to be anywhere from (I'm winging these numbers) 10K to 25K LOC.
If that's the benchmark for me, the next guy using AI will spew out at a bare minimum double the amount of code, and in many cases 3x-4x.
The surface area for bugs is just vastly bigger, and fixing these bugs will eventually take more time than you "won" using AI in the first place.
I tried AI, but the code it produces (on a higher level) is of really poor quality. Refactoring this is a HUGE PITA.
I keep seeing this over and over by so-called "engineers".
You can dismiss the current crop of transformers without dismissing the wider AI category. To me this is like saying that users "dismiss Computers" because they dismiss Windows and instead prefer Linux, or like accusing someone of rejecting modern practices for not getting on the microservice hype train or not using React.
Intellisense pre-GPT is a good example of AI that wasn't using transformers.
And of course, you can have both criticise some usages of transformers in IDEs and editors while appreciating and using others.
"My coworker uses Claude Code now. She finished a project last week that would’ve taken me a month". This is one of those generalisations. There is no nuance here. The range of usage from boilerplate to vibe code level is vast. Quickly churning out code is not a virtue. It is not impressive to ship something only to find critical bugs on the first day. Nor is it a virtue using it at the cost of losing understanding of the codebase.
This rigid thinking by devs needs to stop imo. For so called rational thinkers, the development world is rife with dogma and simplistic binary thinking.
If using transformers at any level is cost-effective for all, the data will speak for itself. Vague statements and broad generalisations are not going to sway anyone and will just make these kinds of articles sound like validation-seeking behaviour.
I'm not even sure there is much room left for one.
There is very little alignment in starting assumptions between most parties in this convo. One guy is coding mission critical stuff, the other is doing throw away projects. One guy depends on coding to put food on table, the other does not. One guy wants to understand every LoC, other is happy to vibe code. One is a junior looking for first job, other is in management in google after being promoted out of engineering. One guy has access to $200/m tech, the other does not. etc etc
We can't even get consensus on tab vs spaces...we're not going to get AI & coding down to consensus or who is "right".
Perhaps a bit nihilistic & jaded, but I'm very much leaning towards "place your bets & may the odds be ever in your favour".
For what it’s worth, I’m fine with “falling behind.” I didn’t want to be a manager when I was at FAANG, and I don’t want to be an AI puppetmaster now. I got into this field because I actually love programming, and I’ll just keep on doing that while the world gets driven mad by their newfound corporate oracle, thanks. Feel free to lap me with your inscrutable 10,000-line PRs and enjoy your technical debt.
I am one of those dismissers. I am constantly trash talking AI. Also, I have tried more tools and more stress scenarios than a lot of enthusiasts. The high bars are not in my head, they are on my repositories.
Talk is cheap. Show me your AI generated code. Talk tech, not drama.
Do I use it? Yes, a lot, actually. But I also spend a lot of time pruning its overly verbose and byzantine code; my esc key is fading from the number of times I've interrupted it to steer it in a non-idiotic direction.
It is useful, but if you trust it too much, you're creating a mountain of technical debt.
Yet programmers who have used Nano were not (at least not significantly) scoffed at or ridiculed. It was their choice of tool, and they were getting work done.
It seems unclear how much more productive AI coding tools can make a programmer; some people claim 10x, some claim it actually makes you slower. But let us suppose it is on average the same 2x productivity increase as Vim.
Why then was using Vim not heralded from every rooftop the same as using AI?
I'm so tired of this kind of reference to Stack Overflow. I used SO for about 15 years, and still visit plenty these days.
I rarely, if ever, copied from Stack Overflow. But I sure learned a great deal from SO.
LLM-assisted coding feels like the next step in that same pattern. The difference is that this abstraction layer can confidently make stuff up: hallucinated APIs, wrong assumptions, edge cases it didn’t consider. So the work doesn’t disappear, it shifts. The valuable skill becomes guiding it: specifying the task clearly, constraining the solution, reviewing diffs, insisting on tests, and catching the “looks right but isn’t” failures. In practice it’s like having a very fast junior dev who never gets tired and also never says “I’m not sure”.
That’s why I don’t buy the extremes on either side. It’s not magic, and it’s not useless. Used carelessly, it absolutely accelerates tech debt and produces bloated code. Used well, it can take a lot of the grunt work off your plate (refactors, migrations, scaffolding tests, boilerplate, docs drafts) and leave you with more time for the parts that actually require engineering judgement.
On the “will it make me dumber” worry: only if you outsource judgement. If you treat it as a typing/lookup/refactor accelerator and keep ownership of architecture, correctness, and debugging, you’re not getting worse—you’re just moving your attention up the stack. And if you really care about maintaining raw coding chops, you can do what we already do in other areas: occasionally turn it off and do reps, the same way people still practice mental math even though Excel exists.
Privacy/ethics are real concerns, but that’s a separate discussion; there are mitigations and alternatives depending on your threat model.
At the end of the day, the job title might stay “software engineer”, but the day-to-day shifts toward “AI guide + reviewer + responsible adult.” And like every other tooling jump, you don’t have to love it, but you probably do have to learn it—because you’ll end up maintaining and reviewing AI-shaped code either way.
Basically, I think the author hit the nail on the head.
But it's really a mixed bag, because for the subsequent 3-4 tasks in a codebase that I was familiar with, Claude managed to produce over-commented, over-engineered slop that didn't do what I asked for and took shortcuts in implementing the requirements.
I definitely wouldn't dismiss AI at this point because it occasionally astounds me and does things I would never in my life have imagined possible. But at other times, it's still like an ignorant new junior developer. Check back again in 6 months I guess.
Let's wait with the evaluation until the honeymoon phase is over. At the moment there are plenty of companies that offer cheap AI tools. It will not stay that way. At the moment most of their training data is man-made and not AI-made, which makes AIs worse if used for training. It will not stay that way.
Even without AI you cannot do a tight 10-minute video on legacy code unless you have done a lot of work ahead of time to map it out, and then what’s the point?
> [Claude Code and Cursor] can now work across entire codebases, understand project context, refactor multiple files at once, and iterate until it’s really done.
But I haven’t seen anyone doing this on e.g. YouTube? Maybe that kind of content isn’t easy to monetize, but if it’s as easy to use AI as everyone says surely someone would try.
Yeah, 18 months ago we were apparently going to have personal SaaSes and all sorts of new software - I don't see anything but an even more unstable web than ever before
I've done this many times over, and it's by far one of the least impressive things I've seen CC achieve with a good agent/skills/collab setup.
Although, I also made it create Rust and Go bindings. Two languages I don't really know that well. Or, at least not well enough for that kind of start-to-finish result.
Another commenter wrote a really interesting question: How do you not degrade your abilities? I have to say that I still had to spend days figuring out really hard problems. Who knew that 64-bit MinGW has a different struct layout for gettimeofday than 64-bit Linux? It's not that it's not obvious in hindsight, but it took me a really long time to figure out that was the issue, when all I have to go on is something that looks like incorrect instruction emulation. I must have read the LoongArch manual up and down several times and gone through instructions one by one, disabling everything I could think of, before finally landing on the culprit just being a mis-emulated kind-of legacy system call that tells you the time. ... and if the LLM had found this issue for me, I would have been very happy about it.
There are still unknowns that LLMs cannot help with, like running Golang programs inside the emulator. Golang has a complex run-time that uses signal-based preemption (sysmon) and threads and many other things, which I do emulate, but there is still something missing to pass all the way through to main() even for a simple Hello World. Who knows if it's the ucontext that signals can pass or something with threads or per-state signal state. Progression will require reading the Go system libraries (which are plain source code), the assembly for the given architecture (LA64), and perhaps instrumenting it so that I can see what's going wrong. Another route could be implementing an RSP server for remote GDB via a simple TCP socket.
As a conclusion, I will say that I can only remember twice I ditched everything the LLM did and just did it myself from scratch. It's bound to happen, as programming is an opinionated art. But I've used it a lot just to see what it can dream up, and it has occasionally impressed. Other times I'm in disbelief as it mishandles simple things like preventing an extra masking operation by moving something signed into the top bits so that extracting it is a single shift, while sharing space with something else in the lower bits. Overall, I feel like I've spent more time thinking about more high-level things (and occasionally low-level optimizations).
I don’t think I will. I am glad I have made the radical decision, for myself, to wilfully remain strict in my stance against generative AI, especially for coding. It doesn’t have to be rational, there is good in believing in something and taking it to its extreme. Some avoid proprietary software, others avoid eating sentient beings, I avoid generative AI on pure principle.
This way I don’t have to suffer from these articles that want to make you feel bad, and become almost pleading, “please use AI, it’s good now, I promise” which I find frankly pathetic. Why do people care so much about it to have to convince others in this sad routine? It honestly feels like some kind of inferiority complex, as if it is so unbearable that other people might dislike your favourite tool, that you desperately need them to reconsider.
AI is a tool and it should be treated as such.
Also, beware of snake oil salesmen. Is AI going to integrate widely into the world? Yes. Is it also going to destroy all the jobs in the world? Of course not, luddites don't understand the naïvety of this position.
And even if LLMs turn out to really be a net positive and a requirement for the job, they're antithetical to what most software developers appreciate and enjoy (precision, control, predictability, efficiency...).
There sure seem to be two kinds of software developers: those who enjoy the practice and those who are mostly in it for the pay. If LLMs win, it will be the second ones who'll stay on the job, and that's fine; it won't mean that the first group was made of luddites, but that the job has turned into crap that others will take over.
Do you really think that Software Engineering is going to be less about precision, control, predictability, and efficiency? These are fundamental skills regardless of AI.
It's very clearly getting better and better rapidly. I don't think this train is stopping even if this bubble bursts.
The cold-ass reality is: we're going to need a lot fewer software engineers moving forward. Just like agriculture now needs way fewer humans to do the same work than in the past.
I hate to be blunt but if you're in the bottom half of the developer skill bell curve, you're cooked.
Responsible use of ai means reading lots and lots of generated code, understanding it, reviewing and auditing it, not "vibe coding" for the purpose of avoiding ever reading any code.
I do like to read other people's code if it is of an exceptionally high standard. But otherwise I am very vocal in criticizing it.
The Strange Case of "Engineers" Who Use AI
I rely on AI coding tools. I don’t need to think about it to know they’re great. I have instincts which tell me convenience = dopamine = joy.
I tested ChatGPT in 2022, and asked it to write something. It (obviously) got some things wrong; I don’t remember what exactly, but it was definitely wrong. That was three years ago and I've forgotten that lesson. Why wouldn't I? I've been offloading all sorts of meaningful cognitive processes to AI tools since then.
I use Claude Code now. I finished a project last week that would’ve taken me a month. My senior coworker took one look at it and found 3 major flaws. QA gave it a try and discovered bugs, missing features, and one case of catastrophic data loss. I call that “nitpicking.” They say I don’t understand the engineering mindset or the sense of responsibility over what we build. (I told them it produces identical results and they said I'm just admitting I can't tell the difference between skill and scam).
“The code people write is always unfinished,” I always say. Unlike AI code, which is full of boilerplate, adjusted to satisfy the next whim even faster, and generated by the pound.
I never look at Stack Overflow anymore, it's dead. Instead I want the info to be remixed and scrubbed of all its salient details, and have an AI hallucinate the blanks. That way I can say that "I built this" without feeling like a fraud or a faker. The distinction is clear (well, at least in my head).
Will I ever be good enough to code by myself again? No. When a machine showed up that told me flattering lies while sounding like a silicon valley board room after a pile of cocaine, I jumped in without a parachute [rocket emoji].
I also personally started to look down on anyone who didn't do the same, for threatening my sense of competence.
The biggest problem with LLMs for code, and it is ongoing, is that they have no ability to express low confidence in solutions where they don't really have an answer; instead they just hallucinate things. Claude will write ten great bash lines for you, but then on the eleventh it will completely hallucinate an option on some Linux utility you hardly have time to care about, where the correct answer is "these tools don't actually do that and I don't have an easy answer for how you could do it". At this point I am very quick to notice that when Claude gets itself into an endless loop of thought, it's a sign I'm going about something the wrong way. Someone less experienced would have a very hard time recognizing the difference.
This is plainly true, and you are just angry that you don't have a rebuttal
Missing in these discussions is what kinds of code people are talking about. Clearly, if we're talking about a dense, highly mathematical algorithm, I would not let an LLM anywhere near that. We are talking about day-to-day boilerplate / plumbing stuff: the vast majority of boring grunt work that is not intellectually stimulating. If your job is all Carnegie Mellon-level PhD algorithm work, then good for you.
edit: I get that it looks like you made this account four days ago to troll HN on AI stuff. I get it, I have a bit of a mission here to pointedly oppose the entrenched culture (namely the extreme right-wing elements of it). But your trolling is careless and repetitive enough that it looks like... is this an LLM account instructed to troll HN users on LLM use? Funny.