I write documentation for a living. Although my output is writing, my job is observing, listening and understanding. I can only write well because I have an intimate understanding of my readers' problems, anxieties and confusion. This decides what I write about, and how to write about it. This sort of curation can only come from a thinking, feeling human being.
I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.
Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone was bothered to write down, but I actually go out in the real world and ask questions.
I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.
People who think that AI can do these things have an almost insulting understanding of the jobs they are trying to replace.
The problem is that so many things have been monopolized or oligopolized by equally-mediocre actors so that quality ultimately no longer matters because it's not like people have any options.
You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer however has an immediate and quantifiable effect on the budget.
Apply the same for software (have you seen how bad tech is lately?) or basically any kind of vertical with a nontrivial barrier to entry where someone can't just say "this sucks and I'm gonna build a better one in a weekend".
You are right. We are seeing a transition from the user as a customer to the user as a resource. It's almost like a cartel of shitty treatment.
I don't work for the public transit company; I introduce immigrants to Berlin's public transit. To answer the broader question, good documentation is one of the many little things that affect how you feel about a company. The BVG clearly cares about that, because their marketing department is famously competent. Good documentation also means that fewer people will queue at their service centre and waste an employee's time. Documentation is the cheaper form of customer service.
Besides, how people feel about the public transit company does matter, because their funding is partly a political question. No one will come to defend a much-hated, customer-hostile service.
> We are seeing a transition from the user as a customer to the user as a resource.
I'd argue that this started 30 years ago when automated phone trees started replacing the first line of workers and making users figure out how to navigate where they needed to in order to get the service they needed.
I can't remember if chat bots or "knowledge bases" came first, but that was the next step in the "figure it out yourself" attitude corporations adopted (under the guise of empowering users to "self help").
Then we started letting corporations use the "we're just too big to actually have humans deal with things" excuse (eg online moderation, or paid services with basically no support).
And all these companies look at each other to see who can lower the bar next and jump on the bandwagon.
It's one of my "favorite" rants, I guess.
The way I see this next era going is that it's basically going to become exclusively the users' responsibility to figure out how to talk to the bots to solve any issue they have.
> You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer however has an immediate and quantifiable effect on the budget.
Exactly. If the AI-made documentation is only 50% of the quality but can be produced for 10% of the price, well, we all know what the "smart" business move is.
Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly as you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer, which is his actual job.
Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.
Well said. I try to capture and express this same sentiment to others through the following expression:
“Technology needs soul”
I suppose this can be generalized to “__ needs soul”. Eg. Technical writing needs soul, User interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.
The hard part is the slow, human work of noticing confusion, earning trust, asking the right follow-up questions, and realizing that what users say they need and what they actually struggle with are often different things
Why shouldn't AI be able to sufficiently model all of this in the not-too-distant future? Why shouldn't it have sufficient access to new data and sensors to collect information on its own, or at least feed the system that does?
Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might align soon, too.
IMO there is an entirely different problem, that's not going to go away just about ever, but could be solved right now easily. And whatever AI company does so first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
> Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.
See also: librarians, archivists, historians, film critics, doctors, lawyers, docents. The déformation professionnelle of our industry is to see the world in terms of information storage, processing, and retrieval. For these fields and many others, this is like confusing a nailgun for a roofer. It misses the essence of the work.
I like the cut o' your jib. The local public transit guide you write, is that for work or for your own knowledge base? I'm curious how you're organizing this while keeping the human touch.
I'm exploring ways to organize my Obsidian vault such that it can be shared with friends, but not the whole Internet (and its bots). I'm extracting value out of the curation I've done, but I'd like to share it with others.
Spot on! I think LLMs can help greatly in quickly putting that knowledge in writing, including using them to review written materials for hidden prerequisite assumptions that readers might not be aware of. They can also help newer hires learn to write more clearly. LLMs are clearly useful for increasing productivity, but management that thinks they are even close to ready to replace large sections of practically any workforce is delusional.
I don't write for a living, but I do consider communication / communicating a hobby of sorts. My observations - that perhaps you can confirm or refute - are:
- Most people don't communicate as thoroughly and completely - written and verbal - as they think they do. Very often there is what I call "assumptive communication". That is, the sender's ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do - as it's done all the time - but not always. The resolution doesn't change the fact there was ambiguity at the root.
Next time you're communicating, listen carefully. Make note of how often the other person sends something that could be interpreted differently, how often you assume by using the default of "what they likely meant was..."
- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI isn't perfect, hardly. But most people don't have the skills a/o willingness to communicate at a level AI can simulate. Improved communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them and they assume they're good communicators. Classic Dunning-Kruger.
p.s. One of my fave comms' heuristics is from Frank Luntz*:
"It's not what you said, it's what they hear."
One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that, and be accountable, especially when assumptive communication is a viable shortcut.
* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.
As a writer, you know this makes it seem emotional rather than factual?
Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (which ends up being inspiration rather than copy-paste).
Funnily, of all your comment, the only word I objected to was the one right before "insulting": "almost". Thinking that LLM can replace humans outright expresses hubris and disdain in a way that I find particularly aggravating.
…says every charlatan who wanted to keep their position. I’m not saying you’re a charlatan but you are likely overestimating your own contributions at work. Your comment about feeding on data - AI can read faster than you can by orders of magnitude. You cannot compete.
"you are likely overestimating your own contributions at work"
Based on what? Your own zero-evidence speculation? How is this anything other than arrogant punting? For sure we know that the point was something other than how fast the author reads compared to an AI, so what are we left with here?
Two years ago, I asked ChatGPT to rewrite my resume. It looked fantastic at first sight; then, one week later, I re-read it and felt ashamed to have sent it to some prospective employers. It was full of cringe-inducing babble.
You see, for an LLM there are no hierarchies other than what it observed in its training, and even then, applying them in a different context may be tricky. It can describe hierarchies and relationships by mimicry, but it doesn't actually have a model of them.
Just an example: it may be able to generate text that recognizes that a PhD is a step above a Master's degree, but sometimes it won't be able to translate that into the subtle differences in attention and emphasis we use in our written text to reflect those real-world hierarchies of value.
An example: let's say in one of your experiences, you improved a model that detected malignancy in a certain kind of tumor image, improving its false negative rate to something like 0.001%; then in the same experience you casually mention that you once tied the CEO's toddler's tennis shoes. Given your prompt to write a resume according to the usual resume-enhancement formulas, there's a big chance it will emphasize the irrelevant lace-tying activity in a ridiculously pompous manner, making it hierarchically equivalent to your model kung-fu accomplishments.
So in the end, you end up with some bizarre stuff that looks like:
"Tied our CEO's toddler tennis shoes, enabling her to raise 20M with minimal equity dilution in our Series B round"
The best tech writers I have worked with don’t merely document the product. They act as stand-ins for actual users and will flag all sorts of usability problems. They are invaluable. The best also know how to start with almost no engineering docs and to extract what they need from 1-1 sit down interviews with engineering SMEs. I don’t see AI doing either of those things well.
In my experience, great tech writers quietly function as a kind of usability radar. They're often the first people to notice that a workflow is confusing
> I don’t see AI doing either of those things well.
I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as they go), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.
Would you be willing to take a stab at the competencies that a future AI agent would require to be excellent at this (or might never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feeling that this wording is a bit too magical to be useful.
A good tech writer knows why something matters in context: who is using this under time pressure, what they're afraid of breaking, what happens if they get it wrong
I don’t know that it can be well-defined. It might be asking something akin to “What makes something human?” For usability, one needs a sense of what defines “user pain” and what defines “reasonableness.” No product is perfect. They all have usability problems at some level. The best usability experts, and tech writers who do this well, have an intuition for user priorities and an ability to identify and differentiate large usability problems from small ones.
Thinking about this some more now, I can imagine a future in which we'll see more and more software for which AI agents are the main users.
For tech documentation, I suppose that AI agents would mainly benefit from Skills files managed as part of the tool's repo, and I absolutely do imagine future AI agents being set up (e.g. as part of their AGENTS.md) to propose PRs to these Skills as they use the tools. And I'm wondering whether AI agents might end up with different usability concerns and pain-points from those that we have.
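To make the speculation concrete, a fragment of such an AGENTS.md might look something like this (purely an illustrative sketch; the file layout and skill names are made up, not any tool's actual convention):

```markdown
## Documentation feedback loop (illustrative)
- When a step in docs/skills/*.md fails or is ambiguous, do not guess.
- Record the exact command you ran, what the skill said would happen,
  and what actually happened.
- Open a PR against the relevant skill file with the correction,
  citing the session log as evidence.
```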
Current AI writing is slightly incoherent. It's subtle, but the high level flow/direction of the writing meanders so things will sometimes seem a bit non-sequitur or contradictory.
It has no sense of truth or value. You need to check what it wrote and you need to tell it what’s important to a human. It’ll give you the average, but misses the insight.
> but can't quite put my finger on what exactly it's missing.
We have to ask AI questions for it to do things. We have to probe it. A human knows things and will probe others, unprompted. It's why we are actually intelligent and the LLM is a word guesser.
I remember the days when every large concern employed technical writers and didn't expect us programmers and engineers to write for the end users. But that stopped decades ago in most places at least as far as in house applications are concerned, long before AI could be used as an excuse for firing technical writers.
Yeah. AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement. The companies with the best docs will absolutely still have tech writers, just with some AI assistance.
Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no, if you'll excuse the slight self-promotion but it saves me repeating myself https://deborahwrites.com/blog/nobody-can-write/)
In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.
I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.
The failure mode isn't just hallucinations, it's the absence of judgment: what not to document, what to warn about, what's still unstable, what users will actually misunderstand
The best tech writers I've known have been more like anthropologists, bridging communication between product management, engineers, and users. With this perspective they often give feedback that makes the product better.
AI can help with synthesis once those insights exist, but it doesn't naturally occupy that liminal space between groups, or sense the cultural and organizational gaps
And here I am, 2026, and one of my purposes for this year is to learn to write better, communicate more fluently, and convey my ideas in a more attractive way.
I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.
In my humble opinion, that is what we will be losing from people: the upskilling will be lost for sure, but the human upskilling is the real loss.
It is such a challenge! As English is not my first language, I have to do some mental gymnastics to really convey my thoughts. 'On Writing Well' is on my reading list; it is supposed to help.
Nice read after the earlier post saying fire all your tech writers. Good post.
One thing to add is that the LLM doesn't know what it can't see. It just amplifies what is there. Assumed knowledge is quite common with developers and their own code. Or the more common "it works on my machine" because something is set outside of the code environment.
Sadly other fields are experiencing the same issue of someone outside their field saying AI can straight up replace them.
I have not fired a technical writer, but writing documentation that understands and maintains users' focus is hard even with an LLM. I am trying to write documentation for my startup and it is harder than I expected, even with an LLM.
Kudos to all the technical writers who made my job as a software engineer easier.
I suspect a lot of folks are asking ChatGPT to summarize it…
I can’t imagine just letting an LLM write an app, server, or documentation package, wholesale and unsupervised, but have found them to be extremely helpful in editing and writing portions of a whole.
The one thing that could be a light in the darkness, is that publishers have already fired all their editors (nothing to do with AI), and the writing out there shows it. This means there’s the possibility that AI could bring back editing.
Is it expected that LLMs will continue to improve over time? All the recent articles like this one just seem to describe this technology's faults as fixed and permanent. Basically saying "turn around and go no further". Honestly asking because their arguments seem to be dependent on improvement never happening and never overcoming any faults. It feels shortsighted.
Someone has to turn off their brain completely and just follow the instructions as-is. Then log the locations where the documentation wasn't clear enough or assumed some knowledge that wasn't given in the docs.
If the business can no longer justify 5 engineers, then they might only have 1.
I've always said that we won't need fewer software developers with AI. It's just that each company will require fewer developers but there will be more companies.
IE:
2022: 100 companies employ 10,000 engineers
2026: 1000 companies employ 10,000 engineers
The net result is the same for employment. But because AI makes things that much more efficient, many businesses that weren't financially viable when they needed 100 engineers might become viable with 10 engineers + AI.
The person you're replying to is obviously and explicitly aware that that is another scenario, and the whole point of their comment was to argue against it and explain why they think something else is more likely. Merely restating the thing they were already arguing against adds nothing to the discussion.
Five engineers could be turned into maybe two, but probably not less.
It's the 'bus factor' at play. If you still want human approvals on pull requests then If one of those engineers goes on vacation or leaves the company you're stuck with one engineer for a while.
If both leave then you're screwed.
If you're a small startup, then sure there are no rules and it's the wild west. One dev can run the world.
This was true even before LLMs. Development has always scaled very poorly with team size. A team of 20 heads is like at most twice as productive as a team of 5, and a team of 5 is marginally more productive than a team of 3.
Peak productivity has always been somewhere between 1-3 people, though if any one of those people can't or won't continue working for one reason or another, it's generally game over for the project. So you hire more.
This is why small software startups time and time again manage to run circles around organizations with much larger budgets. A 10 person game studio like Team Cherry can release smash hit after smash hit, while Ubisoft with 170,000% the personnel count visibly flounders. Imagine doing that in hardware, like if you could just grab some buddies and start a business successfully competing with TSMC out of your garage. That's clearly not possible. But in software, it actually is.
The tech writer backlog is probably worse, because writing good documentation requires extensive experience with the software you're writing documentation about and there are four types of documentation you need to produce.
Yes. I have been building software and acting as tech lead for close to 30 years.
I am not even quite sure I know how to manage a team of more than two programmers right now. Opus 4.5, in the hands of someone who knows what they are doing, can develop software almost as fast as I can write specs and review code. And it's just plain better at writing code than 60% of my graduating class was back in the day. I have banned at least one person from ever writing a commit message or pull request again, because Claude will explain it better.
Now, most people don't know to squeeze that much productivity out of it, most corporate procurement would take 9 months to buy a bucket if it was raining money outside, and it's possible to turn your code into unmaintainable slop at warp speed. And Claude is better at writing code than it is at almost anything else, so the rest of y'all are safe for a while.
But if you think that tech writers, or translators, or software developers are the only people who are going to get hit by waves of downsizing, then you're not paying attention.
Even if the underlying AI tech stalls out hard and permanently in 2026, there's a wave of change coming, and we are not ready. Nothing in our society, economy or politics is ready to deal with what's coming. And that scares me a bit these days.
"And it's just plain better at writing code than 60% of my graduating class was back in the day".
Only because it has access to vast amounts of sample code to draw on and recombine parts of. Have you ever considered emerging technologies, like new languages or frameworks, that may be much better suited for your area but are new, so there is no codebase for the LLM to draw from?
I'm starting to think about a risk of technological stagnation in many areas.
I will share my experience, hopefully it answers some questions to tech writers.
I was a terrible writer, but we had to write good docs and make it easy for our customers to integrate with our products. So I prepared the context for our tech writers and they created nice documentation pages.
The cycle (which reasonably took a week, depending on the tech writers' workload) was:
1. prepare context
2. create ticket to tech writers, wait until they respond
3. discuss messaging over the call
4. couple days later I get first draft
5. iterate on draft, then finally publish it
Today it's different:
1. I prepare all the context and style guide, then feed them into LLM.
1.1. context is extracted directly from code by coding agents
2. I proofread it and in 97% of cases accept it, because it follows the style guide and mostly transforms my context correctly into customer-consumable content
3. Done. less than 20 minutes
Tech writers were doing an amazing job, of course, but I can get 90-95% of the quality in 1% of the time spent on that work.
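The two-step pipeline above can be sketched roughly like this (a minimal sketch: `call_llm` is a placeholder for whatever model API you actually use, and the file layout is illustrative):

```python
from pathlib import Path

def build_prompt(context_files, style_guide):
    """Assemble one prompt: style guide first, then the extracted context."""
    parts = [
        "Follow this style guide strictly:\n" + style_guide,
        "Turn the following engineering context into customer-facing docs:",
    ]
    for f in context_files:
        parts.append(f"--- {f.name} ---\n{f.read_text()}")
    return "\n\n".join(parts)

def call_llm(prompt):
    # Placeholder: swap in your actual model API call here.
    raise NotImplementedError

def draft_docs(context_dir, style_guide_path):
    """Step 1-2 of the cycle: feed context + style guide to the model.
    A human still proofreads the result before publishing."""
    context = sorted(Path(context_dir).glob("*.md"))
    prompt = build_prompt(context, Path(style_guide_path).read_text())
    return call_llm(prompt)
```

The point of keeping the style guide at the top of the prompt is that it is the part you proofread against; the context files are whatever your coding agents extracted from the repo.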
First, we've fallen into a nomenclature trap, as so-called "AI" has nothing to do with "intelligence." Even its creators admit this, hence the name "AGI," since the appropriate acronym has already been used.
But, when we use "AI" acronym, our brains still recognize "intelligence" attribute and tend to perceive LLMs as more powerful than they actually are.
Current models are like trained parrots that can draw colored blocks and insert them into the appropriate slots. Sure, much faster and with incomparably more data. But they're still parrots.
This story and the discussions remind me of reports and articles about the first computers. People were so impressed by the speed of their mathematical calculations that they called them "electronic brains" and considered, even feared, "robot intelligence."
Now we're so impressed by the speed of pattern matching that we called them "artificial intelligence," and we're back to where we are.
It’s not so much that AI is replacing “tech writers”; with all due respect to the individuals in those roles, it was never a good title to identify as.
Technical writing is part of the job of software engineering. Just like “tester” or “DBA”, it was always going to go the way of the dodo.
If you’re a technical writer, now’s the time to reinvent yourself.
The specialisations will always exist. A good software engineer can't replace a good tester, DBA, or writer. There are specific extra skills necessary for those roles. We may not need those full skills in every environment (most companies will be just fine without a DBA), but they sure are not going away globally.
You're going to get some text out of a typical engineer, but the writing quality, flow, and fit for the given purpose is not going to come close to someone who does it every day.
> Technical writing is part of the job of software engineering.
Where I work we have professional technical writers and the quality vs your typical SW engineer is night and day. Maybe you got lucky with the rare SW engineer that can technical write.
Why should I hire a dedicated writer if I have people with a better understanding of the system? Also worth noting that, like in any profession, most writers are... mediocre. Especially when you hire someone on contract. I have had mostly bad experiences with them in the past. They happily charge $1000 for a few pages of garbage that is not even LLM-quality. No creativity, just pumping out words.
I can chip in like $20 to pay some "good writer" that "observes, listens and understands" for writing documentation on something and compare it with LLM-made one.
"Write a manual for air travel for someone who never flew. Cover topics like buying a ticket, preparing for travel, getting to airport, doing things in the airport, etc"
Meh. A bit too touchy feely for my taste, and not much in ways of good arguments. Some of the things touched on in the article are either extreme romanticisations of the craft or rather naive takes (docs are product truth? Really?!?! That hasn't been the case in ages, with docs for multi-billion dollar solutions, written by highly paid grass fed you won't believe they're not humans!)...
The parts about hallucinations and processes are also a bit dated. We're either at, or very close to the point where "agentic" stuff works in a "GAN" kind of way to "produce docs" -> read docs and try to reproduce -> resolve conflicts -> loop back, that will "solve" both hallucinations and processes, at least at the quality of human-written docs. My bet is actually better in some places. Bitter lesson and all that. (at least for 80% of projects, where current human written docs are horrendous. ymmv. artisan projects not included)
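That loop (produce docs, have a second agent try to reproduce them, feed the failures back) is easy to sketch in outline. Everything here is a stand-in, not a real framework; it just shows the control flow being described:

```python
def generate_docs(source, feedback):
    """Stand-in: ask a model to (re)write the docs,
    given prior failure reports from the reproducer."""
    raise NotImplementedError

def reproduce(docs):
    """Stand-in: a second agent follows the docs step by step and
    returns the list of steps it could not complete as written."""
    raise NotImplementedError

def docs_loop(source, max_rounds=5):
    """Generate docs, verify by reproduction, loop until clean."""
    feedback, docs = [], None
    for _ in range(max_rounds):
        docs = generate_docs(source, feedback)
        feedback = reproduce(docs)
        if not feedback:      # the reproducer got through cleanly
            return docs
    return docs               # give up after max_rounds; ship with known gaps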
What I do agree with is that you'll still want someone to hold accountable. But that's just normal business. This has been the case for integrators / 3rd party providers since forever. Every project requiring 3rd party people still had internal folks that were held accountable when things didn't work out. But, you probably won't need 10 people writing docs. You can hold accountable the few that remain.
I love AI and use it daily, but I still run into hallucinations, even in COT/Thinking. I don't think hallucinations are as bad as people make it out to be. But I've been using AI since GPT3, so I'm hyper aware.
Yea. I think people underestimate this. Yesterday I was writing an Obsidian plugin using the latest and most powerful Gemini model and I wanted it to make use of the new keychain in Obsidian to retrieve values for my plugin. Despite reading the docs first at my request, it still used a non-existent method (retrieveSecret) to get the individual secret value. When it ran into an error, instead of checking its assumptions, it assumed the method wasn't defined in the interface, so it wrote an obsidian.shim.ts file that defined a retrieveSecret interface. The plugin compiled but obviously failed because no implementation of that method exists. When it understood it was supposed to use getSecret instead, it ended up updating the shim instead of getting rid of it entirely. Add that up over 1000s of sessions/changes (like the one Cursor shared about letting an agent run until it generated 3M LOC for a browser) and it's likely that codebases will be polluted with tiny papercuts stemming from LLM hallucinations.
With every job replaced by AI the best people will be doing a better job than the AI and it'll be very frustrating to be replaced by people that can't tell the difference.
There's another HN thread specifically asking people for links to their personal websites. I suspect an accidental typing-in-the-wrong-reply-box issue.
are you talking about the hashes (##, ###) etc in the subheadings? I think that's an intentional design thing, a bit of a nod to the back row, if you will.
I don't think I've ever seen documentation from tech writers that was worth reading: if a tech writer can read code and understand it, why are they making half or less of what they would as an engineer? The post complains about AI making things up in subtle ways, but I've seen exactly the same thing happen with tech writers hired to document code: they documented what they thought should happen instead of what actually happened.
There are plenty of people who can read code who don't work as devs. You could ask the same about testers, ops, sysadmins, technical support, some of the more technical product managers etc. These roles all have value, and there are people who enjoy them.
Worth noting that the blog post isn't just about documenting code. There's a LOT more to tech writing than just that niche. I still remember the guy whose job was writing user manuals for large ship controls, as a particularly interesting example of where the profession can take you.
Yeah, but almost everyone wants money. You can see this by looking at what projects have the best documentation: they're all things like the man-pages project where the contributors aren't doing it as a job when they could be working a more profitable profession instead.
While I do appreciate man pages, I don't think they are something I would consider to be "the best documentation". Many of the authors of them are engineers, by the way.
A tech writer isn't a class of person. "Tech writer" is a role or assignment. You can be an engineer working as a tech writer.
Also, the primary task of a tech writer isn't to document code. They're supposed to write tutorials, user guides, how to guides, explanations, manuals, books, etc.
I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.
Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone was bothered to write down, but I actually go out in the real world and ask questions.
I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.
People who think that AI can do this and the other things have an almost insulting understanding of the jobs they are trying to replace.
You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer however has an immediate and quantifiable effect on the budget.
Apply the same for software (have you seen how bad tech is lately?) or basically any kind of vertical with a nontrivial barrier to entry where someone can't just say "this sucks and I'm gonna build a better one in a weekend".
I don't work for the public transit company; I introduce immigrants to Berlin's public transit. To answer the broader question, good documentation is one of the many little things that affect how you feel about a company. The BVG clearly cares about that, because their marketing department is famously competent. Good documentation also means that fewer people will queue at their service centre and waste an employee's time. Documentation is the cheaper form of customer service.
Besides, how people feel about the public transit company does matter, because its funding is partly a political question. No one will come to defend a much-hated, customer-hostile service.
I'd argue that this started 30 years ago when automated phone trees started replacing the first line of workers and making users figure out how to navigate where they needed to in order to get the service they needed.
I can't remember if chat bots or "knowledge bases" came first, but that was the next step in the "figure it out yourself" attitude corporations adopted (under the guise of empowering users to "self help").
Then we started letting corporations use the "we're just too big to actually have humans deal with things" excuse (e.g. online moderation, or paid services with basically no support).
And all these companies look at each other to see who can lower the bar next and jump on the bandwagon.
It's one of my "favorite" rants, I guess.
The way I see this next era going is that it's basically going to become exclusively the users' responsibility to figure out how to talk to the bots to solve any issue they have.
Thank you. I love it when someone poetically captures a feeling I’ve been having so succinctly.
I have exactly 1 guess but am waiting to say it.
It’s almost like they’re a professional writer…
Exactly. If the AI-made documentation is only 50% of the quality but can be produced for 10% of the price, well, we all know what the "smart" business move is.
Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly as you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer, which is the programmer's actual job.
Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.
“Technology needs soul”
I suppose this can be generalized to “__ needs soul”. Eg. Technical writing needs soul, User interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.
Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might soon align, too.
IMO there is an entirely different problem, one that's essentially never going to go away but could be solved right now, easily. And whatever AI company does so first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
You know, just like the human it'd replace.
That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.
Nicely written (which, I guess, is sort of the point).
I'm exploring ways to organize my Obsidian vault so that it can be shared with friends, but not with the whole Internet (and its bots). I'm extracting value out of the curation I've done, but I'd like to share it with others.
- Most people don't communicate as thoroughly and completely - in writing and verbally - as they think they do. Very often there is what I call "assumptive communication". That is, the sender's ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do - it's done all the time - but not always. And the resolution doesn't change the fact that there was ambiguity at the root.
Next time you're communicating, listen carefully. Make note of how often the other person sends something that could be interpreted differently, how often you assume by using the default of "what they likely meant was..."
- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI isn't perfect, hardly. But most people don't have the skills and/or willingness to communicate at a level AI can simulate. Improving communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them, and they assume they're good communicators. Classic Dunning-Kruger.
p.s. One of my fave comms' heuristics is from Frank Luntz*:
"It's not what you said, it's what they hear."
One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that, and be accountable, especially when assumptive communication is a viable shortcut.
* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.
As a writer, you know this makes it seem emotional rather than factual?
Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (which end up being inspiration rather than copy-paste).
It becomes: This person is fearful of their job and used feeling to justify their belief.
Based on what? Your own zero-evidence speculation? How is this anything other than arrogant punting? For sure we know that the point was something other than how fast the author reads compared to an AI, so what are we left with here?
Two years ago, I asked ChatGPT to rewrite my resume. It looked fantastic at first sight; then, one week later, I re-read it and felt ashamed to have sent it to some prospective employers. It was full of cringe-inducing babble.
You see, for an LLM there are no hierarchies other than what it observed in its training, and even then, applying them in a different context may be tricky. It can describe hierarchies and relationships by mimicry, but it doesn't actually have a model of them.
Just an example: it may be able to generate text that recognizes that a PhD is a step above a Master's degree, but sometimes it won't be able to translate that into the subtle differences in attention and emphasis we use in our written text to reflect those real-world hierarchies of value.
An example: let's say that in one of your experiences, you improved a model that detected malignancy in a certain kind of tumor image, bringing its false negative rate down to something like 0.001%, and in the same experience you casually mention that you once tied the CEO's toddler's tennis shoes. Given your prompt to write a resume according to the usual resume-enhancement formulas, there's a big chance it will emphasize the irrelevant shoe-lace-tying activity in a ridiculously pompous manner, making it hierarchically equivalent to your model kung-fu accomplishments.
So in the end, you end up with some bizarre stuff that looks like:
"Tied our CEO's toddler tennis shoes, enabling her to raise 20M with minimal equity dilution in our Series B round"
True, but it raises another question: what were your product managers doing in the first place, if the tech writer is the one finding out about usability problems?
I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as they go), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.
Would you be willing to take a stab at the competencies that a future AI agent would need to be excellent at this (or might never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feeling that this wording is a bit too magical to be useful.
For tech documentation, I suppose that AI agents would mainly benefit from Skills files managed as part of the tool's repo, and I absolutely do imagine future AI agents being set up (e.g. as part of their AGENTS.md) to propose PRs to these Skills as they use the tools. And I'm wondering whether AI agents might end up with different usability concerns and pain-points from those that we have.
We have to ask AI questions for it to do things. We have to probe it. A human knows things and will probe others, unprompted. It's why we are actually intelligent and the LLM is a word guesser.
Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no, if you'll excuse the slight self-promotion but it saves me repeating myself https://deborahwrites.com/blog/nobody-can-write/)
In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.
I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.
[insert Pawn Stars meme]: "GOOD docs? Sorry, best I can do is 'slightly better than useless.'"
See my other comment - I'm afraid quality only matters if there is healthy competition which isn't the case for many verticals: https://news.ycombinator.com/item?id=46631038
I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself, you shape your brain in a way that is definitely useful in many other aspects of life.
In my humble opinion, that is what we will be losing from people: the upskilling will be lost for sure, but the human upskilling is the real loss.
Yep, and reading you will feel less boring.
The uniform style of LLMs gets old fast and I wouldn't be surprised if it were a fundamental flaw due to how they work.
And it's not even certain that the speed gains from using LLMs make up for the skill loss in the long term.
<list of emoji-labeled bold headers of numbered lists in format <<bolded category> - description>>
Is there anything else I can help you with?
One thing to add is that the LLM doesn't know what it can't see. It just amplifies what is there. Assumed knowledge is quite common with developers and their own code. Or the more common "it works on my machine" because something is set outside of the code environment.
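A minimal sketch of that blind spot (the variable name and fallback are hypothetical): the function below behaves differently depending on environment state that never appears in the repository, so a model reading only the source cannot know which branch production actually takes.

```typescript
// Config resolution that depends on state outside the code.
// An LLM (or a new teammate) reading this file sees the fallback,
// but has no way to see what API_BASE is set to on any given machine.
function apiBase(env: Record<string, string | undefined>): string {
  return env["API_BASE"] ?? "http://localhost:8080";
}
```

On the author's machine `API_BASE` is set and everything works; in the doc reader's clean environment the silent fallback kicks in, and the docs never mentioned the variable because it was invisible in the code.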
Sadly other fields are experiencing the same issue of someone outside their field saying AI can straight up replace them.
What post was that?
Kudos to all the technical writers who made my job as a software engineer easier.
I suspect a lot of folks are asking ChatGPT to summarize it…
I can’t imagine just letting an LLM write an app, server, or documentation package, wholesale and unsupervised, but have found them to be extremely helpful in editing and writing portions of a whole.
The one thing that could be a light in the darkness, is that publishers have already fired all their editors (nothing to do with AI), and the writing out there shows it. This means there’s the possibility that AI could bring back editing.
They have AI finding reasons to reject totally valid requests.
They are arguing in court that this is a software bug and that they should not be liable.
That will be the standard excuse. I hope it does not work.
Someone has to turn off their brain completely and just follow the instructions as-is. Then log the locations where the documentation wasn't clear enough or assumed some knowledge that wasn't given in the docs.
If the business can no longer justify 5 engineers, then they might only have 1.
I've always said that we won't need fewer software developers with AI. It's just that each company will require fewer developers but there will be more companies.
I.e.:
2022: 100 companies employ 10,000 engineers
2026: 1000 companies employ 10,000 engineers
The net result is the same for employment. But because AI makes each company that much more efficient, many businesses that weren't financially viable when they needed 100 engineers might become viable with 10 engineers + AI.
Five engineers could be turned into maybe two, but probably not less.
It's the 'bus factor' at play. If you still want human approvals on pull requests, then if one of those engineers goes on vacation or leaves the company, you're stuck with one engineer for a while.
If both leave then you're screwed.
If you're a small startup, then sure there are no rules and it's the wild west. One dev can run the world.
Peak productivity has always been somewhere between 1-3 people, though if any one of those people can't or won't continue working for one reason or another, it's generally game over for the project. So you hire more.
This is why small software startups time and time again manage to run circles around organizations with much larger budgets. A 10-person game studio like Team Cherry can release smash hit after smash hit, while Ubisoft, with 170,000% of the personnel count, visibly flounders. Imagine doing that in hardware, like grabbing some buddies and starting a garage business that successfully competes with TSMC. That's clearly not possible. But in software, it actually is.
Is the tech writers' backlog also seemingly infinite, like every tech backlog I've ever seen?
I am not even quite sure I know how to manage a team of more than two programmers right now. Opus 4.5, in the hands of someone who knows what they are doing, can develop software almost as fast as I can write specs and review code. And it's just plain better at writing code than 60% of my graduating class was back in the day. I have banned at least one person from ever writing a commit message or pull request again, because Claude will explain it better.
Now, most people don't know how to squeeze that much productivity out of it, most corporate procurement would take 9 months to buy a bucket if it were raining money outside, and it's possible to turn your code into unmaintainable slop at warp speed. And Claude is better at writing code than at almost anything else, so the rest of y'all are safe for a while.
But if you think that tech writers, or translators, or software developers are the only people who are going to get hit by waves of downsizing, then you're not paying attention.
Even if the underlying AI tech stalls out hard and permanently in 2026, there's a wave of change coming, and we are not ready. Nothing in our society, economy or politics is ready to deal with what's coming. And that scares me a bit these days.
Only because it has access to a vast amount of sample code to draw on and recombine. Have you ever considered emerging technologies, like new languages or frameworks that may be much better suited to your area but are new, so there is no codebase for the LLM to draw from?
I'm starting to think about a risk of technological stagnation in many areas.
I was a terrible writer, but we had to write good docs and make it easy for our customers to integrate with our products. So I prepared the context for our tech writers, and they created nice documentation pages.
The cycle reasonably took about a week, depending on the tech writer's workload.
Today it's different: the tech writers were doing an amazing job, of course, but I can get 90-95% of the quality in 1% of the time spent on that work.

People boast about the gains from LLMs all the damn time, and I'm sceptical of it all unless I see their inputs.
But, when we use "AI" acronym, our brains still recognize "intelligence" attribute and tend to perceive LLMs as more powerful than they actually are.
Current models are like trained parrots that can draw colored blocks and insert them into the appropriate slots. Sure, much faster and with incomparably more data. But they're still parrots.
This story and the discussions remind me of reports and articles about the first computers. People were so impressed by the speed of their mathematical calculations that they called them "electronic brains" and considered, even feared, "robot intelligence."
Now we're so impressed by the speed of pattern matching that we call them "artificial intelligence", and here we are again.
Technical writing is part of the job of software engineering. Just like “tester” or “DBA”, it was always going to go the way of the dodo.
If you’re a technical writer, now’s the time to reinvent yourself.
You're going to get some text out of a typical engineer, but the writing quality, flow, and fit for the given purpose is not going to come close to someone who does it every day.
Where I work we have professional technical writers and the quality vs your typical SW engineer is night and day. Maybe you got lucky with the rare SW engineer that can technical write.
Why should I hire a dedicated writer if I have people with a better understanding of the system? Also worth noting that, like in any profession, most writers are... mediocre. Especially when you hire someone on contract. I've had mostly bad experiences with them in the past. They happily charge $1000 for a few pages of garbage that is not even LLM-quality. No creativity, just pumping out words.
I can chip in like $20 to pay some "good writer" who "observes, listens and understands" to write documentation on something, and compare it with an LLM-made one.
"Write a manual for air travel for someone who never flew. Cover topics like buying a ticket, preparing for travel, getting to airport, doing things in the airport, etc"
Let's compare!
The parts about hallucinations and processes are also a bit dated. We're either at, or very close to the point where "agentic" stuff works in a "GAN" kind of way to "produce docs" -> read docs and try to reproduce -> resolve conflicts -> loop back, that will "solve" both hallucinations and processes, at least at the quality of human-written docs. My bet is actually better in some places. Bitter lesson and all that. (at least for 80% of projects, where current human written docs are horrendous. ymmv. artisan projects not included)
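The loop described above might be sketched like this (all names are hypothetical; the writer and reader functions stand in for LLM calls, with toy logic so the shape of the feedback cycle is visible):

```typescript
type Feedback = { step: string; problem: string };

// Stand-in for "LLM writes docs": start from the spec and patch any
// step the reader previously reported as unclear.
function writeDocs(spec: string[], feedback: Feedback[]): string[] {
  return spec.map((step) =>
    feedback.some((f) => f.step === step) ? `${step} (clarified)` : step
  );
}

// Stand-in for "LLM tries to reproduce from the docs": flag steps that
// are still ambiguous (here, toy-modeled as steps containing "TODO").
function tryToReproduce(docs: string[]): Feedback[] {
  return docs
    .filter((step) => !step.includes("(clarified)") && step.includes("TODO"))
    .map((step) => ({ step, problem: "could not follow" }));
}

// The adversarial-ish loop: write, attempt to reproduce, feed mismatches
// back, and stop once the reader gets through cleanly.
function docsLoop(spec: string[], maxRounds = 5): string[] {
  let feedback: Feedback[] = [];
  let docs = spec;
  for (let i = 0; i < maxRounds; i++) {
    docs = writeDocs(spec, feedback);
    const issues = tryToReproduce(docs);
    if (issues.length === 0) break; // reader reproduced everything
    feedback = feedback.concat(issues);
  }
  return docs;
}
```

The interesting design question is the stopping criterion: a real reader agent would execute the steps in a clean environment, which is exactly the "turn off your brain and follow the instructions" role described elsewhere in this thread.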
What I do agree with is that you'll still want someone to hold accountable. But that's just normal business. This has been the case for integrators / 3rd party providers since forever. Every project requiring 3rd party people still had internal folks that were held accountable when things didn't work out. But, you probably won't need 10 people writing docs. You can hold accountable the few that remain.
But most people aren't that great at their jobs.
Why?
Because the legal catastrophe that will follow will entertain me so very very much.
Hopefully they used AI to write this.