These days I find myself asking AI more and more instead of going to man pages, docs, or even specialized books on these topics. As an author, how would you compete against that?
A colleague asked me for advice on being a manager. Amongst other advice, I suggested he read High Output Management.
He came back a few days later and said that ChatGPT knew what was in the book - was it okay if he just read the summary?
I wasn’t sure what to say. It’s probably true that ChatGPT can summarize all the main points from the book. But it has always been easy to find the key points from books online. The hard part of being a manager is figuring out how to take the obvious instructions and act on them consistently.
Maybe some people can do that just by reading the summary. For me, though, reading the whole book is important. I find myself thinking back to the examples used to illustrate the points. And I find that having the ideas repeated in different ways as I read the book helps make them part of my mental framework. I read lots of interesting ideas in quick articles, but they rarely stick with me unless there is a specific translation to action.
I ended up telling my colleague that it was up to him to decide how he learns best. If it were me, I’d need the book. But he needs to know his own learning system.
Books exist for people who want in-depth information with full context, presented in an organized manner.
Short forms have always been available, be it blog posts, Wikipedia articles, CliffsNotes, or other such things. Books survive because the source material is needed to generate all of those other things, and those short-form versions don’t cut it for everyone. I don’t see LLMs as any different.
A book can tell you something you didn’t know. With an LLM you need to know enough to ask.
Not really. You can say "I'd like to learn about X" and get a high-level overview of the topic you're interested in. Then just ask more specific questions, just like you would do when interacting with a human teacher. Try it with Claude or 4o.
That seems very inefficient. And dangerous, as it's well known that LLMs will hallucinate. When I want to learn about X, I just pick up an introductory book, look at the table of contents and the description to see what it covers, then read it. I then have several paths I can take to further my knowledge.
> Then just ask more specific questions, just like you would do when interacting with a human teacher
And no one interacts like that with a teacher, at least not that I know of. They have a syllabus for a reason, and while they may answer an unrelated question, they expect you to master the fundamentals that lead up to it. If you don't, they will point you to them, because otherwise the answer will be difficult to understand. It's like wanting to know how nuclear power works without learning about the composition of atoms first.
Yeah I completely agree with this! I really like books for learning because they do exactly this.
Take the Rust book: you have a neat and organised collection of the majority of the Rust features you're likely to use.
An LLM might be handy in answering questions like "Why is the borrow checker failing this code?" but that's a really different proposition to getting a detailed and complete summary of Rust's key features. It could maybe output something along these lines, but I think the output would be considerably less usable and reliable than a book.
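For a concrete (entirely made-up) example of the kind of question I mean, here's a toy snippet that trips the borrow checker. An LLM is genuinely useful for explaining the resulting E0502 error, but that's a narrow, reactive kind of help compared with working through the book's ownership and borrowing chapters:

```rust
fn main() {
    let mut s = String::from("hello");

    let r = &s;             // immutable borrow of `s` begins here
    s.push_str(" world");   // error[E0502]: cannot borrow `s` as mutable
                            // because it is also borrowed as immutable
    println!("{}", r);      // immutable borrow is still in use here
}
```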
The moment you gain some expertise in a subject, LLMs fail horribly. Once you have a mental model of the thing you're working with, an LLM won't be able to solve what you can't, as that often requires a deep dive into the internals. And that's when you want a complete reference or manual nearby. As for boilerplate, most experts have project templates or can extract one from an old project.
LLMs do not generate new content; they just shuffle old content together in new ways. So no, they do not kill an industry of people creating new, original content. And authors only need to worry about them if they were not adding anything new to the world to begin with, and were instead relying on marketing to sell re-packaged existing content.
As we progress into our inevitable AI future, I have to wonder whether good source materials (like books) will more or less die out and AI-generated content will be shuffled so much that it’s nonsensical and useless, thereby kicking off a new cycle of human-generated output.
I never left RSS, but social media like TikTok and X have me wondering whether I’m ever reading human output or just consuming and interacting with AI systems.
I recently had a very red-pill, dystopian experience where I figured out that someone I interacted with on X was an AI. It slipped up with a response that I recognized as a common LLM idiom, and further inquiries confirmed it. It turns out that a lot of accounts on X in particular are AI bots.
> LLMs do not generate new content, they just shuffle old content together in new ways
You can say the same thing about most technical books. They're quite often little more than a more digestible summary of what you get in docs and reference manuals, with some toy examples.
Yeah, but a book is structured in a way that encourages learning, and it's "true". A map is good. A map with annotations and routes drawn on it is better, even if the focus is on some landmarks and some elements are missing. A fantasy map is worthless.
I think you might be confusing different activities that look similar.
Search, research, exploratory reading, browsing, fact checking, cross-referencing, debunking, genealogy, and making etymological and epistemological connections are all different things. As an author and researcher I produce and consume a lot more types of connections and paths than a simple neural net that can make fast associations on past training material can offer. YMMV.
Generative AI is just the new "bottom" in terms of quality. All you have to do to compete against it is be a little better than it. The real question to me is whether the quality of this new "bottom" is adequate. Sometimes it is, for some people and some applications, and sometimes it is not.
I do not use it myself because I am a researcher and I often ask questions that don't have a lot of "training data" yet. And even if an area is well covered in terms of "training data", often there is a lot of "know how" that really isn't written down in an easily digestible form. It is passed verbally or through examples in person. So the idea that the "training data" is complete is also not true in general.
Many other people in this thread have already covered that books are much more structured and organized than any answer generative AI gives you. Let me discuss another reason why books still matter. Books can give you a wider view than the "consensus" that something like ChatGPT gives you. I know a lot of books in my field that derive results in different ways, and I often find value in these different approaches. Moreover, suppose that only one book answers the question that you want answered but others gloss over that subject. Generative AI likely will not know precisely what one random book said on the subject, but if you were searching through multiple books on the subject yourself, you likely would pick up on this difference.
Relevant Paul Graham quote [1]:
> We can't all use AI. Someone has to generate the training data.

[1] https://x.com/paulg/status/1635672262903750662
Authors compete by being competent, doing research, and outputting factual information. Or just, you know, being original. In a world where LLMs can’t even differentiate between a recipe and an old Reddit joke, and will tell you to put glue on pizza, it is absurd to think they “killed the book industry”.
What’s with this bloody obsession of killing other products and industries? Every time someone farts in tech, everyone starts shouting that it just killed something else. Calm down. Relax a little bit and get some perspective. You’re drowning yourself in the Kool-Aid.
LLMs did not kill the book industry, just like Bitcoin did not kill the world’s financial system.
You will have shallower knowledge than the person who reads good source content.
Do you want broad but very shallow knowledge?
I’d much rather cast a deep net that AI slop can’t touch.
I say this as someone who has used most of the LLM tools. They are tools, not replacements. And they are remarkably shallow, but great at appearing “magical”.
'Ten things to know about being a manager' and similar aren't specialist books.
The feel of quality paper.
The way the spine cracks when you first open a book.
The way the spine creases after you've read a book a few times.
https://tapas.io/episode/516960
… it’s not really the subject of the question. The trend is more people moving to digital books, and those will be the most common first interactions people have. Those qualities you mention are nostalgic, like the sound of vinyl…
https://www.audiosciencereview.com/forum/index.php?attachmen...
… and will go by the wayside as more of the people who care die.