This seems to confirm my feeling when using AI too much. It's easy to get started, but I can feel my brain engaging less with the problem than I'm used to. It can form a barrier to real understanding, and keeps me out of my flow.
I recently worked on something very complex I don't think I would have been able to tackle as quickly without AI: a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That's what kept me from understanding the intricacies, from truly engaging with the problem, which led me to keep relying on the AI to fix issues; but at that point the AI clearly also had no real idea what it was doing, and just made things worse.
So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot 365 app, where it could explain the principles behind every step, and I would debug and fix the code and develop actual understanding of what was going on. And I finally got back into that coding flow again.
So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.
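For readers who haven't met the pipeline described above, here is a minimal, self-contained Python sketch of two of the Sugiyama phases (longest-path layering and one barycenter ordering sweep) on a toy DAG. It is illustrative only, not the commenter's code: cycle removal, dummy nodes, and the Brandes-Köpf coordinate-assignment step are all omitted, and those are exactly where most of the complexity mentioned above lives.

    # Toy sketch of two Sugiyama phases; no real layout library is assumed.
    from collections import defaultdict

    edges = [("a", "c"), ("b", "c"), ("b", "d"), ("a", "d"), ("c", "e"), ("d", "e")]
    nodes = {n for e in edges for n in e}
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)

    # Layer assignment (longest path): each node sits one layer below its
    # deepest predecessor; sources land on layer 0.
    layer = {}
    def assign(n):
        if n not in layer:
            layer[n] = 1 + max((assign(p) for p in preds[n]), default=-1)
        return layer[n]
    for n in nodes:
        assign(n)

    layers = defaultdict(list)
    for n in sorted(nodes):
        layers[layer[n]].append(n)

    # Crossing minimization: one top-down barycenter sweep, ordering each layer
    # by the average position of a node's predecessors in the layer above.
    for l in range(1, max(layers) + 1):
        pos_above = {n: i for i, n in enumerate(layers[l - 1])}
        def bary(n):
            above = [pos_above[p] for p in preds[n] if p in pos_above]
            return sum(above) / len(above) if above else 0.0
        layers[l].sort(key=bary)

    for l in sorted(layers):
        print("layer", l, layers[l])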
That's a nice anecdote, and I agree with the sentiment - skill development comes from practice. It's tempting to see using AI as a free lunch, but it comes with a cost in the form of skill atrophy. I reckon this is even the case when using it as an interactive encyclopedia, where you may lose some skill in searching and aggregating information, but for many people the overall trade-off in terms of time and energy savings is worth it, giving them room to do more or other things.
We need to avoid the trap of using AI to circumvent understanding. I think that’s where most problems with AI lie.
If I understand a problem and AI is just helping me write or refactor code, that’s all good. If I don’t understand a problem and I’m using AI to help me investigate the codebase or help me debug, that’s okay too. But if I ever just let the AI do its thing without understanding what it’s doing and then I just accept the results, that’s where things go wrong.
But as long as we're serious about avoiding that trap - not letting AI write working code we don't understand - AI shines.
A lot of vibe coding falls into the trap. You can get away with it for small stuff, but not for serious work.
This reminds me of the recurring pattern with every new medium: Socrates worried writing would destroy memory, Gutenberg's critics feared for contemplation, novels were "brain softening," TV was the "idiot box."
That said, I'm not sure "they've always been wrong before" proves they're wrong now.
Where I'm skeptical of this study:
- 54 participants, only 18 in the critical 4th session
- 4 months is barely enough time to adapt to a fundamentally new tool
- "Reduced brain connectivity" is framed as bad - but couldn't efficient resource allocation also be a feature, not a bug?
- Essay writing is one specific task; extrapolating to "cognition in general" seems like a stretch
Where the study might have a point:
Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.
So am I ideologically inclined to dismiss this? Maybe. But I also think the honest answer is: we don't know yet. The historical pattern suggests cognitive abilities shift rather than disappear. Whether this shift is net positive or negative - ask me again in 20 years.
idk, if anything I’m thinking more. The idea that I might be able to build everything I’ve ever planned out. At least the way I’m using them, it’s like the perfect assistive device for my flavor of ADHD — I get an interactive notebook I can talk through crazy stuff with. No panacea for sure, but I’m so much higher functioning it’s surreal. I’m not even using em in the volume many folks claim, more like pair programming with a somewhat mentally ill junior colleague. Much faster than I’d otherwise be.
This actually does include a crazy amount of long-form LaTeX expositions on a bunch of projects I'm having a blast iterating on. I must be experiencing what it's almost like to not have ADHD.
Maybe it’s not that we’re getting stupid because we don’t use our brains anymore.
It’s more like having a reliable way to make fire — so we stop obsessing over sparks and start focusing on building something more important.
I can definitely relate to the abstract at least. While I am more productive now, and I am way more excited about working on longer term projects (especially by myself), I have found that the minutia is way more strenuous than it was before. I think that inhibits my ability to review what the LLM is producing.
I haven't been diagnosed with ADHD or anything, but I also haven't been tested for it. It's something I have considered, but I think it's pretty underdiagnosed in Spain.
I encourage folks to listen to Cat Hicks [1], a brilliant psychologist for software teams, and her wife, teaching neuroscientist Ashley Juavinett [2], on their excellent podcast, Change, Technically, discussing the myriad problems with this study: https://www.buzzsprout.com/2396236/episodes/17378968
1: https://www.catharsisinsight.com 2: https://ashleyjuavinett.com
My friend works with people in their 20s. She recently brought up her struggles to do the math in her head for when to clock in/out for their lunches (30 minutes after an arbitrary time). The young coworker's response was "Oh I just put it into ChatGPT"
This has been the same argument since the invention of pen and paper.
Yes, the tools reduce engagement and immediate recall and memory, but also free up energy to focus on more and larger problems.
Seems to focus only on the first part and not on the other end of it.
Without engagement with the material you are studying, you will not have the context to recognize, let alone focus on, the larger problem. Deep immersion in the material allows you to make the connections. With AI spoon-feeding you, you will not have that immersion.
Druids used to decry that literacy caused people to lose their ability to memorize sacred teachings. And they’re right! But literacy still happened and we’re all either dumber or smarter for it.
Smartphones, I think, did the most damage. It used to be that you had to memorize people's phone numbers. I'm sure other things, like getting from your house to someone else's, also take less cognition when the GPS just tells you every time, instead of you busting out a map and thinking about your route. I've often found that if I preview a route I'm supposed to take, and use Google Street View to physically view key / unfamiliar parts of my route, I am drastically less likely to get lost, because "oh, this looks familiar! I turn right here!"
My wife had a similar experience. She had a college group project where they had to drive up and down some roads and write about it. She bought a map, and noticed that after reading it she was more knowledgeable about the area than her sister, who also grew up in the same area.
I think AI is a great opportunity for learning more about your subjects in question, from books and maybe even from the AI itself by asking for sources; always validate your intel against more authoritative sources. The AI just saved you 10 minutes? You can spend those 10 minutes reading the source material.
It's more complex than that. The three pillars of learning are theory (finding out about the thing), practice (doing the thing), and metacognition (being right or, more importantly, wrong, and correcting yourself). Each of those steps reinforces neural pathways. They're all essential in some form or another.
Literacy and books, saving your knowledge somewhere else, remove the burden of remembering everything in your head. But they don't take over any of those processes. So it's an immensely bad metaphor. A more apt one is GPS, which leaves you with only the practice.
That's where LLMs come in, and they obliterate every single one of those pillars for any mental skill. You never have to learn a thing deeply, because it's doing the knowing for you. You never have to practice, because the LLM does all the writing for you. And of course, when it's wrong, you're not wrong. So you learn nothing.
There are ways to exploit LLMs to make your brain grow instead of shrink. You could make them into personalized teachers, catering to each student at their own rhythm. Make them give you problems, instead of ready-made solutions. Only employ them for tasks you already know how to do perfectly. Don't depend on them.
But this isn't the future OpenAI or Anthropic are gonna gift us. Not today, and not in a hundred years, because it's always gonna be more profitable to run a sycophant.
If we want LLMs to be the "better" instead of the "worse", we'll have to fight for it.
> Make them give you problems, instead of ready-made solutions
Yes, this is one of my favorite prompting styles.
If you're stuck on a problem, don't ask for a solution, ask for a framework for addressing problems of that type, and then work through it yourself.
Can help a lot with coming unstuck, and the thoughts are still your own. Oftentimes you end up not actually following the framework in the end, but it helps get the ball rolling.
Right, nobody gains much of anything by memorizing logarithm tables. But letting the machine tell you even what you can do with a logarithm takes away from your set of abilities, without other learning to make up for it.
Or, irony was being employed and Socrates wasn’t against books, but was instead noting it’s the powerful who are against them for their facilitating the sharing of ideas across time and space more powerfully than the spoken word ever could. The books are why we even know his name, let alone the things said.
An obvious comparison is probably the habitual usage of GPS navigation. Some people blindly follow them and some seemingly don't even remember routes they routinely take.
I found a great fix for this was to lock my screen maps to North-Up. That teaches me the shape of the city and greatly enhances location/route/direction awareness.
It’s cheap, easy, and quite effective to passively learn the maps over the course of time.
My similar ‘hack’ for LLMs has been to try to “race” the AI. I’ll type out a detailed prompt, then go dive into solving the same problem myself while it chews through thinking tokens. The competitive nature of it keeps me focused, and it’s rewarding when I win with a faster or better solution.
That's a great tip, but I know some people hate that because there is some cognitive load if they rely more on visuals and have to think more about which way to turn or face when they first start the route, or have to make turns on unfamiliar routes.
I also wanted to mention that just spending some time looking at the maps and comparing differences in each service's suggested routes can be helpful for developing direction awareness of a place. I think this is analogous to not locking yourself into a particular LLM.
Lastly, I know that some apps might have an option to give you only alerts (traffic, weather, hazards) during your usual commute so that you're not relying on turn-by-turn instructions. I think this is interesting because I had heard that many years ago, Microsoft was making something called "Microsoft Soundscape" to help visually impaired users develop directional awareness.
It is hard to gain some location awareness and get better at navigating without extra cognitive load. You have to actively train your brain to get better, there is no easy way that I know of.
I try using north-up for that reason, but it loses the smart-zooming feature you get with the POV camera, like zooming in when you need to perform an action, and zooming back out when you're on the highway.
I was shocked into using it when I realized that when using the POV GPS cam, I couldn't even tell you which quadrant of the city I just navigated to.
I haven't tried this technique yet, sounds interesting.
Living in a city where phone-snatching thieves are widely reported on built my habit of memorising the next couple steps quickly (e.g. 2nd street on the left, then right by the station), then looking out for them without the map. North-Up helps anyways because you don't have to separately figure out which erratic direction the magnetic compass has picked this time (maybe it's to do with the magnetic stuff I EDC.)
Yeah, I'm a North-Up cult member too, after seeing a behind the scenes video of Jeremy Clarkson from Top Gear suggesting it, claiming "never get lost again".
This is rather scary. Obviously, it makes me think of my own personal over-reliance on GPS, but I am really worried about a young relative of mine, whose car will remain stationary for as long as it takes to get a GPS lock... indefinitely.
This is one I've never found really affects me - I think because I always plan that by the third or fourth time I go somewhere I won't use the navigation, so you're in a mindset of needing to remember the turns and which lane you should be in, etc.
Not sure how that maps onto LLM use. I have avoided it almost completely because I've seen colleagues start to fall into really bad habits (like spending days adjusting prompts to try and get them to generate code that fixes an issue that we could have worked through together in about two hours), and I can't see an equivalent way to not just start to outsource your thinking...
Some people have the ability to navigate with land markers quickly and some people don't.
I saw this first hand with coworkers. We would have to navigate large buildings. I could easily find my way around while others did not know whether to take a left or right turn off the elevators.
That ability has nothing to do with GPS. Some people need more time for their navigation skills to kick in. Just like some people need to spend more time on Math, Reading, Writing, ... to be competent compared to others.
I think it has much to do with the GPS. Having a GPS allows you to turn off your brain: you just go on autopilot. Without a GPS you actually have to create and update a mental model of where you are and where you are going to: maybe preplan your route, count the doors, list a sequence of left-right turns, observe for characteristic landmarks and commit them to memory. Sure, it is a skill, but it is sure to not be developed if there's no need for it. I suspect it's similar with AI-assisted coding or essay writing.
When I have to put together a quick fix, I reach for Claude Code these days. I know I can give it the specifics and, in my recent experience, it will find the issue and propose a fix. Now, I have two options: I can trust it, or I can dig in and understand why it's happening myself. I sacrifice gaining knowledge for time. I often choose the latter, and put my time into areas I think are more important than this, but I'm aware of it.
If you give up your hands-on interaction with a system, you will lose your insight about it.
When you build an application yourself, you know every part of it. When you vibe code, trying to debug something in there is a black box of code you've never seen before.
That is one of the concerns I have when people suggest that LLMs are great for learning. I think the opposite: they're great for skipping 'learning' and just getting the results. Learning comes from doing the grunt work.
I use LLMs to find stuff often, when I'm researching or I need to write an ADR, but I do the writing myself, because otherwise it's easy to fall into the trap of thinking that you know what the 'LLM' is talking about, when in fact you are clueless about it. I find it harder to write about something I'm not familiar with, and then I know I have to look more into it.
I think LLMs can be great for learning, but not if you're using them to do work for you. I find them most valuable for explaining concepts I've been trying to learn, but have gotten stuck and am struggling to find good resources for.
> I think the opposite, they're great for skipping 'learning' and just get the results.
yes, and cars skip the hours of walking, planes skip weeks of swimming, calculators skip the calculating ...
Curious what the long-term effects from the current LLM-based "AI" systems embedded in virtually everything and pushed aggressively will be in let's say 10 years, any strong opinions or predictions on this topic?
If we focus only on the impact on linguistics, I predict things will go something like this:
As LLM use normalizes for essay writing (email, documentation, social media, etc), a pattern emerges where everyone uses an LLM as an editor. People only create rough drafts and then have their "editor" make it coherent.
Interestingly, people might start using said editor prompts to express themselves, causing an increased range in distinct writing styles. Despite this, vocabulary and semantics as a whole become more uniform. Spelling errors and typos become increasingly rare.
In parallel, people start using LLMs to summarize content in a style they prefer.
Both sides of this gradually converge. Content gets explicitly written in a way that is optimized for consumption by an LLM, perhaps a return to something like the semantic web. Authors write content in a way that encourages a summarizing LLM to summarize as the author intends for certain explicit areas.
Human languages start to evolve in a direction that could be considered more coherent than before, and perhaps less ambiguous. Language is the primary interface an LLM uses with humans, so even if LLM use becomes baseline for many things, if information is not being communicated effectively then an LLM would be failing at its job. I'm personifying LLMs a bit here but I just mean it in a game theory / incentive structure way.
> people might start using said editor prompts to express themselves, causing an increased range in distinct writing styles
We're already seeing people use AI to express themselves in several contexts, but it doesn't lead to an increased range of styles. It leads to one style, the now-ubiquitous upbeat LinkedIn tone.
Theoretically we could see diversification here, with different tools prompting towards different voices, but at the moment the trend is the opposite.
>People only create rough drafts and then have their "editor" make it coherent.
While sometimes I do dump a bunch of scratch work and ask for it to be transformed into organized thought, more often I find that I use LLM output the opposite way.
Give a prompt. Save the text. Reroll. Save the text. Change the prompt, reroll. Then go through the heap of vomit to find the diamonds. Sort of a modern version of "write drunk, edit sober", with the LLM being the alcohol in the drunk half of me. It can work as a brainstorming step to turn fragments of thought into a bunch of drafts of thought, then to be edited down into elegant thought. Asking the LLM to synthesize its drafts usually discards the best nuggets for lesser variants.
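As a concrete illustration of that prompt-save-reroll loop (not the commenter's actual setup), here is a minimal sketch that assumes the OpenAI Python SDK; the model name, prompt, and output paths are placeholders I made up.

    # Hypothetical sketch of the "prompt, save, reroll" drafting loop described above.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()
    prompt = "Draft a short essay arguing that GPS reliance dulls spatial memory."
    out_dir = Path("drafts")
    out_dir.mkdir(exist_ok=True)

    # Reroll the same prompt several times at high temperature and keep every draft.
    # Picking out the "diamonds" and editing them down stays a manual, human step.
    for i in range(5):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,
        )
        (out_dir / f"draft_{i}.txt").write_text(resp.choices[0].message.content)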
Most people will continue to become dumber. Some people will try to embrace and adapt. They will become the power-stupids. Others will develop a sort of immune reaction to AI and develop into a separate evolutionary family.
"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."
Imo programming with AI is fairly different between vibes-based "not looking at it at all" and using AI to complete tasks. I still feel engaged when I'm more actively "working with" the AI, as opposed to a more hands-off "do X for me".
I don't know that the same distinction makes as much sense to evaluate in an essay context, because it's not really the same. I guess the equivalent would be having an existing essay (maybe written by yourself, maybe not) and using AI to make small edits to it, like "instead of arguing X, argue Y then X" or something.
Interestingly, I find myself doing a mix of both "vibing" and more careful work. The other day I used it to update some code that I cared about and wanted to understand better, where I was more engaged, but also simultaneously to make a dashboard for looking at that code's output, which I didn't care about at all so long as it worked.
I suspect that the vibe coding would be more like drafting an essay from the mental engagement POV.
I find it very useful for code comprehension. For writing code it still struggles (at least Codex does), and sometimes I feel I could have written the code faster myself rather than correcting it every time it does something wrong.
Jeremy Howard argues that we should use LLMs to help us learn; once you let them reason for you, things go bad and you start accumulating cognitive debt. I agree with this.
AI is not a great partner to code with. For me, I just use it to do some boilerplate and fill in the tedious gaps. Even for translations it's bad if you know both languages. The biggest issue is that AI constantly tries to steer you wrong; it's so subtle in programming that you only realize it a week later, when you get stuck in a vibe-coding quagmire.
Shrug, YMMV. I was definitely a bit of a Luddite for a while, and I still definitely don't consider myself an "AI person", but I've found them useful. I can have them do legitimately useful things, with varying degrees of supervision.
I wouldn't ask Cursor to go off and write software from scratch that I need to take ownership of, but I'm reasonably comfortable at this point having it make small changes under direction and with guidance.
The project I mentioned above was adding otel tracing to something, and it wrote a trace viewing UI that has all the features I need and works well, without me having to spend hours getting it set up.
The article suggests that the LLM group had better essays as graded by both human and AI reviewers, but they used less brain power.
This doesn't seem like a clear problem. Perhaps people can accomplish more difficult tasks with LLM assistance, and in those more difficult tasks still see full brain engagement?
Using less brain power for a better result doesn't seem like a clear problem. It might reveal shortcomings in our education system, since these were SAT-style questions. I'm sure calculator users experience the same effects vs mental mathematics.
I've definitely noticed an association between how much I vibe code something and how good my internal model of the system is. That bit about LLM users not being able to quote their essay resonates too: "oh we have that unit test?"
I try my best to make meta-comments sparingly, but, it's worth noting the abstract linked here isn't really that long. Gloating that you didn't bother to read it before commenting, on a brief abstract for a paper about "cognitive debt" due to avoiding the use of cognitive skills, has a certain sad irony to it.
The study seems interesting, and my confirmation bias also does support it, though the sample size seems quite small. It definitely is a little worrisome, though framing it as being a step further than search engine use makes it at least a little less concerning.
We probably need more studies like this, across more topics and with larger sample sizes, but if we're all forced to use LLMs at work, I'm not sure how much good it will do in the end.
How can you validate ML content when you don't have educated people?
Trusting everything ML produces just short-circuits the brain.
I see AI wars as creating coherent stories. Company X starts using ML, and they believe what was produced is valid and can grow their stock. Reality is that Company Y poisoned the ML, and the product or solution will fail, not right away but over time.
Talking to LLMs reminds me of arguing with a certain flavor of Russian. When you clarify based on a misunderstanding of theirs, they act like your clarification is a fresh claim which avoids them ever having to backpedal. It strikes me as intellectually dishonest in a way I find very grating. I do find it interesting though as the incentives that produce the behavior in both cases may be similar.
When you're done, let us know so we can aggregate your summarized comment with the rest of the thread comments to back out key, human informed, findings.
I think a lot more people, especially at the higher end of the pay scale, are in some kind of AI psychosis. I have heard people at work talk about how they are using ChatGPT to get quick health advice, some are asking it for gym advice, and others are just saying they dump entire research reports into it and get the summary.
What does using a chat agent have to do with psychosis? I assume this was also the case when people googled their health results, googled their gym advice and googled for research paper summaries?
As long as you're vetting your results just like you would any other piece of information on the internet then it's an evolution of data retrieval.
This is just what AI companies say so they are not held responsible for any legal issues; if a person is searching for a summary of a paper, surely they don't have time to vet the paper.
Pathologising those who disagree with a current viewpoint follows a long and proud tradition. "Possessed by demons" of yesteryear, today it's "AI psychosis".
Yes. Similar to the mass psychosis we were hearing about during COVID in relation to asking particular questions and demonstrating curiosity about controversial topics.
Seems to have somehow been replaced with this AI psychosis?
> Similar to the mass psychosis we were hearing about during COVID
Can you be more specific and/or provide some references? The "demonstrating curiosity about controversial topics" part is sounding like vaccine skepticism, though I don't recall ever hearing that being referred to as any kind of "psychosis".
Noting that it is a straw man to connect my argument with vaccine skepticism.
The mass psychosis was that early on in the COVID response, we were hearing so much early advice from people that were ahead of CDC/FDA, things like:
- Masks work (CDC/FDA discouraged them, then flip-flopped and took credit) despite it originating from Scott Alexander and skeptic communities like his; I also heard it from Tim Ferriss
- Ivermectin, mega-dosing vitamins like vitamin D and C, povidone iodine (a known disinfectant people use, claimed to be "bleach" by misinformation media) - we know these still have little to no downside, and the psychosis was to label any critical thinking about ideas like nutrition and personal health to help with COVID as anti-COVID and anti-vaccine. Psychosis-like attacks, straw men, and ad hominems shutting down critical thinking and curiosity as "psychosis"
- Asking about "Hey if I got COVID before, that immunity is as robust if not more than vaccine, what evidence supports I need the vaccine?" was shut down despite it being robust and sound questioning to ask. Curiosity was shut down, psychosis was to jump on all questioners as anti-vaccine and vaccine skeptics, calling them murderers often by sensationalist papers.
Does that answer your question, and does it feel well referenced? Let me know what you are expecting and I can deliver better references. I think you've heard about, or are probably familiar with, all the examples I used though. (Another psychosis I just thought of: to this day, the hostile, discriminatory, lock-step, vocal cancel-culture class of opinion that was blindly directed at anyone who questioned mainstream COVID policy during that time was the closest thing to mass psychosis I've ever seen. That was when I first heard the term "mass psychosis".)
I'm gonna run a new study: give one group of participants really shitty tools and another group good tools to build something, and see which one takes more brain power.
Agreed. "Reduced muscle development in farmers using a tractor mounted plow: Over four months, mechanical plow users consistently underperformed at lifting weights with respect to the control group who had been using spades. These results raise concerns about the long-term implications of tractor mounted plow reliance and underscore the need for deeper inquiry into tractor mounted plow role in farming."
I'm very impressed. This isn't a paper so much as a monograph. And I'm very inclined to agree with the results of this study, which makes me suspicious. To what journal was this submitted? Where's the peer review? Has anyone gone through the paper (https://arxiv.org/pdf/2506.08872) and picked it apart?
I love the parts where they point out that human evaluators gave wildly different evaluations as compared to an AI evaluator, and openly admitted they dislike a more introverted way of writing (fewer flourishes, less speculation, fewer random typos, more to the point, more facts) and prefer texts with a little spunk in it (= content doesn't ultimately matter, just don't bore us.)
"Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning."
“LLM users also struggled to accurately quote their own work” - why are these studies always so laughably bad?
The last one I saw was about smartphone users who do a test and then quit their phone for a month and do the test again and surprisingly do better the second time. Can anyone tell me why they might have paid more attention, been more invested, and done better on the test the second time round right after a month of quitting their phone?
I think I can guess this article without reading it: I've never been on major drugs, even medically speaking, yet using AI makes me feel like I am on some potent drug that's eating my brain. What's state management? What's this hook? Who cares, send it to Claude or whatever.
It's just a different way of writing code. Today you at least need to understand best practices to help steer towards a good architecture. In the near future there will be no developers needed at all for the majority of apps.
> In the near future there will be no developers needed at all for the majority of apps.
Software CEOs think about this and rub their hands together thinking about all the labor costs they will save creating apps, without thinking one step further and realizing that once you don't need developers to build the majority of apps your would-be customers also don't need the majority of apps at all.
They can have an LLM build their own customized app (if they need to do something repeatedly, or just have the LLM one-off everything if not).
Or use the free app that someone else built with an LLM as most app categories race to the moatless bottom.
I hate it, but I'm actually counting on this and how it affects my future earning potential as part of my early(ish) retirement plan!
I do use them, and I also still do some personal projects and such by hand to stay sharp.
Just: they can't mint any more "pre-AI" computer scientists.
A few outliers might get it and bang their head on problems the old way (which is what, IMO, yields the problem-solving skills that actually matter) but between:
* Not being able to mint any more "pre-AI" junior hires
And, even if we could:
* Great migration / Covid era overhiring and the corrective layoffs -> hiring freezes and few open junior reqs
* Either AI or executives' misunderstandings of it and/or use of it as cover for "optimization" - combined with the Nth wave of offshoring we're in at the moment -> US hiring freezes and few open junior reqs
* Jobs and tasks junior hires used to cut their teeth on to learn systems, processes, etc. being automated by AI / RPA -> "don't need junior engineers"
The upstream "junior" source for talent our industry needs has been crippled both quantitatively and qualitatively.
We're a few years away from a _massive_ talent crunch IMO. My bank account can't wait!
Yes, yes. It's analogous to our wizardly greybeard ancestors prophesying that youngsters' inability to write ASM and compile it in their heads would bring the end of days, or insert your similar story from the 90s or 2000s here (or the printing press, or whatever).
Order of "dumbing down" effect in a space that one way or another always eventually demands the sort of functional intelligence that only rigorous, hard work on hard problems can yield feels completely different, though?
no, that isn't accurate. One of the key points is that those previously relying on the LLM still showed reduced cognitive engagement after switching back to unaided writing.
The fourth session, where they tested switching back, was about recall and re-engagement with topics from the previous sessions, not fresh unaided writing. They found that the LLM users improved slightly over baseline, but much less than the non-LLM users.
"While these LLM-to-Brain participants demonstrated substantial
improvements over 'initial' performance (Session 1) of Brain-only group, achieving significantly
higher connectivity across frequency bands, they consistently underperformed relative to
Session 2 of Brain-only group, and failed to develop the consolidation networks present in
Session 3 of Brain-only group."
The study also found that LLM-group was largely copy-pasting LLM output wholesale.
Original poster is right: LLM-group didn't write any essays, and later proved not to know much about the essays. Not exactly groundbreaking. Still worth showing empirically, though.
If you wrote two essays, you have more 'cognitive engagement' on the clock as compared to the guy who wrote one essay.
In other news: If you've been lifting in the gym for a week, you have more physical engagement than the guy who just came in and lifted for the first time.
Isn't the point of a lot of science to empirically demonstrate results which we'd otherwise take for granted as intuitive/obvious? Maybe in AI-literature-land everything published is supposed to be novel/surprising, but that doesn't encompass all of research, last I checked.
If the title of your study both makes a neurotoxin reference ("This is your brain on drugs", egg, pan, plus pearl-clutching) AND introduces a concept stolen and abused from IT and economics (cognitive debt? Implies repayment and 'refactoring', that is not what they mean, though) ... I expect a bit more than 'we tested this very obvious common sense thing, and lo and behold, it is just as a five year old would have predicted.'
You are right about the content, but it's still worth publishing the study. Right now, there's an immense amount of money behind selling AI services to schools, which is founded on the exact opposite narrative.
Good. Humans don’t need to waste their mental energy on tasks that other systems can do well.
I want a life of leisure. I don’t want to do hard things anymore.
Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market, and it’s easier to convince them that whatever slop work you submitted after 0.1 seconds of effort “isn’t bad, it’s certainly great at delving into the topic!”
> Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market
I hope you’re being facetious, as otherwise that’s a selfish view which will come back to bite you. If you live in a society, what other do and how they behave affects you too.
A John Green quote on public education feels appropriate:
> Let me explain why I like to pay taxes for schools even though I personally don’t have a kid in school. It’s because I don’t like living in a country with a bunch of stupid people.
Skill issue.
I'm far more interactive when reading with LLMs. I try things out instead of passively reading. I fact check actively. I ask dumb questions that I'd be embarrassed to ask otherwise.
There's a famous satirical study that "proved" parachutes don't work by having people jump from grounded planes. This study proves AI rots your brain by measuring people using it the dumbest way possible.
Or at least only use the freely available token windows. Build your own limits.
Edit: Oh the AI apologist cult is downvoting, who would've guessed. Fucking spineless cocksuckers. People on HN have no ethics.
Shift to what? This? https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
That must be how normal people feel.
The kids are using ChatGPT for simple maths...
Funny enough, the reason he gave against books has now finally been addressed by LLMs.
I wish the north-up UX were more polished.
https://www.nature.com/articles/s41598-020-62877-0
I have to visit a place several times and with regularity to remember it. Otherwise, out it goes. GPS has made this a non-issue; I use it frequently.
For me, however, GPS didn't cause the problem. I was driving for 5 or 6 years before it became ubiquitous.
https://arxiv.org/abs/2506.08872
Accumulation of cognitive debt when using an AI assistant for essay writing task - https://news.ycombinator.com/item?id=44286277 - June 2025 (426 comments)
And asbestos and lead paint were actually useful.
Guttural vocalizations accompanied by frantic gesturing towards a mobile device, or just silence and showing of LLM output to others?
That said, if most people turn into hermits and start living in pods around this period, then I think you would be in the right direction.
It also goes against the main ethos of the AI sect to "stress-test" the AI against everything and everyone, so there's that.
https://grugbrain.dev/
Carson Gross sure knows how to stay in character.
There is a documentary called "Everything under control". In it they explained why this happened.
Basically they were scared that the public were going to buy out the masks that were needed by medical staff.
> Ivermectin
Same documentary, this was started by Musk. It does nothing and is dangerous.
Incidentally how I feel about React regardless of LLMs. Putting Claude on top is just one more incomprehensible abstraction.
A door has been opened that can't be closed and will trap those who stay too long. Good luck!
Just my $0.02, I could be wrong.
This is a non-study.
Also, monkey see, monkey speak: https://arxiv.org/abs/2409.01754
https://en.wikipedia.org/wiki/The_Ego_and_Its_Own