This paper introduces a term and instantly defines it as a definitely biased thing that is definitely happening, then spends its entirety arguing against the strawman it built itself. Not a single sentence is spent actually arguing with the idea or any of its points (other than the "partial similarities" paragraph on page... I just realized the pages aren't even numbered).
In general, the terms “LLM-like” and “human-like” are used all over the place, and in contrast with each other, but they’re never actually defined. It all just seems more vibes-based than anything else.
And “treating the human cognitive process like it’s similar to the LLM cognitive process might lead to a society where epistemics turns into a discipline where plausibility is an acceptable substitute for empiricism” has got to be one of the most ridiculous notions I’ve ever read in a paper (ctrl+F “fifth pathway is epistemic” for the exact quote).
It’s certainly a paper, that’s factual. To make sure I understand the argument:
- Scientists create software inspired by how the brain works.
- People realize it’s not all that far off.
- Many papers showing this, lots of research to make AI even more like brains.
Paper’s conclusion: “People stupid, this bad. All made up.”
Reading this feels like meeting someone who likes to hear themselves talk.
Don't be too hard on yourself. If you've never walked to the car wash, then you are probably not an LLM.
Here's the thing, though: unlike the old brain=computer analogy, this one may actually have a little truth to it. Not that your whole brain is an LLM, or even that the language part of your brain is just an LLM, but the language part may indeed be functioning in a way similar to an LLM, to the extent that it:
- Uses a hierarchy (cortical patch-panel) of parallel processing steps
- Is prediction based
- Is largely (but not 100%) auto-regressive
- Isn't actually specialized for language
The same is going to be true for all of our cortical areas/functions. The cortex is pretty much the same everywhere (it's 6 layers of neurons with a specific layer-to-layer interconnect pattern), and is therefore going to work the same everywhere.
What your cortex has that an LLM doesn't, and what therefore makes your language cortex much more capable than an LLM, is that it learns incrementally and continually, based on prediction failure. An LLM/Transformer also learns from prediction failure, but needs the LLM's whole "life history" (training set) to be present at the same time, presented over and over, and learns via a special training algorithm. Your cortex, in contrast, doesn't have any magical external trainer, so it has to learn for itself, and might be considered as half inference network and half prediction-feedback/learning network.
The other major difference between an LLM and your language cortex is that the LLM is 100% auto-regressive, while your language cortex also has external inputs that bias/control generation, so that you can talk about things you are experiencing and what is going on in your head, not just generate a self-predicting sequence of words.
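To make that last distinction concrete, here's a minimal sketch (my own toy illustration, not how any real model is implemented): `predict_next` is a hypothetical stand-in for a next-token predictor, and the only difference between the two loops is whether an external signal gets mixed into each prediction step.

```python
import random

def predict_next(context, external=None):
    # Hypothetical stand-in for a next-token predictor. A real LLM computes
    # a distribution from `context` alone; the cortex analogy adds `external`
    # (perception, internal state) as a second input that biases the choice.
    vocab = ["the", "cat", "sat", "on", "a", "mat", "food"]
    random.seed(hash((tuple(context), external)) % (2**32))
    return random.choice(vocab)

def pure_autoregressive(prompt, steps=5):
    context = list(prompt)
    for _ in range(steps):
        context.append(predict_next(context))  # feeds only on its own output
    return context

def externally_biased(prompt, sense_world, steps=5):
    context = list(prompt)
    for _ in range(steps):
        # each step is also biased by whatever is currently being experienced
        context.append(predict_next(context, external=sense_world()))
    return context

print(pure_autoregressive(["the"]))
print(externally_biased(["the"], sense_world=lambda: "hungry"))
```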
A lot of control theory goes into a steam locomotive. You don't say it explicitly, but when people make this reference they typically evoke the image of a simple steady-state steam engine and insinuate that people compared brains to whatever was the novelty of the day. I don't think that was ever the case, and comparing the brain to the intricate feedback systems, regulators, and other control-theory facets of an actual steam locomotive is a lot more apt than people make it out to be, as if the comparison had been to a simple steam engine proper (i.e. not a locomotive).
also if you look at lifeforms with and without brains, and lifeforms that do or don't do locomotion, there is a clear correlation between mammals, birds, reptiles, spiders, insects, ... which have brains and are motile, versus plants, fungi, ... which don't have brains and aren't significantly motile.
the moment you need to move (not just grow in this or that direction) you need a lot of things: muscle control, inverse kinematics, interpretation of the environment, speedy reactions, routing, planning, memory, ...
100 years ago, horror stories featured wizard-like scientists using electricity to perform magic. A few decades after that, it was nuclear fission. Then quantum mechanics decades after that.
I would argue that both are correct, because as McLuhan pointed out, the things we build come to change the way we perceive the world.
"We become what we behold. We shape our tools, and thereafter our tools shape us." -- Father John Culkin on McLuhan
That said, LLMs were modeled on the human brain, so the entire idea that we shouldn't compare ourselves to them is daft. They are similar to us because that is exactly what they're designed to be.
Regardless of the degree to which the human mind works like an LLM, my reductionist tendency has always imagined that the human mind will be found to be built from simple enough principles (but at scale, of course). In that regard, LLM as model for the human brain (or at least one aspect of it) is attractive to me. I admit it.
It's interesting that it's easier to construct the argument† that a mind like an LLM would have an easier time capturing mind as steam engine than a mind like a steam engine would have capturing mind as LLM.
†: come up with each token after the other that induces a graspable interpretation of a sequence of tokens representing a potential judgement
There's something really interesting here with Goguen Institutions. Also sometimes an argument just "clicks" into place fully-formed, rather than being generated token-by-token? Is that "knowing like a steam engine?"
Prompting the imagination (for example, writing prompts) was a thing before generative AI even emerged in a meaningful sense. And it does work.
This is one of the reasons why I do literate programming using org-mode. It's easy to lose track of what I was thinking when I wrote something, and what the original structure and goal was as I continue to write it. Org-mode helps me keep my thoughts in order in English and interlineate code in with them, then mash a key to spit out compilable source. I don't use it for everything, but it sure comes in handy when I do use it.
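For anyone who hasn't seen it, a minimal sketch of what that looks like (the file names are just for illustration): prose and source blocks interleave in one .org file, and C-c C-v t (org-babel-tangle) writes every block marked with a :tangle header out to the named source file.

```org
* Parse the input
Explain in plain English what the parser should do, right next to the code.

#+BEGIN_SRC python :tangle example.py
def parse(line):
    """Split a 'key=value' line into a (key, value) pair."""
    key, _, value = line.partition("=")
    return key.strip(), value.strip()
#+END_SRC

* Why this structure
More prose here; running org-babel-tangle (C-c C-v t) collects the blocks
above into example.py, ready to compile or run.
```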
> When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs.
I think I experienced this when I learned about LLMs, chain of thought, thinking tokens, short-term memory context, and long-term memory context. I began applying these concepts to real life and reasoning about how our brains work as if these concepts described how our brains actually function. But maybe this is more akin to the Tetris effect?
People have been doing this since the invention of clockwork. Analogies are useful, even when they're utterly wrong, since they provide a perspective and that perspective is not necessarily wrong. Who knew?
A more insidious related pathology: marriage-induced projected LLMorphism... where your wife constantly accuses you of having the personality of a large language model.
My boss has started to verbalize like an LLM, lol. I can tell it is not intentional; I think getting exposed to certain patterns repeatedly is causing some form of imprinting.
Kids are more susceptible to unknowing imprinting in their formative years. I wonder if a generation will grow up communicating like an LLM?
I think it's meaningless anyway. A calculator doesn't multiply numbers like a human does. The important part is to develop systems that can do many human tasks.
Early LLMs typically tried to do multiplication "in their head" by recall.
Now most LLMs do multiplication using a tool call to a programming language, akin to a person reaching for a calculator rather than relying on a learned table or working the problem out mentally.
The high-level comparison between what LLMs do and what humans do for this example is fairly parallel.
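A rough sketch of that loop, with a made-up `chat` function standing in for whatever model API is being used: instead of answering in prose, the model emits a structured tool request, the host runs it, and the result is fed back into the conversation.

```python
def chat(messages):
    # Hypothetical model call: in a tool-calling setup the model may reply
    # with a request to run code instead of answering directly.
    return {"tool": "python", "code": "48123 * 9177"}

def run_tool(request):
    # The host evaluates the requested expression (sandboxed in practice)
    # and returns the output as text, like a person reading a calculator.
    if request["tool"] == "python":
        return str(eval(request["code"], {"__builtins__": {}}))
    return "unsupported tool"

messages = [{"role": "user", "content": "What is 48123 * 9177?"}]
request = chat(messages)
messages.append({"role": "tool", "content": run_tool(request)})
print(messages[-1])  # {'role': 'tool', 'content': '441624771'}
```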
Agreed. I think we, as humans, like to think in terms of various metaphors when it comes to how we perceive ourselves in the world (for example, "I am not some sort of automaton/robot" when objecting to some boss way back when).
Nothing new under the sun. When clocks and precision mechanics started in the 17th century, there was a tendency to view humans as "machines". Computers came, suddenly human brains are "computers". Now we're LLMs.
If scientists make green jelly that emits thoughtful judgements, humans will be compared to green jelly.
None of these analogies are entirely wrong; they're just incomplete.
Humans are similar to machines for example in that our bodies convert energy to do work through a series of pumps and pipes and sensors and actuators. Life is not animated by some magic force but instead operates under the same physical laws that machines use to function.
Looks like he mostly publishes work about "social behavior".
This "paper", IMO, is just saying "Hey, I notice this is happening. This is why it could be interesting for social science researchers" without any real research or results.
The idea that humans could "work like" LLMs (or vice versa) is very vague and can be stretched to say pretty much anything. It's a pointless question IMO. I don't think I do, but maybe I really do on the inside and my consciousness makes me think I don't! We don't know.
The author lightly touches on other ways humans have viewed cognition, "computationalism" as one, but somewhat brushes these aside as though LLMs are somehow a unique expression of this tendency. That seems unlikely to me, and it's pretty early days for the tech to start assuming and concluding from every initial hot take on "AI is Doing $Thing".
Especially when this particular thing is just one in a very long line of metaphors humans make to our own minds' operations every time a new major technology comes to play a pervasive role in society. Computers, steam engines, even aqueducts were not immune to comparisons of thought flowing like water, funneled by deliberate intent, etc. And for some, a certain amount of hand-wringing worry or even moral panic about "what it's doing to us", e.g. taking away critical thinking because "OMG calculators!"
> "LLMorphism may encourage objectification when people are seen as replaceable mechanisms or output-generating systems. However, LLMorphism does not necessarily involve using another person instrumentally. Its primary content is representational: it concerns how humans are conceptualized, not necessarily how they are exploited."
This is quite a scary truth. A year or two ago, I saw a person with a job where he wrote small articles for a website. The boss contacted him, asking if he wanted to become an AI-assisted writer instead for less money. "No," he said, wanting the full payments for his writing prowess. A week or two later, they canned him, and the website's articles nosedived in quality.
LLMs expand the supply of "competent" labor. After mass firings, the remaining workers, desperate for income, accept lower wages for AI-assisted roles. Wealth consolidates upward while wages race downward.
So I think LLMorphism might tie closely to exploitation: mass firings and lower salaries all around, while the 0.01% at machine-learning companies consolidate wealth, in some cases by servicing numerous roles autonomously and in others by reducing salaries thanks to the larger body of "qualified" workers who can technically finish the job despite not having been qualified in the past.
> "LLMorphism is also distinct from predictive processing and related Bayesian theories of cognition. Predictive processing holds that the brain continuously generates predictions about sensory input and updates internal models in light of prediction error (Clark, 2013; Friston, 2010; Hohwy, 2013). But predictive processing does not imply that humans are LLM-like, nor that human understanding is merely text generation. Indeed, many predictive-processing accounts are deeply embodied and action-oriented (Allen & Friston, 2018; Clark, 2015; Pezzulo et al., 2024)."
I agree wholeheartedly here, because neural networks (NNs) are usually stateless functions (recurrent ones aside). On the one hand, with an infinitely fast computer, you retrieve the answer instantly. Brains, on the other hand, have neurons that communicate with signal delay. I bet that if, in some weird world, we could simulate a brain with zero delay, the mind would cease to function correctly. Plus, neurons accumulate charge steadily before firing to nearby neurons. With NNs, you simply add up all the numbers (the "charge"), and the ReLU function (or sigmoid, for old-school machine-learning researchers) instantly "simulates" a neuron firing off to the neurons connected to it.
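A toy contrast, purely illustrative: the artificial "neuron" below is a stateless weighted sum pushed through ReLU and computed in one shot, while the biological-flavored one carries charge between calls and only fires once it crosses a threshold, so timing matters.

```python
def relu_neuron(inputs, weights, bias):
    # Stateless: the whole computation happens "instantly" in one call.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation

class IntegratingNeuron:
    # Toy leaky integrate-and-fire flavor: state persists across time steps.
    def __init__(self, threshold=1.0, leak=0.9):
        self.charge = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, incoming):
        self.charge = self.charge * self.leak + incoming
        if self.charge >= self.threshold:  # accumulated enough: fire and reset
            self.charge = 0.0
            return 1
        return 0

print(relu_neuron([0.5, -0.2], [1.0, 2.0], bias=0.1))  # ~0.2, immediately

neuron = IntegratingNeuron()
print([neuron.step(0.4) for _ in range(5)])  # [0, 0, 1, 0, 0]: charge builds before a spike
```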
> "and and"
Just a heads up, you have a typo here.
> "LLMorphism may therefore make fluency appear sufficient for understanding and, in doing so, devalue expertise and weaken educational norms."
I have heard the horror stories that youngsters these days are attached to screens with less ability to focus, but I'm not scared of that claim yet. For every generation, there have been those who kick the can down the road, skirting responsibilities, and all that changes with the generation is the activity: instead of kicking a can down the road, they slide their finger across their phone's screen. The real test is tracking how many high-school students are in AP courses, learning Newtonian mechanics, electromagnetism, and of course calculus, among a couple of others. Is that number dropping relative to the 90s and the aughts? Is it roughly the same as a percent of students? Or is it even going up, with LLMs perhaps helping some types of learners explore topics that help them qualify for AP coursework? Now, if the percent is nosediving, then I will be terrified for what the future holds for them and for me.
> "clinicians also rely on how patients appear. Research on clinical communication shows that nonverbal behaviour is central to physician–patient interaction, including the expression of emotion, empathy, distress, and relational understanding"
LLMs are becoming multimodal, with pictures "understood." I can't think of any reason LLMs won't catch these non-verbal signals in the future.
> "The risk may be particularly acute in mental health, where suffering can be difficult to articulate and where coherent self-description does not always track clinical severity; behavioral and nonverbal signs such as psychomotor retardation, agitation, facial expression, vocal dynamics, and posture can provide clinically relevant information beyond verbal report (Dibeklioğlu et al., 2015)"
This is a great point, because a lot of people with schizophrenia and bipolar disorder with psychotic features suffer from anosognosia, the state of not knowing they have a medical condition.
> "In this sense, LLMorphism may contribute to a broader epistemic shift: from evaluating whether claims are grounded, justified, and accountable, to evaluating whether they are coherent, fluent, and plausible."
Grifters have always weaponized confident fluency over evidence. Anti-science plagues America right now. A gullible few absorb the message that ivory-tower elites intentionally block heterodox, paradigm-shifting research, sowing seeds of doubt about academia. For example, I saw a doctor's YT channel that claimed high cholesterol isn't necessarily bad and that statins should be avoided, all while recommending saturated fats over seed oils. Of course, he sells a book with his "suppressed" knowledge alongside an online store selling US$90/month supplements that his book recommends. These types claim academics keep them out of the journals out of self-preservation, since the "paradigm shift" would cause their grants to go bye-bye.
In reality, these charlatans combine cherry-picking low-quality studies, telling a good story of the underdog fighting the establishment, and ignoring the body of evidence behind the current expert consensus. The grift is illogical: as if researchers wouldn't love to spark a paradigm shift, becoming semi-famous and making more money; as if research weren't done in a decentralized way across many countries, funded by charities, different governments, and corporations in competition with each other. Collusion without whistleblowers is simply impossible. Also, there's a difference between the corporate arm of medicine, which has been sued for billions before, and researchers who just follow the evidence to advance their research careers and help everyone on the planet. Trust expert consensus when it's this independent, decentralized, and financed from all over the place with no ulterior motive. These grifters also pull the "Science has been wrong in the past" move, like Mac from It's Always Sunny in Philadelphia. Science is in a state of constant flux where new evidence comes in, and the best guess, the one explaining as much of the evidence as possible right now, might change.
> "Early childhood education is organized around relational pedagogy, attachment, affect regulation, and development (Cliffe & Solvanson, 2023)."
One aspect here: mass-produced cartoons for kids already teach plenty and do a decent job of it. I'm not convinced that, two decades from now, we won't have human-looking cyborgs doing teaching like this.
> "The broader point, however, is that public debate on AI has focused mainly on anthropomorphism: whether we are giving too much mind to machines."
This part reminds me of some recent research out of Anthropic. They uncovered that a few hundred vectors in their activation space linked up to concrete emotional states. They dubbed them functional emotions, while warning these have nothing to do with the subjective experience of sentience. That paper had fantastic details in it, though. They tested things by adding a big magnitude along a particular functional-emotion vector, running some tests, and seeing how the model's behavior changed.
When "desperate," it not only hallucinated more, as if it "felt" it must answer something, but it also reward-hacked more often. In a simulated situation, "desperate" Claude Opus blackmailed ~80% of the time, whereas regular Opus did so ~20% of the time and "calm" Opus ~0% (likely not exactly zero, but they ran too few iterations of the test to approximate the probability).
When curious / interested, it altered how it searched through the solution space by considering more options. It even went deeper into a promising solution before ending its calculations when allowed to do so.
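For anyone curious what "adding a big magnitude along one of these functional-emotion vectors" looks like mechanically, here's a generic activation-steering sketch; it's not Anthropic's actual code, and the layer, direction vector, and strength are all made up. The idea is just: add a scaled direction vector to the hidden activations during the forward pass and see how downstream behavior shifts.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 8

# Made-up "desperation" direction; in the real work such directions are
# derived from model activations, not sampled at random.
desperation = rng.normal(size=hidden_dim)
desperation /= np.linalg.norm(desperation)

def layer(hidden, steer=None, strength=0.0):
    # Toy stand-in for one transformer layer: do the usual computation,
    # then optionally push the output along the steering direction.
    out = np.tanh(hidden)
    if steer is not None:
        out = out + strength * steer
    return out

hidden = rng.normal(size=hidden_dim)
plain = layer(hidden)
steered = layer(hidden, steer=desperation, strength=8.0)
print(np.linalg.norm(steered - plain))  # ~8.0: a large shift along the "emotion" axis
```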
Magical thinking will always live in the new.
"We become what we behold. We shape our tools, and thereafter our tools shape us." -- Father John Culkin on McLuhan
That said, LLM's were modeled on the human brain so the entire idea that we shouldn't compare ourselves to them is daft. They are similar to use because that is exactly what they're designed to be.
Regardless of the degree to which the human mind works like an LLM, my reductionist tendency has always imagined that the human mind will be found to be built from simple enough principles (but at scale, of course). In that regard, LLM as model for the human brain (or at least one aspect of it) is attractive to me. I admit it.
†: come up with each token after the other that induces a graspable interpretation of a sequence of tokens representing a potential judgement
This is one of the reasons why I do literate programming using org-mode. It's easy to lose track of what I was thinking when I wrote something, and what the original structure and goal was as I continue to write it. Org-mode helps me keep my thoughts in order in English and interlineate code in with them, then mash a key to spit out compilable source. I don't use it for everything, but it sure comes in handy when I do use it.
I think I experienced this when I learned about LLMs, chain of thought, thinking tokens, short-term memory context, and long-term memory context. I began applying these concepts to real life and reasoning about how our brains work as if these concepts described how our brains actually function. But maybe this is more akin to the Tetris effect?
Kids, are more susceptible to unknowingly imprinting in their formative users, I wonder if a generation will grow up communicating like an LLM?
Now most LLMs do multiplication using a tool call to a programming language, akin to a person reaching for a calculator rather than relying on a learned table or working the problem out mentally.
The high level comparison between what LLMs do and what humans do" for this example is fairly parallel.
Actually, this happens already in a modular way AFAIK…
I don’t think this way of thinking started with LLM. Does Systems Based Thinking also attribute too little mind to humans?