The article is basically saying that the authors of the Stochastic Parrots paper (and their adherents) assume humans have some special magic that cannot be simulated by machine learning, and then keep moving the goalposts whenever language models reach human-level performance on a task.
Congratulations to the authors for making a great joke paper.
https://claude.ai/share/963b66a7-930c-47a6-a4ea-d7e6993347fa
You can also find the referenced paper on viXra: https://ai.vixra.org/abs/2506.0049