In any case, the short answer is "No!". There is a LOT written about language, and I find it hard to believe that almost ANY idea presented here is really new.
For example, have these guys run their ideas past Schank's "conceptual dependency" theory?
The article takes the finding that we appear to treat non-constituents (e.g. “in the middle of the”) as “units” to mean that language is more like “snapping Legos together” than “building trees.”
But linguists have proposed the possibility that we store “fragments” to facilitate reuse—essentially trees with holes, or equivalently, functions that take in tree arguments and produce tree results. “In the middle of the” could take in a noun-shaped tree as an argument and produce a prepositional phrase-shaped tree as a result, for instance. Furthermore, this accounts for the way we store idioms that are not just contiguous “Lego block” sequences of words (like “a ____ and a half” or “the more ___, the more ____”). See e.g. work on “fragment grammars.”
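To make the “tree with a hole” idea concrete, here is a minimal sketch in Python (my own illustration, not taken from the paper or the fragment-grammar literature): a stored fragment is just a function from a noun-shaped tree to a PP-shaped tree. The phrase-structure labels are standard, but the representation itself is purely illustrative.

    from dataclasses import dataclass
    from typing import Callable, Tuple

    @dataclass(frozen=True)
    class Tree:
        label: str                          # e.g. "PP", "NP", "N", or a word
        children: Tuple["Tree", ...] = ()   # leaves have no children

    # A fragment is "a tree with a hole": a function from the filler
    # subtree to the completed phrase.
    Fragment = Callable[[Tree], Tree]

    def in_the_middle_of_the(noun: Tree) -> Tree:
        """Stored chunk 'in the middle of the ___', waiting for a noun-shaped tree."""
        return Tree("PP", (
            Tree("P", (Tree("in"),)),
            Tree("NP", (
                Tree("Det", (Tree("the"),)),
                Tree("N", (Tree("middle"),)),
                Tree("PP", (
                    Tree("P", (Tree("of"),)),
                    Tree("NP", (Tree("Det", (Tree("the"),)), noun)),
                )),
            )),
        ))

    # Plugging in the noun "night" yields the full prepositional phrase
    # "in the middle of the night" as one PP-shaped tree.
    pp = in_the_middle_of_the(Tree("N", (Tree("night"),)))

On this view, reusing the chunk and building a tree aren't in conflict: the chunk is stored whole, but applying it still yields ordinary tree structure.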
I can’t access the actual Nature Human Behaviour article, so perhaps it discusses these connections.
Unless you’re referring to the academic paper, I’m not getting a paywall.
I read the article (but not the paper), and it doesn’t sound like a no. I also don’t find the claim that surprising, given that in other languages the word matters a lot less.
If the question you're answering is the one posed by the Scitechdaily headline, "Have We Been Wrong About Language for 70 Years?", you might want to work a bit on resistance to clickbait headlines.
The strongest claim that the paper in question makes, at least in the abstract (since the Nature article is paywalled), is "This poses a challenge for accounts of linguistic representation, including generative and constructionist approaches." That's certainly plausible.
Conceptual dependency focuses more on semantics than grammar, so it isn't really a competing theory to this one. Both theories do challenge how language is represented, but in different ways that don't overlap much.
It's also not as if conceptual dependency is the last word on natural language in humans - after all, it was developed for computational language representation, and LLMs have made it essentially obsolete for that purpose.
Meanwhile, the way LLMs do what they do isn't well understood, so we're back to needing work like the OP to try to understand it better, in both humans and machines.