Where Is GPT in the Chomsky Hierarchy?

(fi-le.net)

53 points | by fi-le 5 days ago

13 comments

  • PaulHoule 11 hours ago
    The Chomsky Hierarchy isn't the right way to think about it; in fact, the Chomsky paradigm probably held back progress in language technology over the past 70 years.

    In the Kuhnian sense, Syntactic Structures was the vanguard of a paradigm shift that let linguists do "normal science" in terms of positing a range of problems and writing papers about them. It was also useful in computer science for formally defining computer languages that are close enough to natural languages that people find them usable.

    On the other hand, attempts to use the Chomsky theory to develop language technology failed miserably outside of very narrow domains, and in the real world Markov models and conditional random fields often outperformed it when the task was "understanding", "segmentation", etc.

    From an engineering standpoint, the function that tells whether a production is in the grammar, which is central to the theory, is not so interesting for language technology. I mean, if LLMs were grammar-centric, an LLM would go on strike if you got the spelling or grammar wrong, or correct you in a schoolmarmish William Safire way; it is more useful for LLMs to silently correct obvious mistakes, the same way that real language users do.

    The other interesting thing is the mixing of syntax and semantics. For a long time I believed the "Language Instinct" idea of Pinker, in which the ability to serialize and deserialize language is a peripheral attached to an animal, and you need the rest of the animal and even the surrounding world to have "human" competence. Now LLMs come around and manage to show a lot of linguistic competence without the rest of the animal, and boy, do we have egg on our faces. Coming out of the shadow of the Chomsky theory, structuralism will rear its head again. (E.g. "X is structured like a language" got hung up on the unspoken truth that "language is not structured like a language.")

    • nabla9 9 hours ago
      You saw the name "Noam Chomsky" and that started a process in your mind that generated the standard spiel about Syntactic Structures.

      The Chomsky Hierarchy is his more fundamental work, joining computer science and linguistics. It was published in IRE Transactions on Information Theory: Chomsky, Noam (1956). "Three models for the description of language" https://chomsky.info/wp-content/uploads/195609-.pdf

      Type-3 grammar ≡ finite-state automaton

      Type-2 grammar ≡ Non-deterministic pushdown automaton

      Type-1 grammar ≡ Linear-bounded non-deterministic Turing machine

      Type-0 grammar ≡ Turing machine

      ps. Chomsky was already aware of finite state automata and Turing machines and understood that they match Type-3 and Type-0. The pushdown automaton was invented later, and the connection between Type-1 grammars and linear bounded automata was made a few years later.
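      The first two rungs can be made concrete with a sketch (my own toy examples, not from the paper): a Type-3 language needs only finitely many states, while a^n b^n, the textbook Type-2 language, needs a stack (here collapsed to a counter).

```python
def dfa_accepts(s):
    """Type-3 / finite-state: strings over {a, b} that end in 'ab'.
    A fixed amount of state suffices no matter how long the input is."""
    prev_was_a = False
    accept = False
    for ch in s:
        accept = prev_was_a and ch == "b"
        prev_was_a = (ch == "a")
    return accept

def anbn_accepts(s):
    """Type-2 / pushdown: a^n b^n (n >= 0). A finite automaton cannot
    count the a's; a pushdown automaton can (the stack is a counter)."""
    depth = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:
                return False   # an 'a' after a 'b' is out of order
            depth += 1         # push
        elif ch == "b":
            seen_b = True
            depth -= 1         # pop
            if depth < 0:
                return False   # more b's than a's
        else:
            return False
    return depth == 0

print(dfa_accepts("abab"))     # True
print(anbn_accepts("aaabbb"))  # True
print(anbn_accepts("aabbb"))   # False
```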

      • yongjik 7 hours ago
        The Chomsky Hierarchy pertains to "languages" as defined in the theory of computation: i.e., a language is a subset of the set of all finite sequences of symbols over some fixed alphabet. If a sentence (a particular finite sequence of symbols) is in the subset, then it is a "valid" sentence of the language. Otherwise it is invalid.

        It should be already clear from this that this notion of language is rather different from natural languages. For example, if there is a formal language that contains "Good morning" and "My hovercraft is full of eels" as valid sentences, then nothing distinguishes these sentences any more. (Of course you could add annotations and build semantic values but they are not essential to the discussion of formal languages.)

        It gets a bit more ridiculous when you try to connect LLMs to the Chomsky hierarchy. Modern LLMs do not really operate on the principle of "is this a valid sentence?", yet they provide vastly superior results when it comes to generating natural-sounding sentences.

        I think LLMs have put an end to any hope that formal language theory (in the style of Chomsky Hierarchy) will be relevant to understanding human languages.

      • xg15 9 hours ago
        I think it could be useful to combine the two paradigms to maybe get a better understanding of what transformers can and cannot learn.

        E.g. would it be possible to create an algorithm that takes a grammar (and maybe a desired context window size) as input and constructs a transformer network that generates sentences exactly from that grammar?

        ("Construct" meaning directly setting the weights, without any iterative training process)

        • nabla9 9 hours ago
          They are combined. The Chomsky Hierarchy is at the core of modern computer science because it maps perfectly onto automata theory. They are always taught together in computer science.

          >E.g. would it be possible to create an algorithm that takes a grammar (and maybe a desired context window size) as input and constructs a transformer network that generates sentences exactly from that grammar?

          You don't need transformers for what you describe. That's 101 theory of computation class where you learn about automata, grammars, parsers, and generators.
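          For the generator side, a sketch (a toy grammar of my own, not from any class) of top-down expansion from a context-free grammar:

```python
import random

# Productions of a tiny context-free grammar; symbols absent from the
# table are terminals.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["linguist"], ["parser"]],
    "V":  [["sees"], ["rejects"]],
}

def generate(symbol="S"):
    """Expand a nonterminal by picking a production at random; every
    string produced is in the grammar's language by construction."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in generate(part)]

print(" ".join(generate()))  # e.g. "the parser rejects the linguist"
```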

          • xg15 8 hours ago
            Yeah, I know the theory of formal grammars and automata that the Chomsky hierarchy is part of. What I meant is that language models and specifically transformer networks are usually entirely separate from that theory, so it would be useful to build a bridge between "modern" language processing using GPTs/LLMs and the classical formal theory.

            The most obvious overlap in usage is with programming languages: LLMs can parse and generate code in formal languages, but their processing model is completely different from syntax trees and parsers. So the question is, how do they store the formal structure of a programming language and could this be mapped back in any way to a grammar or automaton?

            • PaulHoule 5 hours ago
              The way I see it is that attention is graph structured -- this token here is connected to that token there and so forth by the attention lighting up or also in the sense that there are a bunch of places in the document where people are talking about "Noam" or "Chomsky" or "Noam Chomsky" or "The Author" or "him", etc.

              Alternately, if you were looking at it from a semantic web perspective, the knowledge expressed in a document is a graph, and that graph structure is more fundamental than the tree structure of a text, because you could express the same knowledge in different orders. Serialization fundamentally requires putting things in some specific order, which might be chronological (work from 2005 to the present, as a play with many acts), or organized around some conceptual hierarchy (kitsune legends, self psychology, character acting, animal and human behavior and physiology, ...), or the minimization or elimination of backward references (whatever it is that the C spec does a touch wrong but post-Common Lisp specs do right), etc. Ultimately the graph is pruned away into a tree where the remaining links are denoted by syntactic features at the local scale of the document; you're kinda left filling in the rest of the links with some combination of pragmatics, logical inference, something like SAT solving, etc.

              A conventional parsing point of view sees a Java program as a tree, but for ordinary purposes it does not matter what order you put the fields and methods in. And even though procedural programs are allegedly a sequence of operations done in a certain order, it is frequently the case that it does not matter at all whether you run line 71 or line 75 first. So it is often the case that the graph is the real thing, and the trees that we're so comfortable with are the shadows on the walls of the cave.

      • Legend2440 9 hours ago
        The trouble is that English doesn't fit neatly into any of these categories. It has features that make it at least context-free, but it can't handle other features of context-free languages, like unlimited nesting.

        Ultimately these are categories of formal languages, and natural language is an entirely different kind of thing.

        • nabla9 8 hours ago
          Strictly speaking natural languages fit into Context-Sensitive (Type-1) in Chomsky Hierarchy, but that's too broad to be useful.

          In practice they are classified into the MCSL (mildly context-sensitive languages) subcategory defined by Aravind K. Joshi.
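          As a sketch (my own illustration, not from the comment): the textbook language just beyond context-free is a^n b^n c^n, the kind of crossing dependency that mildly context-sensitive formalisms were built to capture. It is Type-1 in the hierarchy, yet trivially recognizable in linear time, which hints at why Type-1 as a whole is too broad a bucket.

```python
def anbncn(s):
    """Recognize a^n b^n c^n (n >= 0): not context-free (three matched
    counts), but recognizable with a single linear scan."""
    n = len(s) // 3
    return len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n

print(anbncn("aabbcc"))  # True
print(anbncn("aabbc"))   # False
```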

          • krnlclnl 8 hours ago
            Sure, if you accept and agree with Joshi.

            No reason to do that though, except to validate some random person's perspective on language. The sky will not open and smash us with a giant foot if we reject such an obligation.

    • czzr 10 hours ago
      I don’t think LLMs contradict the Pinker description - it just turns out that the output stream embodies a lot of information, and you can construct a model (of some reasonable fidelity) of the original sources from sufficient examples of the output stream.
    • jesuslop 4 hours ago
      In a complementary story, from light reading I gather that Zellig Harris, who advised Chomsky's dissertation, contributed decisively to distributional semantics, a forerunner of word embeddings, in turn a key to the door of neural NLP. The ML guys had a shiny toolbox ready to use for decades. Had it been more fashionable...

      Sahlgren, The Distributional Legacy: The Unreasonable Effectiveness of Harris's Distributional Program. https://doi.org/10.1080%2F00437956.2024.2414515

    • ogogmad 11 hours ago
      I think most PL parsers use recursive descent, which doesn't require any formal language theory. That said, it's possible that formal language theory is what enabled regular expressions to enter widespread use - first via theory; then to make tokenisers; then included in text editors; and then to make grep.
      • Maxatar 10 hours ago
        You're going to have a very hard time parsing a language using recursive descent if you don't understand some formal language theory. Recursive descent algorithms have a lot of trouble with left-recursion, and can result in arbitrary backtracking if you're not careful, which can blow up memory and time. To fix left-recursion you need to rewrite your grammar, which means familiarity with formal language theory. To avoid backtracking you typically use an LL(N) or variations thereof, which once again require having a pretty good understanding of formal language theory.
        • FeepingCreature 10 hours ago
          Nah. Source: have done it without formal language theory. You can do most things by printf debugging and kludging.

          (Left-recursion? Use a while loop! Mutable state is a pathway to many abilities functional programmers consider unnatural.)
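          A minimal sketch of that kludge (my own example, assuming the usual left-recursive grammar expr ::= expr '-' NUM | NUM): the left recursion becomes a while loop with an accumulator, and left-associativity falls out for free.

```python
def parse_num(tokens, pos):
    # NUM: a single integer token
    return int(tokens[pos]), pos + 1

def parse_expr(tokens, pos=0):
    # expr ::= expr '-' NUM | NUM
    # Naive recursive descent would recurse forever on the first
    # alternative; instead, iterate over trailing "- NUM" groups.
    value, pos = parse_num(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "-":
        right, pos = parse_num(tokens, pos + 1)
        value -= right  # fold left: (7 - 3) - 2, not 7 - (3 - 2)
    return value, pos

print(parse_expr("7 - 3 - 2".split())[0])  # 2 (left-associative), not 6
```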

          • PaulHoule 9 hours ago
            You can definitely write parsers and lexers knowing just enough to be dangerous.

            I remember being in the PHP culture where people wrote regex-based hacks instead of the character-at-a-time or token-at-a-time state machines or state machine + a stack kind of things which would be too slow in a language like that.

            The less you know the faster you get in trouble when things get tough.

          • Maxatar 9 hours ago
            My statement was specific to recursive descent parsing.
            • FeepingCreature 4 hours ago
              But recursive descent parsing is implemented with functions operating on parser state. The whole point of it is that it's very easy to write a parser, because the abstraction exactly matches function calling in programming languages. So you can use the whole range of PL primitives too.
    • suddenlybananas 9 hours ago
      > For a long time I believed the "Language Instinct" idea of Pinker in which the ability to serialize and deserialize language is a peripheral that is attached to an animal and you need the rest of the animal and even the surrounding world to have "human" competence.

      Yeah you just need 10 trillion tokens of data.

    • DiscourseFan 10 hours ago
      Don't worry, the LLMs are already post-structuralist (at least the ones working on them are often familiar with those thinkers). Which is incredibly disturbing, I think, for many linguists who have spent their entire careers fighting against the French, only to have such philosophical and intellectual work become the most important in the development of this new technology. Hey, at least I'm employed! (And they said it "wasn't serious," that Chomskian Linguistics was the "real science"!)
      • lukev 10 hours ago
        Well, technically, they're more structuralist than post-structuralist. Arguably they even prove that Saussure's original structuralist hypothesis was correct.
        • PaulHoule 5 hours ago
          It frustrates me to no end because I can't help but think Saussure is a charlatan, and that's from a person who loves the early Foucault and Badiou and Baudrillard [1], but I'm not so sure about Debord and Barthes and Derrida.

          [1] ... as a guilty pleasure not in the "slumming" sense of Backstabbed in a Backwater Dungeon but that I feel that he, like Erving Goffman, brings out the spiritual desolation in me

        • DiscourseFan 9 hours ago
          I really do not like posting anything about my work on this website. Not only am I extremely constrained in what I can say, many engineers seem to vilify any attempt to legitimate the business. If I could speak freely there'd be no end to what I would tell you to the contrary.
          • lukev 7 hours ago
            Well, I'm very interested in this, and if you have things you wish you could speak freely about I can be reached directly via several mediums. My name is not hard to find and I'm on Bluesky & LinkedIn if you want to reach out to me there. I'm one of the authors of this book if you want my full name to search: https://www.amazon.com/Practical-Clojure-Experts-Voice-Sourc...
  • pohl 11 hours ago
    Regular...albeit astronomically large (unless we're granting idealizations like infinite context, etc.)
    • Netcob 9 hours ago
      Exactly, same as all real-world computers.

      Although to be fair, nothing above regular (that I'm aware of; it's been a while) requires infinite space, just unbounded space... you can always make a real Turing machine as long as you keep adding more tape whenever it needs it.
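      The "keep adding more tape" move is easy to sketch (a toy machine of my own, not from the thread): the tape is a list that grows whenever the head runs off the end, so storage is unbounded without ever being infinite.

```python
def run_tm(transitions, tape, state="start", blank="_"):
    """Run a Turing machine whose tape is a Python list, extended on
    demand whenever the head moves past either end."""
    head = 0
    while state != "halt":
        if head == len(tape):
            tape.append(blank)        # add more tape on the right
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
        if head < 0:
            tape.insert(0, blank)     # add more tape on the left
            head = 0
    return tape

# Example machine: increment a binary number written least-significant-bit
# first, extending the tape when the carry overflows.
inc = {
    ("start", "1"): ("start", "0", "R"),  # 1 + carry -> 0, keep carrying
    ("start", "0"): ("halt",  "1", "R"),  # 0 + carry -> 1, done
    ("start", "_"): ("halt",  "1", "R"),  # grow the number by one bit
}

print(run_tm(inc, list("111")))  # ['0', '0', '0', '1']  (7 + 1 = 8, LSB-first)
```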

      • pohl 9 hours ago
        Yeah, meant to say unbounded
  • guerrilla 10 hours ago
    This is a trivial category mistake.
  • ursAxZA 6 hours ago
    Honest question: if this hierarchy isn’t suitable for comparing humans with humans, what does it mean to apply it to machines? What, exactly, is being measured?

    ───

    I’m not arguing it’s useless — I’m trying to understand its scope here.

  • aghilmort 11 hours ago
    there’s decent work on computational reasoning power of transformers, SSMs, etc.

    some approximate snippets that come to mind are that decoder-only transformers recognize AC^0 and think in TC^0, that encoder-decoders are strictly more powerful than decoder-only, etc.

    Person with last name Miller, iirc, if you poke around on arXiv; a few others. It's been a while since this was top of mind, so ymmv on the exact correctness of the above snippets

    • ctoa 11 hours ago
      You are probably thinking of Merrill (whose work is referenced towards the end of the article).
  • weinzierl 9 hours ago
    "This covers practically all programming languages, because they conform to a syntax tree where each branch is an application of such a production rule."

    Except that practically all practically used programming languages are not context-free.

    • xg15 8 hours ago
      Is that so?

      Programming languages usually have a number of "higher-level semantic layers" on top of the syntax tree that impose further constraints - e.g. that identifiers must have consistency between usage and definition, that all expressions are valid according to the type system, etc.

      But all that stuff happens after an AST is created and so doesn't really affect the basic grammar.

      I think if you ignore those higher-level constraints, a number of languages have valid context-free grammars.

    • fi-le 9 hours ago
      Thank you, corrected!
  • Animats 9 hours ago
    > Let's grant our GPT an infinite context-window.

    No. If you have infinite memory, you get into decidability issues, which the author does. Lots of people have been down this rathole. Penrose is probably the most prominent. It doesn't help.

    The real issues are combinatoric. How hard is this? Is there an upper bound? Is this one of those many problems where the worst case limit is exponential but the average limit is much lower? Or where a "good enough" solution is far cheaper computationally than a perfect one. The Traveling Salesman problem is the classic example, but it comes up in other optimization contexts, such as the simplex method of linear programming.

  • babypistol 9 hours ago
    What is up with this argument that LLMs cannot be Turing complete?

    > There is a simple argument showing that GPTs, even with an infinite context window, cannot be Turing complete. Since the vocabulary size is finite, the rows of the embedding matrix are bounded, meaning each embedding is in a compact subset of Rn. By Tychonoff's Theorem, the product of the compact spaces the input embeddings live in is also compact. Because the transformer architecture does a bounded number of continuous operations, the output probabilities are bounded from above and below. In particular, the probability of emitting the end-of-sequence token (the special token that halts the completion) is bounded from below by some constant, so the probability of stopping in finite time is 1.

    Even if the conclusion might be correct, the argument is pure nonsense.

    First of all, why do we need to invoke compactness and Tychonoff's Theorem? Any serious discussion of computability must avoid discussing real numbers. And furthermore, here it is completely unnecessary. All sets are actually finite, so any mention of compactness is just fancy wording for finite in this case.

    Second, a probability argument doesn't prove anything about Turing completeness. Realistically we must consider LLMs as deterministic machines. The fact that we like to interpret the output of an ML model probabilistically doesn't have anything to do with what actually happens during execution of an LLM (again, it is deterministic).

    Third, even if we accept the probability argument, we can easily resolve it by just removing the end-of-sequence token and guarantee that the process never stops.

    I think the real argument against Turing completeness of GPTs (even with infinite context) would be more along the lines of the fact that it must operate with finite number representations. And a consequence of this would be that it maps an infinite context into a finite set of states, which must result in periodic output at some point.
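    That finite-state point can be sketched (my own illustration): any deterministic update rule on a finite state set must revisit some state within |S| steps, and from there on it repeats forever. This pigeonhole argument is what forces the eventually periodic output.

```python
def find_cycle(step, start):
    """Iterate a deterministic map until some state repeats; return
    (steps before the cycle starts, cycle length)."""
    seen = {}
    state, t = start, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return seen[state], t - seen[state]

# A toy 16-state "machine": x -> (5*x + 3) mod 16. With only 16 states,
# a repeat is forced within 16 steps, and the trajectory is periodic after it.
mu, lam = find_cycle(lambda x: (5 * x + 3) % 16, 0)
print(mu, lam)  # 0 16: this particular map cycles through all 16 states
```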

    Finally, as a meta-point, why do so many people enjoy making such quasi-mathematical arguments? They often invoke theorems which obviously can't apply, as they'd see if they just took a bit of time to clearly define the question. To me it seems like ML people know just enough math to notice some superficial correspondence, but lack the rigor to truly evaluate it.

  • Maxatar 11 hours ago
    So unless I'm mistaken, the TL;DR is that GPTs inherently cannot be Turing complete because they always terminate, i.e. there is always a non-zero probability of the end-of-sequence token being generated.

    Seems like a fair argument to raise, although not too insightful.

    As a nitpick, it really is not the case that most programming languages are context-free, but I can sympathize with what the author is trying to get at. Most programming languages have a context-free superset that is useful to use as one of many stages in the parsing pipeline, before type checking and name resolution etc... but apart from perhaps some dynamically typed programming languages, the vast majority of programming languages are context-sensitive, not context-free.

    • onraglanroad 11 hours ago
      > GPTs inherently can not be Turing complete because they always terminate

      I'm not sure that was intended entirely seriously. After all, humans always terminate too...

      • bigfishrunning 10 hours ago
        To follow on logically, maybe humans aren't Turing complete? We are not equivalent to Turing machines...
        • FeepingCreature 10 hours ago
          No machine instantiated in the physical universe is Turing complete in this sense.
        • prolyxis 10 hours ago
          Except that as far as I understand, one of the inspirations for the Turing machine is to explain precisely the computations a human computer could perform with (potentially a lot of) pen and paper.
          • nkrisc 9 hours ago
            Theoretically, not in any sense of reality.
          • stormking 10 hours ago
            And an unlimited life span.
        • thaumasiotes 9 hours ago
          You can do everything a Turing machine can do; obviously, you cannot fail to be Turing complete.
        • wat10000 10 hours ago
          We can simulate a Turing machine, given storage. The infinite storage and infinite time is always a sticking point when comparing any real physical system to a theoretical Turing machine, so we tend to ignore those bits.
          • tshaddox 9 hours ago
            "Unbounded" is a better term to use than "infinite."
    • arjvik 11 hours ago
      As a quick hack, is this flaw fixed by using top-p or top-k or min-p or whatever the current hottest logit sampling algorithm is?
      • FeepingCreature 10 hours ago
        The "flaw" is also fixed by simply not recognizing the end-of-sampling token. Even the limited size of the context window can be worked around with tool calls.
    • fi-le 9 hours ago
      Thank you for the nitpick, corrected!
  • enriquepablo 10 hours ago
    The Chomsky hierarchy is a classification of grammars. So it's about Syntax.

    GPT is purely about semantics, about truth. To the extent that I think it can be said to be fundamentally informal in its dedication to truth, since it can easily be prompted to produce syntactically incorrect output - wrt the syntax of whatever language whose semantics determine the truth GPT is asked to reflect.

    So I think this is a categorically incorrect question.

    I mean, what is the fundamental syntax of the output of an LLM? It would have to be that any sequence of utf8 chars is valid.

    • suddenlybananas 9 hours ago
      LLMs have absolutely nothing to do with truth. If anything the analogy is closer to syntax as LLMs are pure form, just language in absence of any context or reference.
      • enriquepablo 9 hours ago
        LLMs tell you one thing and not its negation, this choice is semantic. "just language in absence of any context or reference" is a set that includes every valid text. The usefulness of LLMs is their ability to mostly respect whatever semantics you establish in the prompt in their choice of output.
        • suddenlybananas 6 hours ago
          LLMs will definitely tell you a thing and its negation if you prompt them differently.
          • ursAxZA 6 hours ago
            Isn’t it just symbols in, symbols out? We have hardware that can process long symbol sequences very fast — that’s basically it.
  • Vosporos 11 hours ago
    On Little Saint James, with Chomsky himself.
  • wolfgangbabad 9 hours ago
    Next to Epstein.
  • ozim 10 hours ago
    [flagged]
    • varjag 10 hours ago
      While I share the opinion that Chomsky held back the field of linguistics for decades, the belief that technical or scientific excellence has to beget virtues is a fallacy.
    • sph 10 hours ago
      Who’s we? The social media hivemind that determines what to feel about things?

      Be an adult, think for yourself. The most despicable person to have lived in the modern age made decent paintings in his younger years.

    • WhyOhWhyQ 10 hours ago
      What I gathered is that Chomsky was genuinely friends with Epstein, even after the sex crimes had been revealed (which is disturbing), but that there's no evidence Chomsky himself did anything criminal or immoral beyond being friends with a monster. The charitable interpretation is that Chomsky is just as easily conned by fun-seeming extroverted people as anyone else. But maybe there's more to it.
    • onraglanroad 10 hours ago
      "We"? Who's "we"?

      You're nothing to do with me

      The me that I am is not you

      Not a "we". Not an "us"

      There's no party of blue, or red or white

      Not for me, at least