Turing Machines

(samwho.dev)

115 points | by jakelazaroff 205 days ago

12 comments

  • bubblyworld 204 days ago
    My favourite trivia about "On Computable Numbers" is that Alan Turing got the definition of computable reals wrong! The way he defines them - namely that a computable real is a Turing machine that generates the sequence of digits of the real - has severe issues, because addition and other basic operations are incomputable due to subtle diagonalisation arguments (see https://jdh.hamkins.org/alan-turing-on-computable-numbers/ for instance).

    It's interesting that Turing didn't realise this, as the whole paper revolves around diagonalisation arguments in a few places.

    The "modern" take is to define a computable real to be a program that takes an epsilon as input and returns a rational number within that epsilon from the real being represented. Or something similar - in this case it is very simple to define addition, for instance, as you have control over these bounds.

    One more fact - equality is incomputable for both of these representations! There's no program that can uniformly decide whether two computable reals are the same, no matter how you cook up the definition. This one maybe isn't too surprising - in some sense no matter how many digits or bounds you check between two reals you can never be sure there isn't a smaller gap between them you haven't gotten to yet.
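
    To make the modern definition concrete, here's a rough Python sketch of my own (not from the paper or the linked post) of why addition becomes easy once reals are given as approximation programs:

      from fractions import Fraction

      def add(x, y):
          # x and y are "approximation-reals": callables that, given a
          # rational eps > 0, return a rational within eps of the real
          # they represent. Their sum is computable: just ask each
          # input for half of the error budget.
          return lambda eps: x(eps / 2) + y(eps / 2)

      # Example: 1/3 and 2/3, represented exactly here for simplicity
      # (any program meeting the contract would do).
      one_third = lambda eps: Fraction(1, 3)
      two_thirds = lambda eps: Fraction(2, 3)

      s = add(one_third, two_thirds)
      print(s(Fraction(1, 100)))  # 1 - within 1/100 of the true sum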

    • a57721 204 days ago
      Turing's definition of computable reals can't be wrong, because listing decimal digits is equivalent to giving rational approximations. The problem here is a confusion between real numbers and Turing machines that compute them.
      • bubblyworld 204 days ago
        They are logically equivalent definitions (they determine the same set of computable reals) but they are not equivalent from the perspective of computability. As I said, with the first definition you cannot compute addition, whereas with the second you can. I highly recommend reading the blog post I linked which proves this - the issues involve self-reference and are quite subtle (like much of computability theory!).

        This has nothing to do with such a "confusion", by the way, which is obvious to anyone writing about this stuff (including me). Rather it's the difference between something being logically definable and computable, which Turing himself writes about in his paper.

        Wrong is a judgement call, of course, but given that we're talking about computability here I think it's fair to call Turing's definition flawed. There's a reason nobody uses it in computable analysis.

        • a57721 204 days ago
          I skimmed through the page that you linked and I am familiar with computability. When defining "computable reals", Turing defines a subset of real numbers; the only flaw is that he doesn't prove that this subset is closed under addition and multiplication. A proof appears in [1], where Rice indeed gives more convenient definitions and also cites Turing's paper for an "intuitive definition". Saying that Turing was wrong is a big stretch, based on a deliberately introduced confusion.

          [1] H. G. Rice, Recursive real numbers, Proc. Amer. Math. Soc. 5 (1954) https://doi.org/10.1090/S0002-9939-1954-0063328-5

          • bubblyworld 203 days ago
            You are still misunderstanding me. The set of computable reals given by Turing's definition is exactly the same as the set of computable reals given by the modern definition.

            However, under Turing's definition there is no algorithm that can uniformly compute addition or multiplication! This has nothing to do with whether the set of computable reals is closed under those operations (although it is, of course - this is an easy exercise, no need to cite a paper).

            I'll be more formal. Let us say a Turing machine A is a "Turing-real for x" if A outputs the successive digits of x when run. Then consider the following problem: given two Turing-reals A for a and B for b as input, output a Turing-real for a+b.

            It turns out this problem is incomputable!

            On the other hand the same problem but for the modern "approximation-reals" is computable, a marked improvement.

            The lack of computable addition and multiplication is the flaw in Turing's definition. This is well known stuff - there's even a discussion of it on the Wikipedia page for computable reals. The fact that you keep bringing up orthogonal issues, like closure of the computable reals and the distinction between a Turing machine coding for a real and the real itself, makes me think that you might be missing the crux of the matter here.
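
            For intuition, here's a toy sketch of mine (the uniform incomputability proof in the linked post needs a genuine self-reference argument) showing how digit streams stall on addition: you can never commit to even the integer part of 1/3 + 2/3 after finitely many digits.

              from fractions import Fraction
              from itertools import repeat

              def integer_part_of_sum(a_digits, b_digits, max_digits=50):
                  # a_digits/b_digits yield decimal digits after "0.".
                  # After k digits each real is only pinned down to an
                  # interval of width 10**-k, so the sum lies in
                  # [lo, lo + 2*10**-k]. We can only commit once that
                  # interval falls entirely below or above 1.
                  lo = Fraction(0)
                  for k in range(1, max_digits + 1):
                      lo += Fraction(next(a_digits) + next(b_digits), 10**k)
                      if lo + Fraction(2, 10**k) < 1:
                          return 0
                      if lo >= 1:
                          return 1
                  return None  # the interval straddles 1, forever

              print(integer_part_of_sum(repeat(3), repeat(6)))  # None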

            • a57721 203 days ago
              I have understood you perfectly well from the beginning, and I still think that the blog post you mention is nitpicking over a minor detail. I mentioned the paper by Rice (which is indeed elementary) as an example of someone citing Turing without making a big deal of the difference in the definitions. Let's agree that Turing's definition is "inconvenient", but not really "wrong".
              • bubblyworld 203 days ago
                Okay, I see. I guess from my point of view, it's not a minor detail. It is simply an unworkable definition if you want to study computable reals in any meaningful way (i.e. beyond the absolute basics like defining the set of computable reals).

                By the way I read through that paper you linked and I think it's possible Rice wasn't aware of the issue here either. He references Turing's paper and notes that Turing's definition is equivalent to his, but what I've been getting at here is that this equivalence itself is not computable. There's no effective procedure to convert between Turing-reals and Rice-reals, even though you can prove that they define equivalent subsets of R.

                Turing himself proves this in "On Computable Numbers, with an Application to the Entscheidungsproblem: A Correction", so it seems he came to realise his error.

  • fen11 204 days ago
    > In 1936 both Alan Turing and Alonzo Church independently reached the conclusion, using different methods, that the answer is "no."

    I think claiming these efforts were independent is misleading. Church was Turing’s PhD advisor in 1936, and one of the appendices of Turing’s thesis retroactively justifies the lambda calculus definition of computing by proving it equivalent to his Turing machine definition. That sounds to me less like competing definitions of computability and more like a story of collaboration to produce a philosophical justification (Turing machines) for Church’s pure mathematical theory (lambda calculus). Which is not to disparage Turing either: creating this philosophical justification was an impressive and fundamentally important achievement. I think Church sometimes gets unfairly maligned in these discussions as an out-of-touch purist who didn’t care to come up with a believable definition, but clearly he cared enough to support his student working on the problem!

    Is there some evidence for the independence or competitiveness of their work that I’ve missed?

    Anyway, apart from this nitpick about the introduction, I appreciate seeing this foundational stuff explained clearly!

    • jarpschop 204 days ago
      I don't think Turing knew about the existence of the lambda calculus when he wrote the paper. It was only later that he decided to do his PhD under Church and move to Princeton.
      • samwho 204 days ago
        This is correct, they were independent works. It’s spoken about in “The Annotated Turing” by Charles Petzold.
  • samwho 205 days ago
    I’m the author of this post! <3

    Happy to answer any questions you might have, and love to hear feedback good and bad.

  • sohkamyung 204 days ago
    I hope this LEGO Ideas Turing Machine gets turned into a released LEGO product [1]. It's not the first LEGO Turing Machine, but it is noteworthy because it is fully mechanical.

    [1] https://ideas.lego.com/projects/10a3239f-4562-4d23-ba8e-f4fc...

    • samwho 204 days ago
      I love it. I love it so much.
  • bob1029 204 days ago
    I've been playing around with interpreted machines quite a bit for genetic programming experiments. My current favorite minimal machine has 4 instructions:

    0b00: Increment memory pointer (wraps)

    0b01: Increment memory value (wraps)

    0b10: Jump marker - jump to the next marker if the memory value is 1, to the prior one if it is 2 (wraps)

    0b11: Output current memory value

    It has 2 tapes, instruction & memory. Memory is an array of bytes.

    This cannot handle any kind of input, but is useful for generating programs that can. Being able to pack 32 instructions into each interpreter machine word opens up some interesting possibilities for how we search the space.
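
    For anyone who wants to play with it, here's a quick Python sketch of an interpreter; the marker semantics are my best guess at the description above (a jump lands on the target marker, wrapping around the instruction tape):

      def run(program, mem_size=256, max_steps=10_000):
          mem = [0] * mem_size
          markers = [i for i, op in enumerate(program) if op == 0b10]
          ptr = pc = 0
          out = []
          for _ in range(max_steps):
              if pc >= len(program):
                  break
              op = program[pc]
              if op == 0b00:                      # increment pointer
                  ptr = (ptr + 1) % mem_size
              elif op == 0b01:                    # increment value
                  mem[ptr] = (mem[ptr] + 1) % 256
              elif op == 0b10 and markers:        # jump marker
                  if mem[ptr] == 1:               # forward to next marker
                      later = [m for m in markers if m > pc]
                      pc = later[0] if later else markers[0]
                  elif mem[ptr] == 2:             # back to prior marker
                      earlier = [m for m in markers if m < pc]
                      pc = earlier[-1] if earlier else markers[-1]
              elif op == 0b11:                    # output
                  out.append(mem[ptr])
              pc += 1
          return out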

  • thih9 204 days ago
    > You're telling me that everything can be computed with just these 5 instructions? Word processors, YouTube, video games, all of it?

    Note the “computed” - this is about computing the results; there’s no mention of IO or UI.

    E.g. a Turing-complete language might still not allow you to build a word processor or YouTube; you could compute the color of every pixel, but you’d still need a way to display it.

    • Codrus 204 days ago
      I/O is via tape. UI is reading the tape after the computer has written to it.
  • qbane 204 days ago
    An essential point that the article did not address is that Turing machines can be efficiently encoded as data and simulated by another Turing machine. This is the real power that makes its own halting problem unsolvable.
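
    A toy illustration: because a machine is just a transition table, one program can run any of them. A minimal Python sketch (the names and encoding are my own, not from the article):

      def simulate(delta, tape, state="start", blank="_", max_steps=10_000):
          # delta maps (state, symbol) -> (new_state, write, "L"/"R").
          # The simulated machine is plain data, so this function acts
          # as a universal machine - up to the step bound, which we
          # need precisely because halting is undecidable.
          cells = dict(enumerate(tape))
          pos = 0
          for _ in range(max_steps):
              if state == "halt":
                  return "".join(cells[i] for i in sorted(cells))
              state, cells[pos], move = delta[(state, cells.get(pos, blank))]
              pos += 1 if move == "R" else -1
          raise RuntimeError("step budget exhausted")

      # A machine that flips every bit, encoded as data:
      flipper = {
          ("start", "0"): ("start", "1", "R"),
          ("start", "1"): ("start", "0", "R"),
          ("start", "_"): ("halt", "_", "R"),
      }
      print(simulate(flipper, "1011"))  # 0100_
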
  • layer8 204 days ago
    > By the end of this post, you will know:

    > • What can and cannot be computed.

    I don’t think it delivered on the “can” part. (And I don’t think we really know that well.)

    • voidhorse 204 days ago
      Unless I am missing something, as far as the work of Turing and Church is concerned, their answers are actually quite clear on this question of "can": namely, the class of partial recursive functions. This is precisely what a Turing machine can compute. It is also possible to build up to these functions in a concrete way using the simpler class of primitive recursive functions. Rogers' book on recursive functions and computability is a great reference on this topic.
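
      For a flavour of that build-up, a tiny sketch (mine, not from Rogers): addition defined by primitive recursion from zero and successor alone.

        def S(n):
            # Successor: the only arithmetic primitive allowed.
            return n + 1

        def add(x, y):
            # Primitive recursion on y:
            #   add(x, 0)    = x
            #   add(x, S(y)) = S(add(x, y))
            return x if y == 0 else S(add(x, y - 1))

        print(add(3, 4))  # 7
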
    • samwho 204 days ago
      When I was working on this post I found the way that Turing defined “computable numbers” a bit unsatisfying as well.

      What would you suggest I do to deliver on this?

      • layer8 204 days ago
        I would suggest to not promise it. ;)

        Maybe rather:

        • That some truths cannot be computed.

        • samwho 204 days ago
          I’m not sure I understand. I do explain what can be computed, don’t I?

          > Something is said to be "computable" if there exists an algorithm that can get from the given input to the expected output. For example, adding together 2 integers is computable.

          I could probably have dug into some of the restrictions, like how it has to complete in a finite number of steps.

          • layer8 204 days ago
            This defines the term “computable”, but it doesn’t give you a sense of what things can be computed. Defining a term is entirely different from the above promise.

            At the level you’re describing Turing machines, it’s also not clear that readers would have a precise notion of what an algorithm is. At no point (unless I missed it) do you explain that any algorithm is supposed to be implementable as a Turing machine, or the assumption of what is otherwise known as the Church–Turing thesis.

            • samwho 204 days ago
              That’s true, there’s no explicit definition of what an algorithm is. I may revisit this. Thank you for the feedback.
  • VitoVan 204 days ago
    Thank you - I finally got a chance to understand this concept. Nice writing, nice sketches.
  • mrymd 205 days ago
    dope
    • samwho 204 days ago
      Thank you <3
  • zokier 204 days ago
    I'd argue the key difference between Turing machines and real-world computers is that computers do IO, respond to events, have real-time characteristics, and are interactive. All of these are key features in making computers useful, but they are poorly (if at all) captured by the Turing machine model. It's one thing to compute something, and another thing to play Doom.
    • bjornsing 204 days ago
      The key difference between Turing machines and real-world computers is that Turing machines have infinite memory. A real-world computer is thus not really a Turing machine, but rather a finite state machine.
    • layer8 204 days ago
      You can have your I/O and Doom graphics on the Turing tape no problem.

      What’s different is that accessing an arbitrary position on the tape isn’t O(1), as is normally assumed for memory. On the other hand, memory (and address space) on real-world computers is finite, so you can always find a large-enough constant to make the Turing-machine equivalent O(1) again.

    • mikewarot 204 days ago
      There's an almost-Turing-equivalent mechanism that has no program counter and runs everything in parallel. I call it a BitGrid (it's my hobby horse). Because you can operate on all of the bits at the same time, you can get deterministic real-time performance.

      It's a systolic array of Look Up Tables, 4 bits in, 4 bits out, with a latch on each output.

    • mithametacs 204 days ago
      I/O is just built on top of the tape. Real computers reserve a region of memory and map it to I/O.

      This is like saying a CPU isn't a computer. It's sort of right but sort of wrong, you know?