Turing Machines

(samwho.dev)

70 points | by jakelazaroff 1 day ago

9 comments

  • fen11 3 hours ago
    > In 1936 both Alan Turing and Alonzo Church independently reached the conclusion, using different methods, that the answer is "no."

    I think claiming these efforts were independent is misleading. Church was Turing’s PhD advisor in 1936, and one of the appendices of Turing’s thesis retroactively justifies the lambda calculus definition of computing, by proving it equivalent to his Turing machine definition. That sounds less like competing definitions of computability to me, and instead a story of collaboration to produce a philosophical justification (Turing machines) for Church’s pure mathematical theory (lambda calculus). Which is not to disparage Turing either: creating this philosophical justification was an impressive and fundamentally important achievement. I think Church sometimes gets unfairly maligned in these discussions as an out-of-touch purist who didn’t care to come up with a believable definition, but clearly he cared enough to support his student working on the problem!

    Is there some evidence for the independence or competitiveness of their work that I’ve missed?

    Anyway, apart from this nitpick about the introduction, I appreciate seeing this foundational stuff explained clearly!

    • jarpschop 1 hour ago
      I don't think Turing knew about the existence of the lambda calculus when he wrote the paper. It was only later that he decided to pursue his PhD under Church and moved to Princeton.
  • samwho 1 day ago
    I’m the author of this post! <3

    Happy to answer any questions you might have, and love to hear feedback good and bad.

  • sohkamyung 7 hours ago
    I hope this LEGO Ideas Turing Machine gets turned into a released LEGO product [1]. It's not the first LEGO Turing Machine, but it is noteworthy because it is fully mechanical.

    [1] https://ideas.lego.com/projects/10a3239f-4562-4d23-ba8e-f4fc...

    • samwho 7 hours ago
      I love it. I love it so much.
  • bob1029 9 hours ago
    I've been playing around with interpreted machines quite a bit for genetic programming experiments. My current favorite minimal machine has 4 instructions:

    0b00: Increment memory pointer (wraps)

    0b01: Increment memory value (wraps)

    0b10: Jump marker - Jump to the next marker if the memory value is 1, to the prior marker if 2 (wraps)

    0b11: Output current memory value

    It has 2 tapes: instruction & memory. Memory is an array of bytes.

    This cannot handle any kind of input, but is useful for generating programs that can. Being able to pack 32 instructions into each interpreter machine word opens up some interesting possibilities for how we search the space.
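    A minimal Python sketch of such an interpreter, under my reading of the jump-marker semantics above (a marker is a no-op unless the current memory value is exactly 1, jump forward, or 2, jump back; all names here are illustrative):

```python
def run(program, mem_size=8, max_steps=200):
    """Interpret the 4-instruction machine: a list of 2-bit opcodes."""
    mem = [0] * mem_size
    ptr = pc = steps = 0
    out = []
    markers = [i for i, op in enumerate(program) if op == 0b10]
    while pc < len(program) and steps < max_steps:
        steps += 1
        op = program[pc]
        if op == 0b00:
            ptr = (ptr + 1) % mem_size          # increment memory pointer (wraps)
        elif op == 0b01:
            mem[ptr] = (mem[ptr] + 1) % 256     # increment byte value (wraps)
        elif op == 0b10:
            if mem[ptr] == 1:                   # jump to next marker (wraps)
                later = [m for m in markers if m > pc]
                pc = (later or markers)[0]
            elif mem[ptr] == 2:                 # jump to prior marker (wraps)
                earlier = [m for m in markers if m < pc]
                pc = (earlier or markers)[-1]
        elif op == 0b11:
            out.append(mem[ptr])                # output current memory value
        pc += 1
    return out
```

    For example, `run([0b01, 0b01, 0b11])` increments the first cell twice and outputs it. The `max_steps` cap is a practical addition for genetic-programming use, where random programs often loop forever.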

  • VitoVan 4 hours ago
    Thank you, finally I got a chance to understand this concept, nice writing, nice sketches.
  • layer8 8 hours ago
    > By the end of this post, you will know:

    > • What can and cannot be computed.

    I don’t think it delivered on the “can” part. (And I don’t think we really know that well.)

    • voidhorse 3 hours ago
      Unless I am missing something, as far as the work of Turing and Church is concerned, their answers are actually quite clear on this question of "can", namely the class of partial recursive functions. This is precisely what a Turing machine can compute. It is also possible to build up to these functions in a concrete way using the simpler class of primitive recursive functions. Rogers' book on recursive functions and computability is a great reference on this topic.
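      To make primitive recursion concrete, here is a small sketch in Python: addition defined from zero and successor, and multiplication from addition. The schema (not these function names) is the standard one; this is an illustration, not anything from Rogers.

```python
def succ(n):
    return n + 1

def add(m, n):
    # Primitive recursion on n:
    #   add(m, 0)     = m                 (base case)
    #   add(m, n + 1) = succ(add(m, n))   (recursive step)
    if n == 0:
        return m
    return succ(add(m, n - 1))

def mul(m, n):
    # Built the same way, one level up:
    #   mul(m, 0)     = 0
    #   mul(m, n + 1) = add(mul(m, n), m)
    if n == 0:
        return 0
    return add(mul(m, n - 1), m)
```

      Each new function is defined only by a base case and a step that calls the previous level; stacking such definitions yields the whole primitive recursive class, and adding unbounded search (minimization) yields the partial recursive functions.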
    • samwho 8 hours ago
      When I was working on this post I found the way that Turing defined “computable numbers” a bit unsatisfying as well.

      What would you suggest I do to deliver on this?

      • layer8 8 hours ago
        I would suggest to not promise it. ;)

        Maybe rather:

        • That some truths cannot be computed.

        • samwho 8 hours ago
          I’m not sure I understand. I do explain what can be computed, don’t I?

          > Something is said to be "computable" if there exists an algorithm that can get from the given input to the expected output. For example, adding together 2 integers is computable.

          I could probably have dug into some of the restrictions, like how it has to be a finite number of steps.
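          As an illustration of that definition, a toy Turing machine that adds two unary numbers (e.g. "111+11" becomes "11111") fits in a few lines; the state names and transition table here are my own sketch, not anything from the post:

```python
# Transition table: (state, symbol) -> (new state, symbol to write, head move).
# The machine walks right, turns '+' into a '1', then erases one trailing '1'.
delta = {
    ("scan", "1"): ("scan", "1", +1),   # walk right over the digits
    ("scan", "+"): ("scan", "1", +1),   # merge the two numbers
    ("scan", "_"): ("back", "_", -1),   # hit the blank at the end
    ("back", "1"): ("done", "_", 0),    # erase the one extra '1'
}

def run_tm(tape_str):
    tape = list(tape_str) + ["_"]       # '_' is the blank symbol
    state, head = "scan", 0
    while state != "done":
        state, write, move = delta[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip("_")
```

          The point is that the whole algorithm is a finite table, and it halts after a finite number of steps on every valid input, which is exactly the restriction mentioned above.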

          • layer8 7 hours ago
            This defines the term “computable”, but it doesn’t give you a sense of what things can be computed. Defining a term is entirely different from the above promise.

            At the level you’re describing Turing machines, it’s also not clear that readers would have a precise notion of what an algorithm is. At no point (unless I missed it) do you explain that any algorithm is supposed to be implementable as a Turing machine, or the assumption of what is otherwise known as the Church–Turing thesis.

            • samwho 7 hours ago
              That’s true, there’s no explicit definition of what an algorithm is. I may revisit this. Thank you for the feedback.
  • mrymd 1 day ago
    dope
    • samwho 9 hours ago
      Thank you <3
  • zokier 9 hours ago
    I'd argue the key difference between Turing machines and real-world computers is that computers do IO, respond to events, have real-time characteristics, and are interactive. All of these are key features in making computers useful, but are poorly (if at all) captured by Turing machine model. It's one thing to compute something, and another thing to play Doom.
    • bjornsing 2 hours ago
      The key difference between Turing machines and real-world computers is that Turing machines have infinite memory. A real-world computer is thus not really a Turing machine, but rather a finite state machine.
    • layer8 8 hours ago
      You can have your I/O and Doom graphics on the Turing tape no problem.

            What’s different is that accessing an arbitrary position on the tape isn’t O(1), as is normally assumed for memory. On the other hand, memory (and address space) on real-world computers is finite, so you can always find a large enough constant to make the Turing-machine equivalent O(1) again.

    • mikewarot 8 hours ago
      There's an almost Turing equivalent mechanism that has no program counter and runs everything in parallel. I call it a BitGrid (it's my hobby horse). Because you can operate on all of the bits at the same time, you can get deterministic real time performance.

      It's a systolic array of look-up tables, 4 bits in, 4 bits out, with a latch on each output.
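      One synchronous tick of such a grid might be sketched like this in Python; the wiring, bit ordering, and edge wrapping are my own assumptions for illustration, not mikewarot's actual BitGrid spec:

```python
def step(grid, luts):
    """One tick: each cell reads the bit its four neighbours latched
    toward it, feeds those 4 bits into its LUT, and latches 4 new
    output bits [N, E, S, W]. Edges wrap around (toroidal grid)."""
    rows, cols = len(grid), len(grid[0])
    new = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Gather the bit each neighbour drives toward (r, c).
            n = grid[(r - 1) % rows][c][2]  # cell above emits south
            e = grid[r][(c + 1) % cols][3]  # cell right emits west
            s = grid[(r + 1) % rows][c][0]  # cell below emits north
            w = grid[r][(c - 1) % cols][1]  # cell left emits east
            addr = (n << 3) | (e << 2) | (s << 1) | w
            val = luts[r][c][addr]          # 16-entry LUT, 4-bit output
            new[r][c] = [(val >> i) & 1 for i in (3, 2, 1, 0)]
    return new
```

      Because every cell is updated from last tick's latched values, the whole grid advances in lock-step, which is where the deterministic real-time behaviour comes from.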

    • mithametacs 7 hours ago
      I/O is just built on top of the tape. Real computers similarly map regions of memory to I/O.

      This is like saying a CPU isn't a computer. It's sort of right but sort of wrong, you know?