> In 1936 both Alan Turing and Alonzo Church independently reached the conclusion, using different methods, that the answer is "no."
I think claiming these efforts were independent is misleading. Church was Turing’s PhD advisor in 1936, and one of the appendices of Turing’s thesis retroactively justifies the lambda calculus definition of computing by proving it equivalent to his Turing machine definition. That sounds less like competing definitions of computability to me, and more like a story of collaboration to produce a philosophical justification (Turing machines) for Church’s pure mathematical theory (lambda calculus). Which is not to disparage Turing either: creating this philosophical justification was an impressive and fundamentally important achievement. I think Church sometimes gets unfairly maligned in these discussions as an out-of-touch purist who didn’t care to come up with a believable definition, but clearly he cared enough to support his student working on the problem!
Is there some evidence for the independence or competitiveness of their work that I’ve missed?
Anyway, apart from this nitpick about the introduction, I appreciate seeing this foundational stuff explained clearly!
I don't think Turing knew about the existence of the lambda calculus when he wrote the paper. It was only later that he decided to pursue his PhD under Church and move to Princeton.
I hope this LEGO Ideas Turing Machine gets turned into a released LEGO product [1]. It's not the first LEGO Turing Machine, but it is noteworthy because it is fully mechanical.
[1] https://ideas.lego.com/projects/10a3239f-4562-4d23-ba8e-f4fc...
I've been playing around with interpreted machines quite a bit for genetic programming experiments. My current favorite minimal machine has 4 instructions:
0b00: Increment memory pointer (wraps)
0b01: Increment memory value (wraps)
0b10: Jump marker - jump to the next marker if the memory value is 1, to the prior marker if it is 2 (wraps)
0b11: Output current memory value
It has 2 tapes, instruction & memory. Memory is an array of bytes.
This cannot handle any kind of input, but is useful for generating programs that can. Being able to pack 32 instructions into each interpreter machine word opens up some interesting possibilities for how we search the space.
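Here's a rough Python sketch of an interpreter for it, mostly to make the encoding concrete. The jump-marker details (wrap-around marker search, and values other than 1 or 2 falling through) are my own guesses at the semantics, not a definitive spec:

    # A rough sketch of the 4-instruction machine above. Jump semantics
    # (wrap-around marker search, other values fall through) are assumptions.
    def run(program, mem_size=64, max_steps=10000):
        mem = [0] * mem_size                      # memory tape, one byte per cell
        markers = [i for i, op in enumerate(program) if op == 0b10]
        mp, ip, out = 0, 0, []                    # memory pointer, instruction pointer, output
        for _ in range(max_steps):
            if ip >= len(program):
                break
            op = program[ip]
            if op == 0b00:                        # increment memory pointer (wraps)
                mp = (mp + 1) % mem_size
            elif op == 0b01:                      # increment memory value (wraps at 256)
                mem[mp] = (mem[mp] + 1) % 256
            elif op == 0b10 and markers:          # jump marker
                if mem[mp] == 1:                  # jump to the next marker (wrapping)
                    nxt = [m for m in markers if m > ip]
                    ip = nxt[0] if nxt else markers[0]
                elif mem[mp] == 2:                # jump to the prior marker (wrapping)
                    prv = [m for m in markers if m < ip]
                    ip = prv[-1] if prv else markers[-1]
            elif op == 0b11:                      # output current memory value
                out.append(mem[mp])
            ip += 1
        return out

    print(run([0b01, 0b01, 0b11]))                # -> [2]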
Unless I am missing something, as far as the work of Turing and Church is concerned, their answer is actually quite clear on this question of "can", namely the class of partial recursive functions. This is precisely what a Turing machine can compute. It is also possible to build up to these functions in a concrete way using the simpler class of primitive recursive functions. Rogers' book on recursive functions and computability is a great reference on this topic.
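To make the "building up" concrete, here's a tiny illustrative sketch (mine, not from Rogers or the article) of addition and multiplication defined using only zero, successor, and the primitive recursion schema:

    # Illustrative only: addition and multiplication built from zero,
    # successor, and primitive recursion.
    def succ(n):                       # S(n) = n + 1
        return n + 1

    def add(x, y):                     # add(x, 0)   = x
        if y == 0:                     # add(x, y+1) = S(add(x, y))
            return x
        return succ(add(x, y - 1))

    def mul(x, y):                     # mul(x, 0)   = 0
        if y == 0:                     # mul(x, y+1) = add(mul(x, y), x)
            return 0
        return add(mul(x, y - 1), x)

    print(add(2, 3), mul(2, 3))        # -> 5 6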
I’m not sure I understand. I do explain what can be computed, don’t I?
> Something is said to be "computable" if there exists an algorithm that can get from the given input to the expected output. For example, adding together 2 integers is computable.
I could probably have dug into some of the restrictions, like how it has to be a finite number of steps.
This defines the term “computable”, but it doesn’t give you a sense of what things can be computed. Defining a term is entirely different from the above promise.
At the level you’re describing Turing machines, it’s also not clear that readers would have a precise notion of what an algorithm is. At no point (unless I missed it) do you explain that any algorithm is supposed to be implementable as a Turing machine, or the assumption of what is otherwise known as the Church–Turing thesis.
I'd argue the key difference between Turing machines and real-world computers is that computers do IO, respond to events, have real-time characteristics, and are interactive. All of these are key features in making computers useful, but are poorly (if at all) captured by the Turing machine model. It's one thing to compute something, and another thing to play Doom.
The key difference between Turing machines and real-world computers is that Turing machines have infinite memory. A real-world computer is thus not really a Turing machine, but rather a finite state machine.
You can have your I/O and Doom graphics on the Turing tape no problem.
What’s different is that accessing an arbitrary position on the tape isn’t O(1), as is normally assumed for memory. On the other hand, memory (and address space) on real-world computers is finite, so you can always find a large-enough constant to make the Turing equivalent O(1) again.
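A toy illustration of that point (my own, not from the article): reading an arbitrary cell on a tape means walking the head, so the cost grows with the distance, while an indexed read stands in for the usual O(1) RAM assumption:

    # Toy illustration: random access simulated by walking a tape head
    # costs O(distance); list indexing stands in for O(1) RAM access.
    tape = [0] * 1000
    head = 0

    def tape_read(addr):
        global head
        moves = 0
        while head != addr:            # step the head one cell at a time
            head += 1 if addr > head else -1
            moves += 1
        return tape[head], moves

    value, moves = tape_read(750)
    print(moves)                       # 750 head moves starting from cell 0
    print(tape[750])                   # the "RAM" version: a single indexed read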
There's an almost Turing equivalent mechanism that has no program counter and runs everything in parallel. I call it a BitGrid (it's my hobby horse). Because you can operate on all of the bits at the same time, you can get deterministic real time performance.
It's a systolic array of Look Up Tables, 4 bits in, 4 bits out, with a latch on each output.
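For anyone curious what one clock tick might look like, here's a very rough sketch. The neighbour wiring (one bit from each of N, E, S, W) and the wrap-around edges are my own assumptions for the sake of the example, not the actual BitGrid design:

    # Very rough sketch of one tick of a BitGrid-style systolic array:
    # every cell maps 4 input bits to 4 output bits through a lookup
    # table, and all outputs are latched simultaneously. Neighbour
    # wiring and wrap-around edges are assumptions, not the real design.
    import random

    W, H = 8, 8
    luts = [[[random.randrange(16) for _ in range(16)]   # 16-entry, 4-bit-out LUT per cell
             for _ in range(W)] for _ in range(H)]
    state = [[0] * W for _ in range(H)]                  # latched 4-bit output of each cell

    def tick(state):
        nxt = [[0] * W for _ in range(H)]
        for y in range(H):
            for x in range(W):
                # Gather one bit from each neighbour's latched output.
                n = (state[(y - 1) % H][x] >> 0) & 1
                e = (state[y][(x + 1) % W] >> 1) & 1
                s = (state[(y + 1) % H][x] >> 2) & 1
                w = (state[y][(x - 1) % W] >> 3) & 1
                nxt[y][x] = luts[y][x][(n << 0) | (e << 1) | (s << 2) | (w << 3)]
        return nxt                                       # every cell updates in lockstep

    state = tick(state)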
Happy to answer any questions you might have, and love to hear feedback good and bad.
> • What can and cannot be computed.
I don’t think it delivered on the “can” part. (And I don’t think we really know that well.)
What would you suggest I do to deliver on this?
Maybe rather:
• That some truths cannot be computed.
> A real-world computer is thus not really a Turing machine, but rather a finite state machine.
This is like saying a CPU isn't a computer. It's sort of right but sort of wrong, you know?