A brief history of liquid computers (2019)

(royalsocietypublishing.org)

35 points | by adrian_mrd 90 days ago

7 comments

  • userbinator 90 days ago
    > The first work on using fluids to implement logic gates can be traced back to the late 1950s/early 1960s

    The GM Hydramatic automatic transmission, introduced in 1939, used a "computer" based on spool valves. Perhaps the designers didn't explicitly think in terms of "logic gates", but the task of deciding which bands and clutches to apply for a given gear is done by a hydraulic circuit that effectively computes boolean expressions. This was the norm until the late '70s/early '80s, when computer-controlled transmissions started appearing.
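
    A toy sketch of that idea in Python (the valve arrangement and the signal names are illustrative, not the actual Hydramatic circuit): treat each pressure line as a boolean, valves in series as AND, parallel passages as OR, and a spool held shut by a pilot pressure as NOT.

        # Hydraulic pressures as booleans: True = pressurized, False = vented.
        def series(a, b):      # flow passes only if both valves are open -> AND
            return a and b

        def parallel(a, b):    # flow passes if either passage is open -> OR
            return a or b

        def spool_blocks(s):   # pilot pressure holds the spool shut -> NOT
            return not s

        # Hypothetical shift rule: apply the second-gear band when governor
        # (road-speed) pressure is present and kickdown pressure is absent.
        def second_gear_band(governor, kickdown):
            return series(governor, spool_blocks(kickdown))

        for governor in (False, True):
            for kickdown in (False, True):
                print(governor, kickdown, "->", second_gear_band(governor, kickdown))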

    • xattt 90 days ago
      Valve bodies always had some loose, subconscious association with ICs for me. This comment makes the link explicit as to why!
      • willis936 90 days ago
        Transistors are electronic valves. Valves are the building blocks for automata and arguably an aspect common to all things that are considered "life".
        • perilunar 89 days ago
          Vacuum tubes (i.e. thermionic vacuum tubes) are 'electronic valves', at least in British English.
        • szvsw 90 days ago
          I always knew HVAC systems were alive! Guess I can finally explain to people why I got into HVAC/building tech in a cool way… thanks.

          In all seriousness though:

          What must a valve come with, necessarily, for it to be a valve? Some form of actuation, because without actuation it is just an aperture or a wall; actuation is what gives it its "valveness." And for there to be actuation, there must be some mechanism driving it: in other words, receptors/sensors. Without a sensor, it's just a flap flailing in the noise, pure chaos. Even a simple one-way valve or diode has this sensory reception physically embodied in its architecture (in a one-way valve, the signal and the modulated flow are one and the same). All of which is just to say that we can think of a valve as the simplest cybernetic system, i.e. sensor + actuator. And conversely, as you suggested, we can probably boil any cybernetic system down, in some abstract sense, into a network of valves: things that receive flows, modulate them (discretely or continuously, analog or digital, soft or wet or hard or dry), and route them according to other signals.

  • alekseiprokopev 90 days ago
    I really recommend reading Andrew Adamatzky's [https://en.wikipedia.org/wiki/Andrew_Adamatzky] work. I studied in the same Master's program as his son, and one day he gave me one of his books, "Dynamics of the Crowd-Minds" [https://www.worldscientific.com/worldscibooks/10.1142/5797#t...]. It changed my life.
  • DonHopkins 89 days ago
    Reservoir Computing and Liquid State Machines are deep stuff! It's not a scam: tide goes in, tide goes out, never a miscommunication. ;)

    Reservoir computing (wikipedia.org), 99 points by gyre007 on Oct 18, 2018 | 20 comments

    https://news.ycombinator.com/item?id=18252958

    https://en.wikipedia.org/wiki/Reservoir_computing

    https://en.wikipedia.org/wiki/Liquid_state_machine

    https://news.ycombinator.com/item?id=29654050

    DonHopkins on Dec 22, 2021, on: Analog computers were the most powerful computers ...

    You should read Stephen Wolfram's "A New Kind of Science"; you'll get a much deeper and wider appreciation for just what a computer is and how Turing completeness can apply to so many situations. Even the simplest systems can be universal computers!

    https://en.wikipedia.org/wiki/A_New_Kind_of_Science

    >Generally, simple programs tend to have a very simple abstract framework. Simple cellular automata, Turing machines, and combinators are examples of such frameworks, while more complex cellular automata do not necessarily qualify as simple programs. It is also possible to invent new frameworks, particularly to capture the operation of natural systems. The remarkable feature of simple programs is that a significant percentage of them are capable of producing great complexity. Simply enumerating all possible variations of almost any class of programs quickly leads one to examples that do unexpected and interesting things. This leads to the question: if the program is so simple, where does the complexity come from? In a sense, there is not enough room in the program's definition to directly encode all the things the program can do. Therefore, simple programs can be seen as a minimal example of emergence. A logical deduction from this phenomenon is that if the details of the program's rules have little direct relationship to its behavior, then it is very difficult to directly engineer a simple program to perform a specific behavior. An alternative approach is to try to engineer a simple overall computational framework, and then do a brute-force search through all of the possible components for the best match.
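
    To get a feel for what "enumerating all possible variations" means here, a minimal elementary cellular automaton in Python (Wolfram's 1D, 2-state, 3-neighbor class; the rule number is just an 8-bit lookup table, and rule 110 in particular is known to be Turing complete):

        import numpy as np

        def step(cells, rule):
            # Each cell's 3-cell neighborhood forms a 3-bit index into the
            # 8-bit rule number, which is the entire program.
            left, right = np.roll(cells, 1), np.roll(cells, -1)
            idx = (left << 2) | (cells << 1) | right
            return (rule >> idx) & 1

        cells = np.zeros(64, dtype=int)
        cells[-1] = 1                          # a single live cell
        for _ in range(20):                    # 20 generations of rule 110
            print("".join(".#"[c] for c in cells))
            cells = step(cells, 110)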

    Even a reservoir of water (or a non-linear mathematical model of one) can be used to piggyback arbitrary computation on the way a liquid naturally behaves.

    Here's a paper about literally using a bucket of water and some legos and sensors to perform pattern recognition with a "Liquid State Machine" (see Figure 1: The Liquid Brain):

    https://www.semanticscholar.org/paper/Pattern-Recognition-in...

    >Pattern Recognition in a Bucket. Chrisantha Fernando, Sampsa Sojakka. Published in ECAL, 14 September 2003.

    >This paper demonstrates that the waves produced on the surface of water can be used as the medium for a “Liquid State Machine” that pre-processes inputs so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Interference between waves allows non-linear parallel computation upon simultaneous sensory inputs. Temporal patterns of stimulation are converted to spatial patterns of water waves upon which a linear discrimination can be made. Whereas Wolfgang Maass’ Liquid State Machine requires fine tuning of the spiking neural network parameters, water has inherent self-organising properties such as strong local interactions, time-dependent spread of activation to distant areas, inherent stability to a wide variety of inputs, and high complexity. Water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. An analogy is made between water molecules and neurons in a recurrent neural network.
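
    The trick, minus the water: a fixed, untrained nonlinear projection can make XOR linearly separable, so only a linear readout needs training. A minimal numpy sketch (random tanh features stand in for the wave dynamics; a static analogy, not the authors' actual temporal setup):

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
        y = np.array([0, 1, 1, 0], dtype=float)                      # XOR targets

        # The fixed "reservoir": a random nonlinear projection, never trained.
        W_in = rng.normal(size=(2, 50))
        b = rng.normal(size=50)
        H = np.tanh(X @ W_in + b)        # "liquid" states for each input

        # Train only the linear readout, in closed form (ridge regression).
        W_out = np.linalg.solve(H.T @ H + 1e-6 * np.eye(50), H.T @ y)

        print((H @ W_out).round(2))      # ~[0, 1, 1, 0]: XOR from a linear readout

    No linear readout of the raw two-bit inputs could do this; the fixed random expansion does the nonlinear work, just as the water does in the paper.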

    This idea can be applied to digital neural networks, using a model of a liquid reservoir as a "black box", and training another neural network layer to interpret its output in response to inputs. Instead of training the water (which is futile, since water will do what it wants: as the apologetics genius Bill O'Reilly proclaims, "Tide goes in, tide goes out, never a miscommunication."), you just train a water interpreter (a linear output layer)!

    https://www.youtube.com/watch?v=NUeybwTMeWo

    Reservoir Computing

    https://en.wikipedia.org/wiki/Reservoir_computing

    >Reservoir computing is a framework for computation derived from recurrent neural network theory that maps input signals into higher dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir.[1] After the input signal is fed into the reservoir, which is treated as a "black box," a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output.[1] The first key benefit of this framework is that training is performed only at the readout stage, as the reservoir dynamics are fixed.[1] The second is that the computational power of naturally available systems, both classical and quantum mechanical, can be used to reduce the effective computational cost.[2]

    >History: The concept of reservoir computing stems from the use of recursive connections within neural networks to create a complex dynamical system.[3] It is a generalisation of earlier neural network architectures such as recurrent neural networks, liquid-state machines and echo-state networks. Reservoir computing also extends to physical systems that are not networks in the classical sense, but rather continuous systems in space and/or time: e.g. a literal "bucket of water" can serve as a reservoir that performs computations on inputs given as perturbations of the surface.[4] The resultant complexity of such recurrent neural networks was found to be useful in solving a variety of problems including language processing and dynamic system modeling.[3] However, training of recurrent neural networks is challenging and computationally expensive.[3] Reservoir computing reduces those training-related challenges by fixing the dynamics of the reservoir and only training the linear output layer.[3]

    >A large variety of nonlinear dynamical systems can serve as a reservoir that performs computations. In recent years semiconductor lasers have attracted considerable interest as computation can be fast and energy efficient compared to electrical components.

    >Recent advances in both AI and quantum information theory have given rise to the concept of quantum neural networks.[5] These hold promise in quantum information processing, which is challenging to classical networks, but can also find application in solving classical problems.[5][6] In 2018, a physical realization of a quantum reservoir computing architecture was demonstrated in the form of nuclear spins within a molecular solid.[6] However, the nuclear spin experiments in [6] did not demonstrate quantum reservoir computing per se as they did not involve processing of sequential data. Rather the data were vector inputs, which makes this more accurately a demonstration of quantum implementation of a random kitchen sink[7] algorithm (also going by the name of extreme learning machines in some communities). In 2019, another possible implementation of quantum reservoir processors was proposed in the form of two-dimensional fermionic lattices.[6] In 2020, realization of reservoir computing on gate-based quantum computers was proposed and demonstrated on cloud-based IBM superconducting near-term quantum computers.[8]

    >Reservoir computers have been used for time-series analysis purposes. In particular, some of their usages involve chaotic time-series prediction,[9][10] separation of chaotic signals,[11] and link inference of networks from their dynamics.[12]

    Liquid State Machine

    https://en.wikipedia.org/wiki/Liquid_state_machine

    >A liquid state machine (LSM) is a type of reservoir computer that uses a spiking neural network. An LSM consists of a large collection of units (called nodes, or neurons). Each node receives time varying input from external sources (the inputs) as well as from other nodes. Nodes are randomly connected to each other. The recurrent nature of the connections turns the time varying input into a spatio-temporal pattern of activations in the network nodes. The spatio-temporal patterns of activation are read out by linear discriminant units.

    Echo State Network

    https://en.wikipedia.org/wiki/Echo_state_network

    >The echo state network (ESN)[1][2] is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector and can be differentiated easily to a linear system.
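
    To make that recipe concrete, a minimal echo state network sketch in numpy (the reservoir size, connectivity, spectral radius, and the toy next-sample prediction task are arbitrary illustrative choices, not from the article or the quoted pages):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200                                  # reservoir size

        # Fixed, sparse, random recurrent weights, rescaled so the spectral
        # radius is below 1 (the usual "echo state" condition).
        W = rng.normal(size=(n, n)) * (rng.random((n, n)) < 0.05)
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))
        W_in = rng.uniform(-0.5, 0.5, size=n)

        def run(u):                              # collect reservoir states
            x, states = np.zeros(n), []
            for u_t in u:
                x = np.tanh(W @ x + W_in * u_t)
                states.append(x)
            return np.array(states)

        u = np.sin(np.arange(600) * 0.1)         # toy task: predict next sample
        H, y = run(u[:-1]), u[1:]

        # Only the readout is trained, by a single linear solve (ridge
        # regression), exactly as the quoted passage says.
        W_out = np.linalg.solve(H.T @ H + 1e-8 * np.eye(n), H.T @ y)
        print(np.abs(H @ W_out - y)[100:].mean())  # small error after washout

    Swapping the tanh reservoir for a bucket of water (or a laser, or nuclear spins) only changes how H is produced; the trained part stays a linear readout.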

  • bloopernova 90 days ago
    I quickly skimmed the paper but didn't see this economy-modeling computer:

    https://en.wikipedia.org/wiki/Phillips_Machine?wprov=sfla1

    • rnewme 90 days ago
      Second picture in the paper :)
      • bloopernova 90 days ago
        D'oh! Now I see it.

        Sorry about that!

  • zzbn00 90 days ago
    On a quick scan, it does not seem to mention mercury delay lines, which were a liquid memory technology commonly used in the early days of electronic computing: https://en.wikipedia.org/wiki/Delay-line_memory#Mercury_dela... .

    Photo of the system in Cambridge, UK, with Maurice Wilkes next to it:

    https://www.computerhistory.org/revolution/memory-storage/8/...

    • avmich 90 days ago
      Not sure if mercury delay lines qualify as liquid computers here. Do they work without electricity involved?
      • adrian_b 90 days ago
        I do not think that they qualify, because their function as delay lines did not depend in any way on the fact that they happened to use a liquid medium.

        Delay lines using sound or elastic waves can be made with any propagation medium: solid, liquid, or gaseous. The only essential property is a low wave velocity. It may also be useful for the propagation medium to be piezoelectric or magnetostrictive, because that simplifies the transducers to and from electric signals.

        While mercury delay lines were important for a small number of the first electronic computers, some other early computers used delay lines made of magnetostrictive metal wires.

        Delay lines can also be used as analog memories, not only as digital memories. As analog memories, they remained in use for many decades after being abandoned in computers, e.g. in color TV sets and in radars. All later delay lines used solid glass or crystals, not liquids.
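
        The storage principle is the same regardless of the medium: bits take a fixed time to transit the line, and are re-amplified and fed back in at the far end. A toy model in Python (the capacity of 8 bits is illustrative; real mercury tanks held on the order of hundreds of bits):

            from collections import deque

            class DelayLineMemory:
                """A fixed transit delay plus a recirculation loop."""
                def __init__(self, bits_in_flight):
                    self.line = deque([0] * bits_in_flight)  # the medium

                def tick(self, write=None):
                    out = self.line.popleft()       # bit arrives at the pickup
                    # Regenerate and re-inject it, unless we overwrite this slot.
                    self.line.append(out if write is None else write)
                    return out

            mem = DelayLineMemory(bits_in_flight=8)
            for bit in (1, 0, 1, 1, 0, 0, 1, 0):    # store one 8-bit word
                mem.tick(write=bit)
            print([mem.tick() for _ in range(8)])   # one circulation later: the word is back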

  • kmoser 90 days ago
    > A liquid is an incompressible fluid.

    But can't most fluids be compressed, even if just a bit?

    • adrian_b 90 days ago
      Anything can be compressed.

      Nevertheless, the difference in compressibility between liquids and gases is huge.

      Gases can be compressed to a small fraction of their original volume with only modest pressures, while applying enormous pressures to a liquid reduces its volume only slightly.

      The compressibility of gases can never be neglected, while in many practical applications of liquids it is possible to approximate them as ideal liquids, which are incompressible.

      The incompressible-liquid approximation is like the approximations of inextensible wires, inextensible membranes, or rigid bodies. Even though such ideal objects do not exist, these models simplify the solution of problems, and in many practical cases the results are close enough to those that would account for the deformations or volume changes of the bodies.
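
      To put rough numbers on this (using the linearized relation ΔV/V ≈ Δp/K; K ≈ 2.2 GPa is the bulk modulus of water, and for an ideal gas the isothermal bulk modulus equals its pressure, both figures approximate):

          K_water = 2.2e9    # bulk modulus of water, Pa (approximate)
          K_air = 1.0e5      # isothermal bulk modulus of air at 1 atm ~ its pressure, Pa

          dp = 1.0e7         # apply 10 MPa (about 100 atmospheres) to each
          print(dp / K_water)  # ~0.005: water shrinks by about half a percent
          print(dp / K_air)    # ~100: the linear formula fails entirely; the gas
                               # would be squeezed to a tiny fraction of its volume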

    • samatman 90 days ago
      Yes.

      We refer to those as gases.

  • Harmohit 90 days ago
    If we define a computer in very broad terms, as a system used to emulate/simulate another system, could we call a wind tunnel a computer? It is a system used to infer what would happen high up in the atmosphere or on the race track. Taking it a step further, do animals used for drug testing count as computers? They are used to infer potential adverse effects in the human body.

    Although quite specialized, I think these things would still qualify as computers.

    • GTP 90 days ago
      I think it would make more sense to limit the analysis to technologies that let you build a Turing-complete machine, but indeed sometimes you find people counting your examples as computers, because they are computing a specific function.