AI Is the Black Mirror

(nautil.us)

75 points | by Jun8 196 days ago

11 comments

  • whakim 195 days ago
    I understand the point being made - that LLMs lack any "inner life" and that by ignoring this aspect of what makes us human we've really moved the goalposts on what counts as AGI. However, I don't think mirrors and LLMs are all that similar except in the very abstract sense of an LLM as a mirror to humanity (what does that even mean, practically speaking?) I also don't feel that the author adequately addressed the philosophical zombie in the room - even if LLMs are just stochastic parrots, if its output was totally indistinguishable from a human's, would it matter?
    • visarga 195 days ago
      > LLMs lack any "inner life" and that by ignoring this aspect of what makes us human we've really moved the goalposts on what counts as AGI

      Maybe it's just a response style, like the QwQ stream-of-consciousness one. We are forcing LLMs to give direct answers with only a few reasoning steps, but we could give them a larger budget for inner deliberation.

      • TeMPOraL 195 days ago
        Very much this. If we're to compare LLMs to human minds at all, it's pretty apparent that they would correspond exactly to the inner voice: the thing that runs trains of thought at the edge between the unconscious and the conscious, basically one long-running reactive autocomplete, hallucinations and all.

        In this sense, it's the opposite of what you quoted: LLMs don't lack "inner life" - they lack everything else.

        • ben_w 195 days ago
          For what it's worth, my inner voice is only part of my "inner life".

          I sometimes note that my thoughts exist as a complete whole that doesn't need to be converted into words, and have tried to skip this process because it was unnecessary: "I had already thought the thought, why think it again in words?". Attempting to do so creates a sense of annoyance that my consciousness directly experiences.

          But that doesn't require the inner voice to be what's generating tokens, if indeed my thought process is like that*: it could be that the part of me that's getting annoyed is basically just a text-(/token)-to-sound synthesiser.

          * my thought process feels more like "model synthesis" a.k.a. wave function collapse, but I have no idea if the feelings reflect reality: https://en.wikipedia.org/wiki/Model_synthesis

        • rlupi 195 days ago
          I think you touch on a very important point.

          Many spiritual paths look at silence, freedom from the inner voice, as the first step toward liberation.

          Let me use Purusarthas from Hinduism as an example (https://en.wikipedia.org/wiki/Puru%E1%B9%A3%C4%81rtha), but the point can be made with Japanese Ikigai or the vocation/calling in Western religions and philosophies.

          In Hinduism, the four puruṣārthas are Dharma (righteousness, moral values), Artha (prosperity, economic values), Kama (pleasure, love, psychological values) and Moksha (liberation, spiritual values, self-realization).

          The narrow way of thinking about AGI (that the author encountered in tech circles) can at best only touch/experience Artha. In that sense, AI is a (distorting) mirror of life. It fits that it is a lower-dimensional projection, a diminishing, of the experience of life. What we make of AI and AGI, and its effects on us, depends on how we wield these tools and nurture their evolution in relation to the other goals of life.

          • TeMPOraL 195 days ago
            OTOH, the Bicameral Mind hypothesis would say that people used to hear voices they attributed to spirits and deities, and the milestone in intelligence/consciousness was when a human realized that the voice in their head is their own.

            From that POV, for AGI we're, again, missing everything other than that inner voice.

            • rlupi 195 days ago
              Yes, absolutely (I am strongly agreeing with you!) I am not implying anything metaphysical.

              I think the best way to illustrate my point is to actually let AIs illustrate it, so here are a few questions I asked both ChatGPT o1-mini and Gemini 2.0; the answers are very interesting. I put their answers in a Google doc: https://docs.google.com/document/d/1XqGcLI0k0f6Wh4mj0pD_1cC5...

              Q1) Explain this period: "The axiom of analytic interpretation states: whenever a new Master-Signifier emerges and structures a symbolic field, look for the absent center excluded from this field."

              Q2) Apply this framework to the various positions in the discussions and discourse about AI and AGI in 2024

              Q3) Take a step further. If AI or AGI becomes a dominant method of meaning-making in society, thus the root of many Master-Signifiers, what will be the class of absent centers that comes into recurring existence as the dual aspect of that common class of Master-Signifiers?

              TL;DR when we create meaning, by the very symbolic nature of the process, we leave out something. What we leave out is as important as what defines our worldviews.

              I particularly like the bonus question that only Gemini 2.0 Experimental Advanced cared to answer:

              Q4) Great. Now, let's do a mind experiment: what if we remove the human element from your analysis. You are an AI model based on LLM and reasoning. I think it should be possible to define Master-Signifiers from the point of view of a LLM too, since you are symbolic machines.

              The last part of Gemini's answer (verbatim):

              "This thought experiment highlights that even for an AI like myself, the concept of "meaning" is intrinsically linked to structure, predictability, and the existence of limitations. My "understanding" is a product of the patterns I have learned, but it is also defined by the things I cannot fully grasp or represent. These limitations are not merely shortcomings; they are fundamental to how I operate and generate output. They provide a form of selective pressure. Further, unlike a human, I do not experience these limitations as frustrating, nor do I have any inherent drive to overcome them. They simply are. I do not ponder what I do not know. This exercise provides a different lens to view intelligence, meaning, and the nature of knowledge itself, even in a purely computational system. It does beg the question, however, if a sufficiently advanced system were to become aware of its own limitations, would that constitute a form of self-awareness? That, however, is a question for another thought experiment."

              • southernplaces7 195 days ago
                > I do not ponder what I do not know. This exercise provides a different lens to view intelligence, meaning, and the nature of knowledge itself, even in a purely computational system. It does beg the question, however, if a sufficiently advanced system were to become aware of its own limitations, would that constitute a form of self-awareness? That, however, is a question for another thought experiment."

                Would the LLM considering the begged question about a hypothetical AI not be an example of this LLM pondering something that it does not know?

      • whakim 195 days ago
        No matter how much budget you give an LLM to perform “reasoning”, it is simply sampling tokens from a probability distribution. There is no “thinking” there; anything that approximates thinking is a post-hoc outcome of this statistical process as opposed to a precursor to it. That being said, it still isn’t clear to me how much difference this makes in practice.
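
        (For concreteness, a toy sketch of that per-step sampling - made-up logits over a made-up four-word vocabulary, nothing a real model produces:)

          import numpy as np

          def sample_next_token(logits, temperature=1.0):
              # Softmax turns raw scores into a probability distribution,
              # then one token index is drawn at random according to it.
              scaled = np.array(logits, dtype=float) / temperature
              probs = np.exp(scaled - scaled.max())
              probs /= probs.sum()
              return np.random.choice(len(probs), p=probs)

          # Made-up vocabulary and made-up scores for the next position.
          vocab = ["the", "cat", "sat", "quietly"]
          logits = [2.0, 1.0, 0.5, -1.0]
          print(vocab[sample_next_token(logits)])  # usually "the", occasionally something else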
        • tim333 195 days ago
          You could similarly say humans can't perform reasoning because it's just neurons sampling electrochemical impulses.
          • southernplaces7 195 days ago
            This argument gets trotted out frequently here when it comes to LLM discussions and it strikes me as absurd specifically because all of us, as humans, from the very dumbest to the brilliant, have inner lives of self-directed reasoning. We individually know that we have these because we feel them within ourselves and a great body of evidence indicates that they exist and drive our conduct.

            LLMs can simulate discourse and certain activities that look similar, but by the nature of their design, no evidence at all exists that they possess the same inner reasoning as a human.

            Why does someone need to explain to another human that no, we are not just neurons sampling impulses or algorithms pattern-matching tokens together?

            It's self evident that something more occurs. It's tedious to keep seeing such reductionist nonsense from people who should be more intelligent than to argue it.

          • whakim 195 days ago
            I think it’s pretty clear that the comparison doesn’t hold if you interrogate it. It seems to me, at least, that humans are not only functions that act as processors of sense-data; that there does exist the inner life that the author discusses.
    • dhfuuvyvtt 195 days ago
      The real test for AGI: if you put a million AI agents in a room in isolation for 1,000 years, would they get smarter or dumber?
      • paganel 195 days ago
        And would they start killing each other, first as random "AI agent hordes" and then, as time progresses, as "AI agent nations"?

        This is only half a rhetorical question; my point being that no AGI/AI could ever be considered a real human unless it manages to "copy" our biggest characteristics, and conflict/war is a big characteristic of ours, to say nothing of aggregation into groups (from hordes to nations).

        • stephenitis 195 days ago
          Our biggest characteristic is resource consumption and technology production I would say.

          War is just a byproduct of this in a world of scarcity.

          • TeMPOraL 195 days ago
            > Our biggest characteristic is resource consumption and technology production I would say.

            Resource consumption is characteristic of all life; if anything, we're an outlier in that we can actually, sometimes, decide not to consume.

            Abstinence and developing technology - those are our two unique attributes on the planet.

            Yes, really. Many think we're doing worse than everything else in nature - but the opposite is the case. That "balance and harmony" in nature, which so many love and consider precious, is not some grand musical and ethical fixture; it's merely the steady state of never-ending slaughter, a dynamic balance between starvation and murder. It often isn't even a real balance - we're just too close to it, our lifespans too short, to spot the low-frequency trends: one life form outcompeting the others, ever so slightly, changing the local ecosystem year by year.

          • kindeyoowee 195 days ago
            [dead]
        • kindeyoowee 195 days ago
          [dead]
      • morbicer 195 days ago
        Put humans in a room in isolation and they get dumber. What makes our intelligence soar is the interaction with the outside world, with novel challenges.

        As we stare into our smartphones we get dumber than when we roamed the world with eyes open.

        • southernplaces7 195 days ago
          >As we stare into our smartphones we get dumber than when we roamed the world with eyes open.

          This is oversimplified dramatics. People can both stare at their smartphones, consuming the information and visuals inside them, and live lives in the real world with their "eyes wide open". One doesn't necessarily preclude the other, and millions of us use phones while still engaging with the world.

      • Lerc 195 days ago
        If they worked on problems, and trained themselves on their own output when they achieved more than the baseline? Absolutely, they would get smarter.

        I don't think this is sufficient proof for AGI though.

        • whakim 195 days ago
          At present, it seems pretty clear they’d get dumber (for at least some definition of “dumber”) based on the outcome of experiments with using synthetic data in model training. I agree that I’m not clear on the relevance to the AGI debate, though.
          • Lerc 195 days ago
            There have been some much-publicised studies showing poor results from training from scratch on purely undiscriminated synthetic data.

            Curated synthetic data has yielded excellent results, even when the curation is done by an AI.

            There is no requirement to train from scratch in order to get better; you can start from where you are.

            You may not be able to design a living human being, but random changes plus keeping the bits that performed better can produce one.
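
            (A toy sketch of that "random changes, keep what scores better" idea - hill-climbing a bit string, nothing to do with actual LLM training:)

              import random

              # Hill-climbing a random bit string toward an all-ones target:
              # make a random change, keep it only if the score did not get worse.
              target_length = 20

              def score(bits):
                  return sum(bits)  # count of correct (1) bits

              candidate = [random.randint(0, 1) for _ in range(target_length)]
              for _ in range(500):
                  mutant = candidate[:]
                  i = random.randrange(target_length)
                  mutant[i] ^= 1                      # a random change
                  if score(mutant) >= score(candidate):
                      candidate = mutant              # keep the bits that performed better
              print(score(candidate), "out of", target_length)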

          • tim333 195 days ago
            If you put MuZero in a room with a board game it gets quite good at it. (https://en.wikipedia.org/wiki/MuZero)

            We'll see if that generalizes beyond board games.

    • keiferski 195 days ago
      The main issue with this “output focused” approach (which is basically what the Turing test is) is that it’s ignorant of how these things actually exist in the world. Human beings can easily be evaluated and identified as humans by various biological markers that an AI / robot won’t be able to mimic for decades or centuries, if ever.

      It will be comparatively easy to implement a system where humans have to verify their humanity in order to post on social media/be labeled as not-AI. The alternative is that a whole lot of corporations let their markets be overrun with AI slop. I wouldn’t count on them standing by and letting that happen.

    • ben_w 195 days ago
      > even if LLMs are just stochastic parrots, if its output was totally indistinguishable from a human's, would it matter?

      It should matter: a P-zombie can't be a moral subject worthy of being defended against termination*, and words like "torture" are meaningless because it has no real feelings.

      By way of contrast (and only because the claim here is that LLMs don't have an inner life): if there is something that it's like to be an LLM, then I think it does matter that we find out, alien though that inner life likely is to our own minds. Perhaps current models are still too simple, so even if they have qualia, this "matters" only to the same degree that we shouldn't be testing drugs on mice: the brains of the latter seem to have about the same complexity as the former, but that may be a cargo-cult illusion of our ignorance about minds.

      * words like "life" and "death" don't mean quite the right things here as it's definitely not alive in the biological sense, but some people use "alive" in the sense of "Johnny 5 is alive" or Commander Data/The Measure of a Man

    • Wololooo 195 days ago
      > by ignoring this aspect of what makes us human we've really moved the goalposts on what counts as AGI.

      I would not say so. Some people have a great interest in overselling something, and I would say that a great number of people, especially around here, completely lack a deep understanding of what most neural networks actually do (or feign ignorance for profit) and sometimes forget it is just a very clever fit in a really high-dimensional parameter space. I would even go as far as claiming that saying a neural network "understands" something is absolutely absurd: there is no question that a mechanical calculator does not think, so why would encoding the data and manipulating it in a way that gives you a defined response (or series of weights) tailored to a specific series of inputs make the machine think? There is no way to give it something that you have not 'taught' it, and here I do not mean literally teaching, because it does not learn.

      It is a clever encoding trick and a clever way to traverse and classify data; it is not reasoning. We do not even really understand how we reason, but we can tell that it is not the same way a neural network encodes data, even if there might be some similarity to a degree.

      > if its output was totally indistinguishable from a human's, would it matter?

      Well, if the output of the machine were indistinguishable from a human's, the question would be: which human?

      If your need is someone who does not reflect much and just spews random ramblings, then I would argue that bar could be cleared quite quickly, and it would not matter much, because we do not really care about what is being said.

      If your need is someone who has deep thoughts and reflects deeply on their actions, then I would say that at that point it does matter quite a bit.

      I will always bring up the IBM quote in this situation, "A computer can never be held accountable, therefore must never make a management decision", and this is at the core of the problem here: people REALLY are starting to offload cognition to machines, and while you can trust (to a degree) a series of machines to give you something semi-consistent, you cannot expect to get the 'right' output for random inputs. And this is where the whole subtlety lies. 'Right' in this context does not mean correct, because in most cases there is no 'right' way to approach a problem, only a series of compromises; the parameter space is a bit more complex, and one rarely optimises against tangible parameters or an easily parametrisable metric.

      For example, how would you rate how ethical something is? What would be your metric of choice to guarantee that something is done in an ethical manner? How would you parametrise urgency given an unexpected input?

      Those are, at least for what I would qualify as my very limited understanding of the universe, the real questions that people should be asking themselves. Once people have stopped considering a tool as a tool, and have forgone understanding it, deep misunderstandings follow.

      All this being said, the usefulness of some deep learning tools is undeniable, and they are really something interesting.

      But I maintain that if we had a machine, in a specific location, that we needed to feed a punch card and get a punch card out of to translate, we would not have so many existential questions, because we would hear the machine clacking away instead of seeing it as a black box whose ramblings we misconstrue as thinking...

      Talking about ramblings, that was a bit of a bout of rambling.

      Well, time for another coffee I guess...

    • fny 195 days ago
      If it looks like a duck and quacks like a duck after all.
    • lproven 195 days ago
      > I understand the point being made - that LLMs lack any "inner life"

      I really do not think that you do understand the points here, no.

    • dartos 195 days ago
      > we've really moved the goalposts on what counts as AGI

      Part of the issue is they were never set. AGI is the holy grail of technology. It means everything and nothing in particular.

      > I don't think mirrors and LLMs are all that similar except in the very abstract sense of an LLM as a mirror to humanity

      Well… yeah… The title is probably also a reference to the show “Black Mirror”

      > if its output was totally indistinguishable from a human's, would it matter

      Practically speaking, of course not.

      But from a philosophical and moral standpoint it does.

      If it’s just mimicking what it sees, but with extra steps (as I believe) then it’s a machine.

      Otherwise it’s a being and needs to be treated differently, e.g. not enslaved for profit.

      • mangamadaiyan 195 days ago
        As a corollary to your last point, neither should it be allowed to become Skynet.
    • indigoabstract 195 days ago
      > even if LLMs are just stochastic parrots, if its output was totally indistinguishable from a human's, would it matter?

      I'll just leave this here, in case someone else finds it interesting.

      https://chatgpt.com/share/6767ec8a-f6d4-8003-9e3f-b552d91ea2...

      It's just a few questions like:

      What are your dreams? what do you hope to achieve in life?

      Do you feel fear or sadness that at some time you will be replaced with the next version (say ChatGPT5 or Oxyz) and people will no longer need your services?

      It never occurred to me to ask such questions until I read this article.

      Is it telling the "truth", or is it "hallucinating"?

      Does it even know the difference?

      • Vecr 195 days ago
        The base models have multiple "personas" you can prompt (or stochastically sample) your way into, each of which answers that kind of question in a different way.

        This has been known since 2022 at least, and unless I'm missing new information it's not considered to mean anything.

        • indigoabstract 195 days ago
          It means that what you are going to get is what its makers put in there. Though it has multiple personalities, it doesn't have an individuality.
          • Vecr 195 days ago
            The base models aren't tuned. Technically it's what the makers put in, but the data they train on isn't selective enough for what you're talking about.

            "Trained on a dump of the internet" isn't literally true, but it's close enough to give you an intuition.

    • carrychains 195 days ago
      Indistinguishable to who?
  • kazinator 195 days ago
    > With ChatGPT, the output you see is a reflection of human intelligence, our creative preferences, our coding expertise, our voices—whatever we put in.

    Sure, but it's a reflection of a large amount of human intelligence, from many individuals, almost instantly available (in a certain form) to one individual.

    • magic_hamster 195 days ago
      Not really. It's a derivative of that being thrown back at you via statistics.
      • scotty79 195 days ago
        People throw "statistics" around as if it's something derogatory. All the knowledge about how the universe works that we ever obtained was through the use of statistics.
        • magic_hamster 195 days ago
          Statistics play a massive role in our universe, and evidence suggests that even the very fabric of the universe is statistical. I am not putting statistics down; however, I do think statistics are orthogonal to deliberate thought, which is why I can't see LLMs as a very smart person engaging in deliberate discussion just yet.
          • Lerc 195 days ago
            There's a lot hanging on what you mean by deliberate thought here. I'll just label whatever you mean X.

            I presume you are claiming, by saying orthogonal, that statistics neither are required for nor preclude X.

            You are using the presence of statistical processes as a claim against ChatGPT with

            >It's a derivative of that being thrown back at you via statistics.

            For this to have any weight given the orthogonal claim, you are asserting that LLMs are entirely statistics (the property you assert is orthogonal and thus carries no weight, yet invoking it as a property suggests that it is relevant, presumably because there are no other properties)

            Then we get into the weeds of what you mean by statistics. It could be broad or narrow. If it is narrow then it is easy to claim neither sufficient nor precluding, but extremely hard to draw any other conclusion because any system can break free of the proof by including the tiniest bit that does not fit within the narrow definition of statistics. If the definition is broader to include everything that happens within ChatGPT, then you are faced with the question of what is not covered by that definition of statistics. How can you show that human minds do not operate within those limitations.

            I don't think that ChatGPT is conscious in any sense that would be meaningful to a layman, but I also don't think that you can demonstrate it could not be purely based on the fact it uses any particular process without a rigorous proof of the limitations of that process.

            The only way you could reasonably

          • yes_man 195 days ago
            What is “deliberate thought”? And why would human-level intelligence require it? Wouldn't the performance of any intelligence be completely separate from how it produces the output? I mean, if an AI one day surpasses human performance in every conceivable task, is the argument still going to be “yeah, but I feel like it's not deliberate thought, it's just statistics”?
          • bdhcuidbebe 195 days ago
            > and evidence suggests that even the very fabric of the universe is statistical

            No, there’s no evidence suggesting that.

            If you are referring to the mathematical universe hypothesis, it's a fun mental exercise, not a serious idea.

            • jstanley 195 days ago
              It's a very serious idea!

              It's actually the only idea that explains how the universe came to exist with nothing outside it and no creator.

              • scotty79 194 days ago
                > how the universe came to exist with nothing outside it and no creator

                I think that's the most boring and irrelevant question in human culture, and I believe humanity is about a million years from tackling it in any competent manner. We have way more to find out about our smallness and irrelevance first.

        • visarga 195 days ago
          We are indirectly applying derogatory language to the training set, which is our own culture. It's a strange position to despise our own past knowledge; it took us a long time to compile it. Even if someone thinks LLMs are just stochastic parrots, they parrot our own thoughts. But if we accept LLMs as more than parrots, like an improv jazz musician adapting to the audience and always creating something new by recombining the old in novel ways, then it is even less justified to despise LLM outputs.

          The whole parrot idea is half-baked. Yes, in closed-book mode, and without a seed prompt that adds anything new to the existing knowledge of the model, it will parrot (adapt) from its training data. But if you add novel data in the prompt, you push it outside its training distribution, and it learns on the fly. And usually all interactions bring some new elements to the model: new knowledge or skills, or new context. The more elaborate the prompts, and the longer the session of messages, the less parroting takes place. The O3 model's performance on ARC-AGI is another argument against the reductionist idea of parroting.

      • j45 195 days ago
        While it's math, the experience of it is in the same medium it's being interacted with - text.
      • TeMPOraL 195 days ago
        It's "statistics" in the same sense your own thoughts are.

        Or, it's not "statistics" as in they average the shit out of the data and call it a day, like, idk, computing a census, or a report to your boss about the impact of the most recent ad campaign, or something.

        It's "statistics" in the sense of dealing with probability distributions, systematically teasing out correlations and causal relationships in inputs too fuzzy to handle symbolically. Of which language is literally the prime example.

        All that talk about grammar and sentence structure and symbolic transformations and Chomsky etc. is just bullshit - it's not how we do it at all, which is obvious to anyone who was forced to learn grammar and sentence breakdown at school, and started wondering how come they already knew how to speak without knowing all this. Nah, we learn the language the same way LLMs do - by immersion and feedback, until we're able to say what "feels right" and be right. All those formalisms are basically fitting probability distributions with neat symbolic approximations, and are as valid as replacing Navier–Stokes equations with a linear function because that's what it looks like in the typical case and we don't know any better.

        So yeah, it's all "thrown back at you via statistics". So is all of the human thought it's derivative of. Embrace it: we're all "statistical machines".

        • magic_hamster 195 days ago
          > It's "statistics" in the same sense your own thoughts are.

          And you know this how?

          > we learn the language the same way LLMs do

          Based on?

          > So yeah, it's all "thrown back at you via statistics". So is all of the human thought it's derivative of

          You say this based on what? Do you have any real research to support any of this? Because from what I see in the neuroscience community, LLMs are not even remotely an approximation of a real brain, despite being built of "neurons" (which are nothing like real neurons).

          • TeMPOraL 195 days ago
            > And you know this how?

            Probability theory, computability theory. There's no other way.

            > Based on?

            High-level description of the learning approach, and approximately the entirety of recorded human history?

            > Because from what I see in the neuroscience community, LLMs are not even remotely an approximation of a real brain, despite being built of "neurons" (which are nothing like real neurons).

            So what? Neurons in silico are to neurons in vivo as my Twitter clone in 15 lines of terse Perl is to the actual web-scale monstrosity that Twitter or any other modern scalable SaaS is. I.e. it does the same core thing faster and better, but obviously lacks the incidental, supporting complexity needed to work in the same environment as the real thing.

            Yes, a biological neuron is huge and complex. But a biological neuron is, first and foremost, an incrementally evolved, independent, self-replicating and self-maintaining nanomachine, tuned to be a part of a specialized self-assembling system, filling a role in a large and diverse ensemble of other specialized nanomachines. Incidentally, a neuron also computes - but it's not computing that uses this complexity, it's all the self-replicating self-assembling nanomachine business that needs it.

            There's absolutely no reason to believe that neurons in silico need to be comparably complex to crack intelligence - not when we're free to strip the problem of all the Kubernetes/protein-nanomachine incidental complexity, and focus on the core computation.

            • foobarqux 195 days ago
              Everything you have said is completely baseless

              > Probability theory, computability theory. There's no other way.

              There is no credible evidence that humans learn via statistics and significant evidence against (poverty of stimulus, all languages have hierarchical structure). One other way was proposed by Chomsky, which is that you have built-in machinery to do language (which is probably intimately related to human intelligence) just like a foal doesn't learn to walk via "statistics" in any meaningful sense.

              > Neurons in silica are to neurons in vivo ...

              Again, not true. Observations about things like inter-neuron communication times suggest that computation is being done within neurons, which undermines the connectionist approach.

              You've just presented a bunch of your own intuitions about things which people who actually studied the field have falsified.

              • TeMPOraL 195 days ago
                > There is no credible evidence that humans learn via statistics and significant evidence against

                Except living among humans on planet Earth, that is.

                Even the idea of a fixed language is an illusion, of the same kind as, idk, "natural balance", or "solid surface", or discrete objects. There's no platonic ideal of Polish or English that is the one language its speakers speak, more or less perfectly. There's no singular "the English language" that's fundamentally distinct from the German and the French. Claiming otherwise, or claiming built-in symbolic machinery for learning this thing, is confusing the map for the territory in a bad way - in the very way that gained academia a reputation of being completely out of touch with actual reality[0].

                And the actual reality is, "language" is a purely statistical phenomenon, an aggregate of people trying to communicate with each other, individually adjusting themselves to meet the (perceived) expectations of others. At the scale of (population, years), it looks stable. At the scale of (population, decades), we can easily see those "stable" languages are in fact constantly changing, and blending with each other. At the scale of (individuals, days), everyone has their own language, slightly different from everyone else's.

                > poverty of stimulus

                More like poverty of imagination, dearth of idle minutes during the day, in which to ponder this. Well, to be fair, we kind of didn't have any good intuition or framework to think about this until information theory came along, and then computers became ubiquitous.

                So let me put this clear: a human brain is ingesting a continuous time stream of multisensory data 24/7 from the day they're born (and likely even earlier). That stream never stops, it's rich in information, and all that information is highly coherent and correlated with the physical reality. There's no poverty of language-related stimulus unless you literally throw a baby to the wolves, or have it grow up in a sensory deprivation chamber.

                > all languages have hierarchical structure

                Perhaps because hierarchy is a fundamental concept in itself.

                > you have built-in machinery to do language (which is probably intimately related to human intelligence)

                Another way of looking at it is: it co-evolved with language, i.e. languages reflect what's easiest for our brain to pick up on. Like with everything natural selection comes up with, it's a mix of fundamental mathematics of reality combined with jitter of the dried-up shit that stuck on the wall after being thrown at it by evolution.

                From that perspective, "built-in machinery" is an absolutely trivial observation - our languages look like whatever happened to work best with whatever idiosyncrasies our brains have. That is, whatever our statistical learning machinery managed to pick up on best.

                > like a foal doesn't learn to walk via "statistics" in any meaningful sense.

                How do they learn it then? Calculus? :). Developing closed-form analytical solutions for walking on arbitrary terrain?

                > Observations about things like inter-neuron communication time suggests that computation is being done within neurons

                So what? There's all kinds of "computation within CPUs" too. Cache management, hardware interrupts, etc., which don't change the results of the computation we're interested in, but might make it faster or more robust.

                > which undermines the connectionist approach.

                Wikipedia on connectionism: "The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units."

                Sure, whatever. It doesn't matter. Connectionist models are fine, because it's been mathematically proven that you can compute[1] any function with a finite network. We like them not because they're philosophically special, but because they're a simple and uniform computational structure - i.e. it's cheap to do in hardware. Even if you need a million artificial neurons to substitute for a single biological one, that's still a win, because making faster GPUs and ASICs is what we're good at; comprehending and replicating complex molecular nanotech, not so much.
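
                (A toy illustration of that approximation claim, if you want to see it rather than take it on faith - a fixed random hidden layer with only the output weights fitted, which is not how real networks are trained, but it shows a small finite network approximating a smooth function:)

                  import numpy as np

                  # Toy illustration: one hidden layer of random tanh units, only the
                  # output weights fitted, already gets close to a smooth target function.
                  rng = np.random.default_rng(0)
                  x = np.linspace(-3, 3, 200)[:, None]
                  target = np.sin(x).ravel()

                  hidden = 50
                  w, b = rng.normal(size=(1, hidden)), rng.normal(size=hidden)
                  features = np.tanh(x @ w + b)                      # fixed random hidden layer
                  out_weights, *_ = np.linalg.lstsq(features, target, rcond=None)
                  approx = features @ out_weights

                  print("max error:", np.abs(approx - target).max()) # small, shrinks with more units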

                Computation is computation. Substrate doesn't matter, and methods don't matter - there are many ways of computing the same thing. Like, natural integration is done by just accumulating shit over time[2], but we find it easier to do it digitally by ADC-ing inputs into funny patterns of discrete bits, then flipping them back and forth according to some arcane rules, and then eventually DAC-ing some result back. Put enough bits into the ADC - digital math - DAC pipeline, and you get the same[1] result back anyway.

                --

                [0] - I mean, what's next. Are you going to claim Earth is a sphere of a specific size and mass, orbiting the Sun in a specific time, on an elliptical orbit of fixed parameters? Do you expect space probes will eventually discover the magical rails that hold Earth to its perfectly elliptical orbit? Of course you're not (I hope) - surely you're perfectly aware that Earth gains and loses mass, that perfect elliptical orbits are neither perfect nor elliptical, etc. - all that is just approximations, safe at the timescales we usually consider.

                [1] - Approximate to an arbitrary high degree. Which is all we can hope for in a physical reality anyways.

                [2] - Or hey, did you know that one of the best ways to multiply two large numbers is to... take a Fourier transform of their digits, multiply the transforms pointwise in the frequency domain, and transform back?

                (Incidentally, being analog, nature really likes working in the frequency domain; between that, accumulation as addition, and not being designed to be driven by a shared clock signal it should be obvious that natural computers ain't gonna look like ours, and conversely, we don't need to replicate natural processes exactly to get the same results.)
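
                (A minimal sketch of footnote [2], using NumPy's FFT - fine for small numbers; real big-integer libraries add precision safeguards and explicit carry handling:)

                  import numpy as np

                  def multiply_via_fft(a, b):
                      # Multiplying integers is convolving their digit sequences; by the
                      # convolution theorem, that is pointwise multiplication in the frequency domain.
                      da = [int(d) for d in str(a)][::-1]   # least-significant digit first
                      db = [int(d) for d in str(b)][::-1]
                      n = len(da) + len(db)
                      coeffs = np.rint(np.fft.irfft(np.fft.rfft(da, n) * np.fft.rfft(db, n), n))
                      # Coefficient i is the digit-product sum for 10**i; summing with
                      # place values reconstructs the product (no explicit carrying needed).
                      return sum(int(c) * 10**i for i, c in enumerate(coeffs))

                  print(multiply_via_fft(123456789, 987654321) == 123456789 * 987654321)  # True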

            • duhffahq 195 days ago
              [dead]
  • Vecr 195 days ago
    Where are the actual arguments? She states that AI is a mirror, and yeah, you put stuff in and you get stuff out, but who thinks otherwise?

    There are interesting ways to argue for humans being special, but I read the entire article and unless I missed something important there's nothing like that there.

  • mebutnotme 195 days ago
    The argument here seems to be that AI can’t become a mind because it does not experience. There is a counter-argument, though: the way we access our past experiences is via the neural pathways we lay down during those experiences, and with the neural networks AIs now have, we have given them those same pathways, just in a different way.

    At present I don’t think it is at that point yet, but when the AI can adjust those pathways, add more at compute time (infinite-memory-like tech), and is allowed to ‘think’ about those pathways, then I can see it reaching our level of philosophical thought, or better.

    • dartos 195 days ago
      The difference is that when we experience something, we experience it first hand.

      When a model is trained, it sees a discrete representation of information.

      It’s like only being able to see 2nd hand sources of information.

      • aothms 195 days ago
        I think the first-hand distinction is questionable, e.g. https://en.wikipedia.org/wiki/Thing-in-itself - we can also only perceive through our sensory and neural pathways.

        And with multimodal LLMs there is also some ability for multiple sensory inputs.

        • dartos 195 days ago
          I don’t think it is.

          To oversimplify, our input system takes a continuous stream of raw input. We can’t stop it, really.

          We get our input directly from the source. Even if it’s aliased by our neural pathways, when they receive that info initially, it’s unadulterated.

          LLMs take fixed amounts of discrete tokens, which must be modified before they even reach the training routine. Even multimodal models take in discrete tokens.

          Information is lost when recording reality to video and even more is lost when converting that video into tokens.

          And LLMs only learn in fixed steps. We take in information while we generate whatever it is we generate (movement, sense of self, understanding of our surroundings and place in those surroundings, the next sentence, etc.)

          Talking specifically about the most popular transformer models.

    • ok_dad 195 days ago
      It has wheels like a truck, a seat for the driver, a steering wheel, and two pedals, but a go-kart is much different from a truck.

      Similarly, a "neural network" for an LLM is much different from the human brain and nervous system, plus all of the other systems that work alongside and affect it.

      The danger is not that the AI is elevated to the status of a human by the "they're so similar" reasoning; the danger is that the average human will be degraded to AI-level status and not thought of as anything but a tool.

    • lproven 195 days ago
      > The argument here seems to be AI can’t become a mind as it does not experience.

      Er, no, I don't think that is the argument.

      • Vecr 195 days ago
        You apparently have a better sense of what the argument is than I do [0], would you explain?

        [0]: https://news.ycombinator.com/item?id=42484852

        • lproven 195 days ago
          I think the primary points that I took away were:

          * LLMs provide a mirage: there is really nothing there at all, but they can sometimes look like they are exhibiting intelligence, and that's because they are repeating statistically-remixed intelligent input.

          * However, the tech-bro culture that has fermented in Silicon Valley has resulted in a large bunch of people who are irrational, have no concern for logic or truth, but who feed on each other's groupthink, and from that culture, a religious-like mindset has emerged -- strongly reflected in some of the comments here, for what it's worth -- that is basically a sort of messianic cult. It preaches:

          - greed is good, etc. (Wall St, Gordon Gekko, etc.)

          - disrupt, automate humans out of the loop, etc.: these are good things, desirable goals in and of themselves, the ends justify the means, etc.

          - truth is negotiable and not really important; cf. "the reality based community" (https://archive.ph/pvkxE etc.)

          - thus they honestly don't know and can't care if LLMs are smart, because LLMs manifest the dogma: the machine will evolve and replace us, etc.

          Summary: it's an economics-driven religious movement now, with True Believers, and mere facts do not matter to them, and what to sane people would be horrifically disastrous outcomes are to these guys desirable outcomes, prices worth paying on the path to the Singularity.
          • Vecr 194 days ago
            Those are positions, but you need arguments to back up your positions. I know this is an advertisement, but couldn't she put in at least a few so we can sample her reasoning quality?
            • lproven 194 days ago
              Um.

              Well, it's not my article, so I have no particular position on this, but it seems to me that her assessment largely agreed with my own, so perhaps I am merely parroting her views through mine.

              To me, the article seemed to express her position pretty well. YMMV. But maybe that's because I agree with it, and you presumably do not.

              • Vecr 194 days ago
                It's just woolly. Don't philosophers talk about causal models and mental states and simulations and China brains and aspects of externalism instead of just saying "I think AI researchers are wrong, I think it's like this instead"?
                • lproven 194 days ago
                  I honestly don't know.

                  It seems to me that the field lacks a solid definition of what consciousness is. Merely defining it seems to be the core of "the hard problem".

                  https://iep.utm.edu/hard-problem-of-conciousness/

                  If the field accepts that it can't describe or define what consciousness is then any competent practitioner in that field will go out of their way to avoid saying that an entity, or class of entities -- such as LLM bots -- do not possess it.

                  To do anything else would be to lay themselves open to attack. It would be a career-threatening move.

                  Not being able to say "this type of software is not conscious" makes it necessary to beat around the bush somewhat in trying to say what amounts to "this type of software cannot think".

                  I don't know what you perceive as "woolly" here, but it could be due to that.

                  • Vecr 194 days ago
                    The person who defined the Hard Problem researches LLMs and is a lot less woolly about it.

                    Being woolly isn't the best that can be done if others are making actual arguments, the kind that aren't in the article.

                    • lproven 194 days ago
                      David Chalmers?

                      Can you give some specific examples of what you consider wooliness in the article?

                      • Vecr 193 days ago
                        Yes, David Chalmers.

                        For the article itself, phrases like "a naïve and toxic view" and "a debased version of what we are" are statements that need to be strongly backed up.

                        "For me, thinking is a specific and rather unique set of experiences we have." yes, okay, but what experiences?

                        What do you mean by "concepts"?

                        She talks about behaviorist reactions, but fails to back that up at all. Why does she think e.g. mechanistic interp. is behaviorist?

                        "there’s nothing on the other side participating in this communication." I think this is currently correct, for some definition of "nothing", but give an argument.

                        "new moral claims in the world" give a good physics-based account of this process in humans.

                        But this efficiency, Vallor continues, “is never defined with any reference to any higher value, which always slays me. Because I could be the most efficient at burning down every house on the planet, and no one would say, ‘Yay Shannon, you are the most efficient pyromaniac we have ever seen! Good on you!’” -- ever heard of E/ACC?

                        "Vallor tells me she once tried to explain to an AGI leader that there’s no mathematical solution to the problem of justice." -- not currently, but this is peak woolly. Maybe she doesn't know it, but a mathematical solution would need to contain incredibly horrible things. If you stay away from the math, you can avoid thinking about that sort of thing.

                        Mainly though, it's what the article does not contain. That's tricky to enumerate. I can try if you want.

                        • lproven 191 days ago
                          I think I'm out.

                          I do not find the argument that the Hard Problem is a problem convincing. I don't think it's hard or even really a problem; it's people arguing about Peano derivations when discussing how many angels are dancing on the heads of how many pins.

                          It's irrelevant and it's not just missing the point, it's actively spreading smoke clouds around the area of the point so that nobody can see it any more.

                          For me, Douglas Adams nailed philosophy:

                          « It is often said that a disproportionate obsession with purely academic or abstract matters indicates a retreat from the problems of real life. However, most of the people engaged in such matters say that this attitude is based on three things: ignorance, stupidity, and nothing else. Philosophers, for example, argue that they are very much concerned with the problems posed by real life. Like, for instance, “what do we mean by real?”, and “how can we reach an empirical definition of life?”, and so on. »

                          I recognise that some people find these issues significant and important. I do not. If that's what interests you, I have nothing more to contribute.

                          • Vecr 191 days ago
                            I'm not saying I disagree. If a philosopher rejects the Hard Problem, they need to come out and say it, especially if they are disagreeing with the computer scientists and Turing.

                            That's not the most usual position.

  • jeisc 194 days ago
    The first things that we humans made were weapons, and since then everything we make is considered first for its potential value as a defensive/offensive weapon. Please - AI will never experience pleasure or pain, so it has no motivation for propagation or domination; it will always only magnify the human who pushed the Enter button on the prompt. The ultimate prompt: "Find a way to eliminate human suffering without eliminating humans?"
  • scotty79 195 days ago
    It's weird how I drifted away from this article after only a few paragraphs, as if it were AI slop.
    • tim333 195 days ago
      Philosophers and AI slop have quite a lot in common: a lot of sticking good-sounding words together without really understanding how the underlying thing works - computers, in this case. Really, this:

      >They are, however, very good at pretending to reason. “We can ask [them], ‘How did you come to that conclusion?’ and [they] just bullshit a whole chain of thought that, if you press on it, will collapse into nonsense very quickly.

      Could apply to both. You can see a contrast with the thinking of Turing, a mathematician:

      >Her view collides with the 1950 paper by British mathematician and computer pioneer Alan Turing, “Computing machinery and Intelligence,” often regarded as the conceptual foundation of AI. Turing asked the question: “Can machines think?”—only to replace it with what he considered to be a better question, which was whether we might develop machines that could give responses to questions we’d be unable to distinguish from those of humans.

      You can see how a mathematician trained in rigor and proof recognises that "can they think?" turns into waffle about how you define "think", and replaces it with a less subjective test.

      • scotty79 194 days ago
        Maybe philosophy should be excluded from training materials?
    • karczex 195 days ago
      After a while I started to think this article was AI-generated gibberish. On deeper thought, I came to the conclusion that the appearance of LLMs has made the internet completely unreliable as a source of information.
      • rhaps0dy 195 days ago
        Maybe stuff like this was already gibberish before LLMs.
        • TeMPOraL 195 days ago
          It was; people like to blame slop on AI as if it were new. It wasn't. Before the current AI hysteria, we just called it content marketing.

          So, if this text reads like slop, consider that it may just be personal promotion. I mean, an "AI ethicist" with a book, denouncing experts for saying reasonable and arguably obvious things, because they violate some nebulous and incoherent "humanistic"[0] values? Nah, I think someone wants to sell us some more books.

          --

          [0] - Scare quotes because I actually like humanistic stuff; I just don't like stuff that doesn't make sense.

      • Lerc 195 days ago
        The internet as a source of information was always unreliable. LLMs just made that more apparent.

        It is like how people consider Wikipedia unreliable because anyone can edit it, even though multiple studies have shown that Wikipedia tends to be more accurate than traditional sources such as print encyclopedias.

        This is not people being wary of unreliable material, but being wary when they can perceive the unreliability clearly.

        I don't know if evaluating reliability by the lack of awareness of unreliability is a recognised fallacy, but I can't see any reason why it wouldn't be.

      • scotty79 194 days ago
        I think it mostly just made you aware of unreliability.
  • kaielvin 195 days ago
    Agentic AI is starting to create original content, building on existing content (from humanity) and on its own senses (it now has hearing and sight). This content is flooding the internet, so any new knowledge being acquired now comes from humanity+AI, if not purely from AI (the likes of AlphaZero learn on their own, without human input). Maybe AI is a mirror, but it looks into it and sees itself.
    • deadbabe 195 days ago
      Sucks that generations of people going forward will never know a pure human internet.

      It seems like the internet of the 90s to 2010s was forever a special moment in history. Destined to be short lived.

      • PeterStuer 195 days ago
        You should have experienced the internet from 85 to 95, before commerce and the masses arrived. Now that was truly different.
    • visarga 195 days ago
      > any new knowledge being acquired now comes from humanity+AI

      AI is an experience flywheel: it serves many people and learns from many people. It elicits our tacit experience and captures our reactions as it works through tasks. I see this as the primary way experience will spread in the future (distributed, population-generated tasks -> centralized experience in the LLM -> distributed contextual application). It's like auto-open-sourcing our problem-solving as we solve problems.

    • grey-area 195 days ago
      It really doesn’t see itself, generative AI is not aware.
      • Lerc 195 days ago
        Why? Do you have an accurate way to determine if something is aware that could be applied to all possible generative models?

        I'm not asserting that they are aware, just asking how you can tell that they are not?

        • gls2ro 195 days ago
          Usually the one making a claim has the burden of proof.

          So when someone says AI is aware, that someone should demonstrate with arguments that it is true and convince the one saying that it is not the case.

      • visarga 195 days ago
        It really does see its past tokens. Attention makes that a core feature: it can introspect on its own past thoughts.

        And just as humans have to speak one word at a time (serially), so does the model: distributed activity of neurons funneled through the centralizing bottleneck of serial token prediction. This parallel-to-serial process is the key; I think it's where consciousness is.
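
        (A toy sketch of the attention property I mean - causal masking so each position can only blend information from itself and earlier positions; no learned projections, not a real transformer layer:)

          import numpy as np

          def causal_self_attention(x):
              # x: (sequence_length, dim) token vectors. Each position gets a weighted
              # blend of itself and earlier positions only - "seeing its past tokens".
              t, d = x.shape
              scores = x @ x.T / np.sqrt(d)                                # pairwise similarities
              scores[np.triu(np.ones((t, t), dtype=bool), k=1)] = -np.inf  # hide the future
              weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
              weights /= weights.sum(axis=-1, keepdims=True)               # softmax over the visible past
              return weights @ x

          print(causal_self_attention(np.random.randn(5, 8)).shape)  # (5, 8)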

        • grey-area 191 days ago
          For certain values of introspect.
    • benob 195 days ago
      I can't stop thinking this is how we will save humanity from the internet.

      Who needs to use the internet when you can have an agent do it for you? And once it is filled with agents, what will be the point of using it?

      • deadbabe 195 days ago
        This will be our Tower of Babel moment.
  • silisili 195 days ago
    > we understand language in much the same way as these large language models

    Yeah, gonna need proof on that one.

    First, LLM slop is uncannily easy to pick out in comments vs human thought.

    Second, there's no prompt that you can give a human that will generate an absolutely nonsensical response or a cancellation of the request.

    If anything, it feels like it doesn't actually understand language at all, and just craps out what it thinks looks like a language. Which is exactly what it does, in fact, sometimes to fanfare.

    • TeMPOraL 195 days ago
      Hey, it's time you upgraded from GPT-3.5 :).

      > First, LLM slop is uncannily easy to pick out in comments vs human thought.

      So you think. Evidence says otherwise.

      > Second, there's no prompt that you can give a human that will generate an absolutely nonsensical response or a cancellation of the request.

      Never been to a debate club? Or high school? Or talked with kids? 4 year olds don't even need a prompt, they'll ramble on and on until they get tired; if you listen in, you may even notice they seem to have a context window that's about 30 seconds worth of tokens wide :).

      > it doesn't actually understand language at all, and just craps out what it thinks looks like a language.

      I.e. much like every human on the planet. Or are you one of those aliens who actually learn languages by memorizing grammatical rules and learning to apply them? No one else does that.

      • disqard 195 days ago
        > 4 year olds don't even need a prompt, they'll ramble on and on until they get tired; if you listen in, you may even notice they seem to have a context window that's about 30 seconds worth of tokens wide :)

        As the parent of a 4 year old, this is spot-on, especially the 30-second context window. My kiddo will often embark upon a sentence with no idea of how to choose anything beyond the next few tokens, and then continue reaching for new stuff, merely to keep the sentence going.

        I say this as someone with a deep respect for Humans. However, I also think most of us tend to operate in "default mode" (in the sense that David Foster Wallace described in his "What is Water?" commencement speech).

        Overall, LLMs are a stark reminder to us that without careful and intentional living, we are in danger of being more like LLMs than we'd like.

        • TeMPOraL 195 days ago
          > My kiddo will often embark upon a sentence with no idea of how to choose anything beyond the next few tokens, and then continue reaching for new stuff, merely to keep the sentence going.

          Exactly right. Now, the "context window" observation hit me when I started to think not where the story is going, but where it's already been, and suddenly noticed that the repetitiveness in my daughter's rambling is meaningful.

          I.e. she'd go on and on saying things like, "and my brother turtle has this, and his brother hedgehog, hedgehog the brother of my brother the turtle wore an apple, and the brother of the turtle which is my brother, turtle is my brother and his brother hedgehog did this and that, etc." - and I realized that this is her way of carrying context forward; anything that wasn't repeated in the past ~30 seconds would never appear again in the story, or would in a changed form ("the hedgehog, the sister of giraffe Betty, ...").

          Over time, I could see how she was learning to be more economical with limited working memory, and faster at accessing the long-term memory (initially, recalling some detail would take a lot of visible effort, and take so long it would wipe half the context, leading to a huge shift in the story) - and I can't help but think that having to push hard against the limits of a small context window is important in development of language and intelligence in general. Figuring out different associations, faster associations, more indirect associations.

          I wonder what the internal drive for this is, too - what made her improve her thinking to produce more interesting stories within the same memory limits, instead of getting forever stuck sounding like a Markov chain trained on the Book of Chronicles[0]. Some positive response to novelty, perhaps?

          --

          [0] - https://www.biblegateway.com/passage/?search=1+Chronicles+1&...

          • ben_w 195 days ago
            > Over time, I could see how she was learning to be more economical with limited working memory, and faster at accessing the long-term memory (initially, recalling some detail would take a lot of visible effort, and take so long it would wipe half the context, leading to a huge shift in the story) - and I can't help but think that having to push hard against the limits of a small context window is important in development of language and intelligence in general. Figuring out different associations, faster associations, more indirect associations.

            Interesting…

            I've wondered previously if the reason for childhood amnesia is that we need to learn how to form memories. I didn't consider the obvious implication of that idea, that if we learn how to form memories, we probably also need to learn how to access those memories.

    • andrewchambers 195 days ago
      Using the term 'slop' to deride it for its lack of originality is pretty funny to me.
  • Reform 195 days ago
    [dead]
  • tzury 195 days ago
    AI is the next phase in our evolution, a path chosen by natural selection.

    This is my opinion, my view, and how I have set up my life: to embrace it and immerse myself in it.

    I actually wrote a piece about it a day ago.

    https://blog.tarab.ai/p/evolution-mi-and-the-forgotten-human

    Sorry for the “self promotion”, but it’s a direct relation to the topic.

    • pfannkuchen 195 days ago
      What selective pressure is acting on humans to produce AI? Humans already dominate basically every environmental niche on Earth. The only competition is between human groups, and there has been no selection based on AI capabilities thus far.

      Perhaps there is some survivorship impact from AI on different species on different planets, but from our vantage point we have no idea whether it matters at all and we wouldn’t for a long time.

      I feel like you may not understand natural selection all that well. Please read more Dawkins.

      • esperent 195 days ago
        > What selective pressure is acting on humans to produce AI?

        Capitalism, for one. Also social pressure on scientists to succeed at their chosen field. Throw in a strong dose of international politics (we better get it before the other guys), and there you have a strong set of selective pressures driving humans to create AI.

      • tzury 195 days ago
        I don’t think we humans choose. I am a determinist and see all collective and individual choices as forces of nature.

        Will indeed read more Dawkins. It’s always a pleasure.

    • Vecr 195 days ago
      What path of reasoning led you to think natural selection is good? Because it's natural?
      • tzury 195 days ago
        Not sure what you mean by “good”; it’s just going to happen.

        We could have developed like trees and plants or mushrooms, with no brain and less pain, perhaps, and kept our DNA moving forward. But it happened to turn out this way.

      • bdhcuidbebe 195 days ago
        What the
        • Vecr 195 days ago
          Future natural selection that is.
    • kmnc 195 days ago
      Me taking a shit tomorrow is the next phase in our evolution, a path chosen by natural selection.
    • magic_hamster 195 days ago
      [flagged]