Two Concepts of Intelligence

(cacm.acm.org)

57 points | 5 days ago

19 comments

  • barishnamazov 8 hours ago
    The turkey is fed by the farmer every morning at 9 AM.

    Day 1: Fed. (Inductive confidence rises)

    Day 100: Fed. (Inductive confidence is near 100%)

    Day 250: The farmer comes at 9 AM... and cuts its throat. Happy Thanksgiving.

    The turkey was an LLM. It predicted the future based entirely on the distribution of the past. It had no "understanding" of the farmer's purpose.

    This is why Meyer's "American/Inductive" view is dangerous for critical software. An LLM coding agent is the inductive turkey. It writes perfect code for 1000 days because the tasks match the training data. On day 1001, you ask for something slightly out of distribution, and it confidently deletes your production database because it added a piece of code that cleans out your tables.

    Humans are inductive machines, for the most part, too. The difference is that, fortunately, fine-tuning them is extremely easy.

    • funkyfiddler69 1 hour ago
      > The difference is that, fortunately, fine-tuning them is extremely easy.

      Because of millions of years of generational iteration, by which I mean recursive teaching, learning and observing, the outcomes of which all involved generations perceive, assimilate and adapt to in some (multi-) culture- and sub-culture-driven way that is semi-objectively intertwined with local needs, struggles, personal desires, and supply and demand. All of that creates a marvelous self-correcting, time-travelling OODA loop. [1]

      Machines are being fine-tuned by 2 1/2 generations abiding by exactly one culture.

      Give it time, boy! (effort put into/in over time)

      [1] https://en.wikipedia.org/wiki/OODA_loop

    • aleph_minus_one 5 hours ago
      > The difference is that, fortunately, fine-tuning them is extremely easy.

      If this were true, educating people quickly for most jobs would be an easy, solved problem. Yet in March 2018, Y Combinator put exactly this on its list of Requests for Startups, which is strong evidence that it is a rather hard, unsolved problem:

      > https://web.archive.org/web/20200220224549/https://www.ycomb...

      • graemep 2 hours ago
        The problem with education is that existing ways of doing things are very strongly entrenched.

        At the school level: teachers are trained, buildings are built, parents rely on kids being at school so they can go out to work....

        At higher levels and in training it might be easier to change things, but IMO it is school level education that is the most important for most people and the one that can be improved the most (and the request for startups reflects that).

        I can think of lots of ways things can be done better. I have done quite a lot of them as a home educating parent. As far as I can see my government (in the UK) is determined to do the exact opposite of the direction I think we should go in.

        • Nevermark 1 hour ago
          > The problem with education is that existing ways of doing things are very strongly entrenched.

          Which is still a problem of educating humans. Just moved up the chain one step. Educators are often very hard to educate.

          Even mathematics isn't immune to this. Calculus is pervasively taught with a prematurely truncated algebra of differentials, which means that for second-order derivatives and beyond the "fraction" notation does not actually describe ratios, even though this does not need to be the case.
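
          For the record, here is a sketch of the point as I understand it (my own rendering, not a quote from any textbook): treating differentials algebraically, dy = f'(x) dx, so

              d^2y = d(f'(x) dx) = f''(x) (dx)^2 + f'(x) d^2x

          and therefore

              f''(x) = d^2y/(dx)^2 - (dy/dx) * d^2x/(dx)^2

          Only when x is the independent variable (so d^2x = 0) does this collapse to the familiar d^2y/dx^2; the usual notation silently drops the correction term, which is exactly why the "fraction" stops behaving like one.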

          But when will textbooks remove this unnecessary and complicating disconnect between algebra and calculus? There is no significant movement to do so.

          Educators and textbook writers are as difficult to educate as anyone else.

        • sdenton4 1 hour ago
          The one true result of education research is that one-on-one education is vastly more effective than classroom education.

          While I have no doubt you had good results home schooling, you will almost certainly run into difficulty scaling your results.

          • graemep 57 minutes ago
            Not as much as you might think, for two reasons:

            1. Kids need far fewer hours of one-on-one teaching than of classroom teaching.

            2. There is a much greater proportion of self-teaching, especially as kids get older.

            I estimate the adult time required per child is similar to that of schools with small class sizes, and it requires somewhat less skilled adults.

      • armchairhacker 5 hours ago
        Easier than teaching an LLM, at least, where learning something new means retraining rather than mere inference.

        “‘r’s in strawberry” and other LLM tricks remind me of brain teasers like “finished files” (https://sharpbrains.com/blog/2006/09/10/brain-exercise-brain...). Show an average human this brain teaser and they’ll probably fall for it the first time.

        But never a second time; the human learned from one instance, effectively forever, without even trying. ChatGPT had to be retrained not to fall for the "r"s trick, which cost much more than one prompt, and (unless OpenAI are hiding a breakthrough, or I really don't understand modern LLMs) required much more than one training iteration.

        That seems to be the one thing that prevents LLMs from mimicking humans, more noticeable and harder to work around than anything else. An LLM can beat a Turing test where it only has to generate a few sentences. No LLM could imitate a human conversation carried on over a few years (probably not even a few days), because it would start forgetting far more than a human would.

    • usgroup 7 hours ago
      This issue happens at the edge of every induction. These two rules support the data seen so far equally well, yet disagree about what comes next:

      data so far: T T T T T T (day 7 turns out to be F)

      rule1: T for all i

      rule2: T for i < 7, else F

      • p-e-w 7 hours ago
        That’s where Bayesian reasoning comes in: prior assumptions (e.g., that engineered reality is strongly biased towards simple patterns) make one of these hypotheses much more likely than the other.
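        As a toy illustration (the numbers are invented for the sake of the example): give rule1 a simplicity-based prior of 0.9 and rule2 a prior of 0.1. Both rules assign probability 1 to the six Ts observed so far, so the data cannot separate them and the posterior stays at the prior, yet the prediction for day 7 now leans heavily towards T:

            # Hedged sketch with made-up priors; both rules fit the observed T,T,T,T,T,T equally.
            prior      = {"rule1": 0.9, "rule2": 0.1}   # assumed simplicity bias
            likelihood = {"rule1": 1.0, "rule2": 1.0}   # P(six Ts | rule) is 1 under both rules

            evidence  = sum(prior[r] * likelihood[r] for r in prior)
            posterior = {r: prior[r] * likelihood[r] / evidence for r in prior}

            # Predictive probability that day 7 is T: rule1 says T, rule2 says F.
            p_T_day7 = posterior["rule1"] * 1.0 + posterior["rule2"] * 0.0
            print(posterior, p_T_day7)   # {'rule1': 0.9, 'rule2': 0.1} 0.9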
        • usgroup 7 hours ago
          yes, if you decide one of them is much more likely without reference to the data, then it will be much more likely :)
          • wasabi991011 2 hours ago
            Deciding that they are both equally likely is also deciding on a prior.

            Yes, "equally likely" is the minimal-information prior, which makes it best suited when you have no additional information. But it's not unlikely that you have some sort of context you can use to choose a better prior.

            • usgroup 1 hour ago
              Well that would be extra information. Wherever you find the edge of your information, you will find the "problem of induction" as presented above.
    • mirekrusin 6 hours ago
      AGI is when the turkey cuts the farmer's throat on day 249, gets on the farmer's internet, makes money trading, and retires on an island.
    • naveen99 7 hours ago
      LLM’s seem to know about farmers and turkeys though.
    • myth_drannon 6 hours ago
      "fine-tuning them is extremely easy." Criminal courts, jails, mental asylums beg to disagree.
      • marci 6 hours ago
        "finetune"

        Not

        "Train from scratch"

    • p-e-w 7 hours ago
      > The Turkey was an LLM. It predicted the future based entirely on the distribution of the past. It had no "understanding" of the purpose of the farmer.

      But we already know that LLMs can do much better than that. See the famous “grokking” paper[1], which demonstrates that with sufficient training, a transformer can learn a deep generalization of its training data that isn’t just a probabilistic interpolation or extrapolation from previous inputs.

      Many of the supposed “fundamental limitations” of LLMs have already been disproven in research. And this is a standard transformer architecture; it doesn’t even require any theoretical innovation.

      [1] https://arxiv.org/abs/2301.02679

      • barishnamazov 6 hours ago
        I'm a believer that LLMs will keep getting better. But even today (which might or might not be "sufficient" training) they can easily run `rm -rf ~`.

        Not that humans can't make these mistakes (in fact, I have nuked my home directory myself before), but I don't think it's a specific problem some guardrails can solve currently. I'm looking for innovations (either model-wise or engineering-wise) that'd do better than letting an agent run code until a goal is seemingly achieved.

      • encyclopedism 6 hours ago
        LLMs have surpassed being Turing machines? Turing machines now think?

        LLMs are a known quantity in that they are an algorithm! Humans are not. PLEASE, at the very least, grant that the jury is STILL out on what humans actually are in terms of their intelligence; that is, after all, what neuroscience is still figuring out.

    • glemion43 7 hours ago
      You clearly underestimate the quality of people I have seen and worked with. And yes guard rails can be added easily.

      Security is my only concern, and for that we have a team doing only this, but that's also just a question of time.

      Whatever LLMs can do today doesn't matter. What matters is how fast they progress, and we will see whether in five years we are still using LLMs, or AGI, or some kind of world models.

      • barishnamazov 6 hours ago
        > You clearly underestimate the quality of people I have seen and worked with.

        I'm not sure what you're referring to. I didn't say anything about the capabilities of people. If anything, I defend people :-)

        > And yes guard rails can be added easily.

        Do you mean models can be prevented from doing dumb things? I'm not too sure about that, unless a strict software architecture is engineered by humans and LLMs simply write code and implement features within it. Not everything is web development, where we can simply lock down filesystems and prod database changes. Software is very complex across the industry.

      • bdbdbdb 7 hours ago
        > You clearly underestimate the quality of people I have seen and worked with

        "Humans aren't perfect"

        This argument always comes up. The existence of stupid / careless / illiterate people in the workplace doesn't excuse spending trillions on computer systems which use more energy than entire countries and are still unreliable.

  • jimbokun 1 hour ago
    Part of the confusion stems from implicitly pulling the concept of consciousness into the definition of intelligence.

    There is an interiority to our thought life. At least I know there is for myself, because I know what it's like to experience the world as me. I assume that other humans have this same kind of interiority, because they are humans like me. And then I extend it to animals to a greater or lesser extent, based on how similarly to humans they behave and sense the world around them.

    But if there is an "interiority" for LLMs, it must be very, very different from ours. The reasoning of an LLM springs into existence for every prompt, then goes away entirely for the next prompt, starting over again from scratch.

    Yes, this is an oversimplification. The LLM has been trained with all kinds of knowledge about the world that persists between invocations. But if the floating-point numbers are just sitting there on a disk or other storage medium, it doesn't seem possible that it could be experiencing anything until called into use again.

    And then there's the strangeness of the LLM having a completely transformed personality and biases based solely on a few sentences in a prompt: "You are a character in the Lord of the Rings..."

    I think this is the sense in which many people argue that an LLM is not "intelligent". It's really an argument that an LLM does not experience the world anything like the way a human being does.

  • ghgr 7 hours ago
    I agree with Dijkstra on this one: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
    • tucnak 7 hours ago
      I really wish all these LessWrong, "what is the meaning of intelligence" types cared enough to study Wittgenstein a bit rather than hear themselves talk; it would save us all a lot of time.
      • encyclopedism 6 hours ago
        I fully agree with your sentiments. People really need to study a little!
  • notarobot123 7 hours ago
    Memory foam doesn't really "remember" the shape of my rear end but we all understand the language games at play when we use that term.

    The problem with the AI discourse is that the language games are all mixed up and confused. We're not just talking about capability, we're talking about significance too.

  • aklein 5 hours ago
    This article highlights how experts disagree on the meaning of (non-human) intelligence, but it dismisses the core problem a bit too quickly, imo:

    “LLMs only predict what a human would say, rather than predicting the actual consequences of an action or engaging with the real world. This is the core deficiency: intelligence requires not just mimicking patterns, but acting, observing real outcomes, and adjusting behavior based on those outcomes — a cycle Sutton sees as central to reinforcement learning.” [1]

    An LLM itself is a form of crystallized intelligence, but it does not learn and adapt without a human driver, and that to me is a key component of intelligent behavior.

    [1] https://medium.com/@sulbha.jindal/richard-suttons-challenge-...

  • torginus 6 hours ago
    This reads like your standard-issue prestige-publication essay, in that it's an exercise in name-dropping as many famous people and places the author was involved with as possible.

    The whole purpose is not to inform or provoke thought, but for the whole thing to exude prestige and exclusivity, like an article you'd find in a magazine in a high-end private clinic's waiting room.

    • jimbokun 1 hour ago
      It's summarizing decades of research and argument about the nature of intelligence and computing machines for a lay audience, which is a laudable endeavor.
    • simianwords 6 hours ago
      This is making me rethink a lot of things I read
  • bsenftner 7 hours ago
    Intelligence as described is not the entire "requirement" for intelligence. There are probably more layers here, but I see "intelligence" as the second layer; beneath that layer is comprehension, which is the ability to discriminate between similar things, even things trying to deceive you. And at layer zero, the giant mechanism driving this layered form of intelligence in living things is the predator/prey dynamic that dictates whether you stay alive or become food for something else staying alive.

    "Intelligence in AI" lacks any such existential dynamic; our LLMs are literally linguistic mirrors of human literature and activity tracks. They are not intelligent, but for the most part we can imagine they are, as long as we maintain sharp critical analysis, because they are idiot savants in the truest sense.

  • sebastianmestre 7 hours ago
    This is kind of a bait-and-switch, no?

    The author defines American-style intelligence as "the ability to adapt to new situations, and learn from experience".

    Then the author argues that the current type of machine-learning-driven AI is American-style intelligent because it is inductive, which is not what was supposedly (?) being argued for.

    Of course, current AI/ML models cannot adapt to new situations and learn from experience outside the scope of their context window without a retraining or fine-tuning step.

    • jimbokun 1 hour ago
      The retraining or fine-tuning step is the added experience.
    • lilgreenland 1 hour ago
      I don't see a reason to separate training when we evaluate AI intelligence.
  • anonymous908213 7 hours ago
    Two concepts of intelligence, and neither has remotely anything to do with real intelligence. Academics sure like to play with words. I suppose this is how they justify their own existence: in the absence of being intelligent enough to contribute anything of value, they instead engage in wordplay that obfuscates the meaning of words to the point that nobody understands what the hell they're talking about, and the reader mistakes that lack of understanding for the academics being more intelligent than the reader.

    Intelligence, in the real world, is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence. Probabilistic prediction is inherently incompatible with deterministic deduction. We're years into being told AGI is here (for whatever squirmy value of AGI the hype huckster wants to shill), and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call. How is it that we can go about ignoring reality for so long?

    • anonymous908213 7 hours ago
      Addendum:

      > With recent advances in AI, it becomes ever harder for proponents of intelligence-as-understanding to continue asserting that those tools have no clue and “just” perform statistical next-token prediction.

      ??????? No, that is still exactly what they do. The article then lists a bunch of examples in which this is trivially exactly what is happening.

      > “The cat chased the . . .” (multiple connections are plausible, so how is that not understanding probability?)

      It doesn't need to "understand" probability. "The cat chased the mouse" shows up in the distribution 10 times. "The cat chased the bird" shows up in the distribution 5 times. Absent any other context, with the simplest possible model, it now has a probability of 2/3 for the mouse and 1/3 for the bird. You can make the probability calculations as complex as you want, but how could you possibly trot this out as an example that an LLM completing this sentence isn't a matter of trivial statistical prediction? Academia needs an asteroid, holy hell.
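
      To make that toy model concrete (the counts are hypothetical, and a real LLM's distribution is learned and context-dependent rather than a raw tally, but the arithmetic is the same):

          # Toy next-token model built from corpus counts (illustrative only).
          from collections import Counter

          continuations = Counter({"mouse": 10, "bird": 5})   # hypothetical counts for "The cat chased the ..."
          total = sum(continuations.values())
          probs = {word: n / total for word, n in continuations.items()}
          print(probs)   # {'mouse': 0.666..., 'bird': 0.333...}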

      [I originally edited this into my post, but two people had replied by then, so I've split it off into its own comment.]

      • n4r9 7 hours ago
        One question is how do you know that you (or humans in general) aren't also just applying statistical language rules, but are convincing yourself of some underlying narrative involving logical rules? I don't know the answer to this.
        • anonymous908213 6 hours ago
          We engage in many exercises in deterministic logic. Humans invented entire symbolic systems to describe mathematics without any prior art in a dataset. We apply these exercises in deterministic logic to reality, and reality confirms that our logical exercises are correct to within extremely small tolerances, allowing us to do mind-boggling things like trips to the moon, or engineering billions of transistors organized on a nanometer scale and making them mimic the appearance of human language by executing really cool math really quickly. None of this could have been achieved from scratch by probabilistic behaviour modelled on a purely statistical analysis of past information, which is immediately evident from the fact that, as mentioned, an LLM cannot do basic arithmetic, or any other deterministic logical exercise in which the answer cannot be predicted from already being in the training distribution, while we can. People will point to humans sometimes making mistakes, but that is because we take mental shortcuts to save energy. If you put a gun to our head and say "if you get this basic arithmetic problem wrong, you will die" we will reason long enough to get it right. People try prompting that with LLMs, and they still can't do it, funnily enough.
    • dcre 6 hours ago
      I just don’t think the question is about determinism and probability at all. When we think, our thoughts are influenced by any number of extra-logical factors, factors that operate on a level of abstraction totally alien to the logical content of thought. Things like chemical reactions in our brains or whether the sun is out or whether some sound distracts us or a smell reminds us of some memory. Whether these factors are deterministic or probabilistic is irrelevant — if anything the effect of these factors on our thinking is deterministic. What matters is that the mechanical process of producing thought is clearly influenced (perhaps entirely!) by non-rational factors. To me this means that any characterization of the essence of thinking that relies too heavily on its logical structure cannot be telling the whole story.
    • jimbokun 1 hour ago
      Inductive logic is not synonymous with intelligence.
    • bdbdbdb 7 hours ago
      I keep coming back to this. The most recent version of ChatGPT I tried was able to tell me how many letter 'r's were in a very long string of characters only by writing and executing a Python script to do this. Some people say this is impressive, but any 5-year-old could count the letters without knowing any Python.
      • williamcotton 7 hours ago
        How is counting not a technology?

        The calculations are internal but they happen due to the orchestration of specific parts of the brain. That is to ask, why can't we consider our brains to be using their own internal tools?

        I certainly don't think about multiplying two-digit numbers in my head in the same manner as when playing a Dm to a G7 chord that begs to resolve to a C!

      • armchairhacker 6 hours ago
        The 5-year-old counts with an algorithm: they remember the current number (working memory, roughly analogous to context), scan the page, and move their finger to the next letter. They were taught this.

        It's not much different from ChatGPT being trained to write a Python script (a minimal sketch of that counting loop is below).

        A notable difference is that it's much more efficient to teach something new to a 5-year old than fine-tune or retrain an LLM.
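
        To spell out the parallel (a hedged sketch, not ChatGPT's actual output), the generated script is essentially the child's procedure written down:

            # Scan the text and keep a running count: "working memory" plus "move the finger".
            def count_letter(text: str, letter: str) -> int:
                count = 0                      # the running total held in memory
                for ch in text:                # step to the next character
                    if ch == letter:
                        count += 1
                return count

            print(count_letter("strawberry", "r"))   # 3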

        • notahacker 53 minutes ago
          A theory behind LLM intelligence is that the layer structure forms some sort of world model that has a much higher fidelity than simple pattern matching texts. In specific cases, like where the language is a DSL which maps perfectly to a representation of an Othello gameboard, this appears to actually be the case. But basic operations like returning the number of times the letter r appears in 'strawberry' form a useful counterexample: the LLM has ingested many hundreds of books explaining how letters spell out words and how to count (which are pretty simple concepts very easily stored in small amounts of computer memory) and yet its layers apparently couldn't model it from all that input (apparently an issue with being unable to parse a connection between the token 'strawberry' and its constituent letters... not exactly high-level reasoning).

          It appears LLMs got RLHFed into generating suitable Python scripts after the issue was exposed, which is an efficient way of getting better answers, but feels rather like handing the child who is really struggling with their arithmetic a calculator...

    • djoldman 7 hours ago
      Many people would require an intelligent entity to successfully complete tasks with non-deterministic outputs.
    • messe 6 hours ago
      > Probabilistic prediction is inherently incompatible with deterministic deduction

      Prove that humans do it.

    • satisfice 7 hours ago
      Intelligence is not just about reasoning with logic. Computers are already made to do that.

      The key thing is modeling. You must model a situation in a useful way in order to apply logic to it. And then there is intention, which guides the process.

      • anonymous908213 7 hours ago
        Our computer programs execute logic, but cannot reason about it. Reasoning is the ability to dynamically consider constraints we've never seen before and then determine how those constraints would lead to a final conclusion. The rules of mathematics we follow are not programmed into our DNA; we learn them and follow them while our human-programming is actively running. But we can just as easily, at any point, make up new constraints and follow them to new conclusions. What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
        • satisfice 3 hours ago
          Executing logic is deductive reasoning. But, yes, I get it. There are also other layers of reasoning, and other forms. For instance, abductive and inductive inference.
  • svilen_dobrev 7 hours ago
    Intelligence/understanding is when one can postulate/predict/calculate/presume something correctly, from concepts about it, without that thing (or anything similar) ever having been in the training/past (or even ever known).

    Yeah, not all humans do it. It's too energy-expensive; biological efficiency wins.

    As for ML... maybe next time, when someone figures out how to combine the deductive with the inductive, in a zillion small steps, with falsification built in (instead of confronting 100% of one against 100% of the other).

  • torginus 6 hours ago
    I do have to note that the guy writing this is the father of the 'modern' OOP industry (the one with endless books about design patterns, UML, and 'clean code'), something I hope feels like a shameful bit of our profession's history to the current generation of engineers, not something they actively have to engage with.
  • macleginn 8 hours ago
    Marx is fair play, but one of the most prominent cases of understanding everything in advance is undoubtedly Chomsky's theory of innate/universal grammar, which became completely dominant on guess which side of the pond.
  • kgwxd 1 hour ago
    Every single word around actual intelligence has been hijacked by the industry. Everything should be prefixed with "Artificial", like vegan food that mimics other food. Or just use more accurate words. Or make them up if they don't exist.

    Everything I used to come to HN for (learning, curiosity, tech, etc.) has been mostly replaced by an artificial version of the thing. I still get tricked daily by titles I think are referring to a real thing, but that turn out to be about AI.

  • yogthos 7 hours ago
    I'd argue you can have a much more precise definition than that. My definition of intelligence would be a system that has an internal simulation of a particular domain and uses this simulation to guide its actions within that domain. Being able to explain your actions is derived directly from having a model of the environment.

    For example, we all have an internal physics model in our heads that's built up through our continuous interaction with our environment. That acts as our shared context. That's why, if I tell you to bring me a cup of tea, I have a reasonable expectation that you understand what I requested and can execute this action intelligently. You have a conception of a table, of a cup, of tea, and critically our conceptions are similar enough that we can both be reasonably sure we understand each other.

    Incidentally, when humans end up talking about abstract topics, they often run into exactly the same problem as LLMs, where the shared context is missing and we can be talking past each other.

    The key problem with LLMs is that they currently lack this reinforcement loop. The system merely strings tokens together in a statistically likely fashion, but it doesn't really have a model of the domain it's working in to anchor them to.

    In my opinion, stuff like agentic coding or embodiment with robotics moves us towards genuine intelligence. Here we have AI systems that have to interact with the world, and they get feedback on when they do things wrong, so they can adjust their behavior based on that.

  • HarHarVeryFunny 4 hours ago
    I don't see this article doing anything to help define intelligence in a useful way.

    1) Defining "intelligence" as ability to "understand" isn't actually defining it at all, unless you have a rigorous definition of what it means to understand. It's basically just punting the definition from one loosely defined concept to another.

    2) The word "intelligence", in common usage, is only loosely defined, and heavily overloaded, and you'll get 10 different definitions if you ask 10 different people. It's too late to change this, since the meaning of words comes from how they are used. If you want to know the various ways the word is used then look in a dictionary. These are literally the meanings of the word. If you want something more precise then you are not looking for the meaning of the word, but rather trying to redefine it.

    3) When we talk about "intelligence" with regards to AI, or AGI, it seems that what people really want to do is to define a new word, something like "hard-intelligence", something rigorously defined, that would let us definitively say whether, or to what degree, an "intelligent" system (animal or machine) has this property or not.

    Of course, to be useful, this new word "hard-intelligence" needs to be aligned with what people generally mean by "intelligence", and presumably in the future one of the dictionary senses of "intelligence" will be hard-intelligence.

    I think the most useful definition of this new word "hard-intelligence" is going to be a functional one: a capability (not a mechanism) of a system that can be objectively tested for, even with a black-box system. However, since the definition should also align with that of "intelligence", which historically refers to an animal/human capability, it seems useful to also consider where this animal capability comes from, so that our definition can encompass that in the most fundamental way possible.

    So, with that all said, here's how I would define "hard-intelligence", and why I would define it this way. This post is already getting too long, so I'll keep it brief.

    The motivating animal-based consideration for my definition is evolution: what capability do animals that evolved to possess intelligence (to varying degrees) have that other animals do not, and what survival benefit does it bring that compensates for the huge cost of large brains in animals with advanced intelligence?

    I consider the essence of evolved animal intelligence to be prediction, which means that the animal is not restricted to reacting to the present, but also can plan for the predicted future, which obviously has massive survival benefit - being able to predict where the food and water will be, how the predator is going to behave, etc, etc.

    The mechanics of how functional prediction has evolved in different animals varies, from something like a fly, whose hard-coded instincts help it avoid predicted swats (that looming visual input predicts I'm about to be swatted by the cow's tail, so I had better move), all the way up to species like ourselves, where predictive signals, outcomes, and adaptive behaviors can be learned rather than hard-coded. It is widely accepted that our cortex (and its equivalent in birds) is basically a prediction machine, which has evolved under the selection pressure of developing this super-power of being able to see into the future.

    So, my definition of "hard-intelligence" is degree of ability to use, and learn from, past experience to successfully predict the future.

    That's it.

    There are of course some predictive patterns, and outcomes, that are simple to learn and recognize, and others that are harder, so this is a matter of degree and domain, etc, but at the end of the day it's an objective measure that can be tested for - given the same experiential history to learn from, can different systems correctly predict the continuations of new inputs that follow a similar pattern.

    This definition obviously captures the evolutionary super-power of predicting the future, which is at least one of the things that intelligent animals can do, but my assertion, on which the utility of this definition of "hard-intelligence" is based, is that prediction is in fact the underlying mechanism of everything that we consider as "intelligent" behavior. For example, reasoning and planning is nothing more than predicting the outcomes of a sequence of hypothetical what-ifs.

    tl;dr - "intelligence" is too fuzzy a concept to be useful. We need a new, rigorously defined word to discuss and measure machine (and animal) intelligence. I have suggested a definition.

  • DonHopkins 4 hours ago
    [dead]
  • Lucasjohntee 7 hours ago
    [flagged]
  • dcre 6 hours ago
    I had high hopes for this essay because I’ve tried many times to get people online to articulate what they mean by “it doesn’t really understand, it only appears to understand” — my view is that all these arguments against the possibility of LLMs thinking apply equally well to human beings because we don’t understand the process that produces human thinking either.

    But the essay is a huge letdown. The European vs. American framing obscures more than it illuminates. The two concepts of intelligence are not really analyzed at all — one could come up with interpretations under which they’re perfectly compatible with each other. The dismissal of Marx and Freud, two of the deepest thinkers in history, is embarrassing, saying a lot more about the author than about those thinkers.

    (For anyone who hasn't read much Freud, here's a very short essay that may surprise you with its rigor: https://www.marxists.org/reference/subject/philosophy/works/...)

  • accidentallfact 6 hours ago
    It's the same kind of schism that has led to a lot of hate and mass murder over the last century or so: abstraction/dimensionality reduction vs. concrete logic and statistics.

    Concrete statistician: I can learn the problem in its full complexity, unlike the dumdum below me.

    Abstract thinker: I understand it, because I can reduce its dimensionality to a small number of parameters.

    CS: I can predict this because I have statistics about its past behavior.

    AT: I told you so.

    CS: You couldn't possibly know this, because it has never happened before. You suffer from the hindsight bias.

    AT: But I told you.

    CS: It has never happened, you couldn't possibly have statistics of when such things occur. You were just lucky.

    CS: I'm smart, I can be taught anything

    AT: You are stupid because you need to be taught everything.

    War (or another sort of mass death or other kind of suffering) emerges.

    • dcre 6 hours ago
      Incredibly ironic to make this argument using such an abstract and low-dimensional framework.
    • iammjm 6 hours ago
      What are some concrete examples of wars that you believe emerged due to this schism?
      • accidentallfact 6 hours ago
        It seems that a large majority of conflicts did:

        The Revolutionary War: CS America vs. AT British Empire.

        The French Revolution: CS revolutionaries vs. AT aristocracy.

        The American Civil War: CS North vs. AT South.

        WWII: AT Nazis vs. CS Jews.

        Probably many more.