Superintelligence–10 Years Later

(humanityredefined.com)

47 points | by evilcat1337 3 days ago

9 comments

  • satvikpendem 3 days ago
    I remember reading Bostrom's work in 2014 and raving about it to others while no one really understood what I was so interested in. Well, now everyone is talking about this topic. One of my favorite analogies in the book goes something like this: imagine a worm wriggling in the ground. It has no conception of the god-like beings that inhabit the world, living in cities, having all sorts of goals, doing all sorts of jobs. It literally does not have the brain power to comprehend what is happening.

    Now imagine we are the worm.

    • jhanschoo 3 days ago
      We already have "superintelligences" in the world; Nature, and other humans treated as a collective, are far more powerful than any individual. We manage these risks by not trusting them completely and by restricting their dominance over us. I don't see why we can't adapt to superintelligent machines as long as we don't surrender all decision making to them; the risk is the same old one, where a group possesses overwhelming power that is then used to regiment and oppress a less powerful group. In that respect, possession of AI is far from unique.
    • keiferski 3 days ago
      This seems like a poor metaphor, considering that we can understand what constituent things would make up a superintelligence, even if we don’t understand the whole.

      This discussion centers too much on the definitions of words like "superintelligent" and reminds me a lot of philosophical discussion about omnipotence. Both seem to rely on defining concepts first and then assuming their existence as a consequence.

      • ben_w 3 days ago
        Omnipotence is provably impossible: "Could Jesus microwave a burrito so hot that he himself could not eat it?" etc.

        Super-intelligence, not so much — there are plenty of examples of above-average humans in many areas, no reason to think that putting the top expert in each field into one room is impossible, and no reason to think that this configuration cannot be implemented in software with a sufficiently powerful computer.

        And that's before counting the things machines already beat us at: super-human chess-playing software is easily available, and computers that do arithmetic better than the entire human species could, even if we were all trained to the level of the current world record holder, are cheap enough to be glued to the front of a magazine and given away free. So there's no single person who has a particular advantage at those things.

        What chess does do is give an example: if I were playing a game with Kasparov, I would have no idea which move he might make at any given moment, but despite that I'd still expect him to win.

        With an AI, I don't even necessarily know what "game" it's playing even if I'm the one who wrote the utility function it's trying to maximise.

      • shinycode 3 days ago
        We understand some of it, but who can say we understand the majority of it? We might be at 0.1% of understanding reality without being able to state this. Just as a worm surely « understands » some of it, enough to differentiate and process its surroundings.
      • satvikpendem 3 days ago
        > considering that we can understand what constituent things would make up a superintelligence

        Can we? What constituent things would make up a superintelligence? Who's to say that our answer to that question is what is actually true in the case of a real superintelligence?

        > Both seem to rely more on defining concepts first and then assuming their existence as a consequence.

          Unlike religious philosophers like Anselm of Canterbury and Descartes, with their ontological arguments for the existence of a supreme being merely from imagining it, I don't believe anyone in the study of superintelligence presupposes that superintelligences exist, or even can exist; they only presuppose how one might hypothetically exist.

        • keiferski 3 days ago
          Presumably the AI is in charge of things that humans used to manage and therefore understand. Worms don’t understand anything about even a slice of human society, so I don’t think it’s a great metaphor.
          • satvikpendem 3 days ago
            Why does an AI have to handle human affairs at all? It could exist outside of human goals; that would not make it any less a superintelligence, just as we don't really care about worms.
            • keiferski 3 days ago
              I guess it doesn’t, but at this point what exactly are we speculating about? Because it seems like imaginary sci-fi, dependent on the definition of superintelligence and not on any real world developments.

              It seems much more realistic to me that AI will be running systems that humans used to run, and therefore will understand at some level.

              • satvikpendem 3 days ago
                Well, that is what the book is about: it is a speculative look at what hypothetical superintelligences might look like; it is explicitly not about the real world at all. Remember that the author is a philosopher, not an engineer, and philosophy is all about hypotheticals.
                • keiferski 3 days ago
                  Philosophy is not all about hypotheticals. Philosophy of technology especially is mostly about technologies that already exist and their impact on society. Not speculation.
                  • satvikpendem 3 days ago
                    Some philosophy (of technology, among others) relates to that, not all. It is not necessarily all about concrete impacts either, it depends on the author and their interests.
      • shrimp_emoji 3 days ago
        I always thought about it like being a permanent infant. The world is huge, full of colorful things you don't understand, and it'll be like that forever. But it's also a poor metaphor because adults have something that toddlers don't: fear. As a kid, you're an ignorant and curious blank slate; as an adult, you've established expectations and anxieties, so you'd probably be having a much worse time. :D
    • ganzuul 3 days ago
      Humans use their multiply redundant brain power to align with absurd goals. We are simply hobbled by non-Star Trek culture.
    • john_minsk 3 days ago
      Interesting. Could you share what you are interested in at the moment?
      • satvikpendem 3 days ago
        In intellectual terms, I'm currently interested in the fusion of Asian and Western history, reading James Clavell's Asian Saga now, after watching Shogun recently. David Graeber's books are also on my list once I finish the Saga. I've read Bullshit Jobs and Debt by him but I've heard good things about The Dawn of Everything, particularly how European Enlightenment ideas might have actually been influenced by what they saw from Native Americans.

        In terms of projects I'm working on, I'm traveling currently and it's a pain to track how much money I've spent due to needing to convert foreign currencies, so I'm building a simple app for that.

    • exe34 3 days ago
      I like to imagine bacteria as the compute substrate for an immaterial city of digital inhabitants. Fungi are even cooler, with the hypothetical wood wide web. Maybe we already are the worms!
  • zxcb1 3 days ago
    Norbert Wiener was ahead of his time in recognizing the potential danger of emergent intelligent machines. I believe he was even further ahead in recognizing that the first artificial intelligences had already begun to emerge. He was correct in identifying the corporations and bureaus that he called "machines of flesh and blood" as the first intelligent machines.

    https://en.wikipedia.org/wiki/Possible_Minds

  • throwerofstone 3 days ago
    The author states that AI safety is very important, that many experts think it is very important and that even governments consider it to be very important, but there is no mention of why it is important or what "safe" AI even looks like. Am I that out of the loop that what this concept entails is so obvious that it doesn't require an explanation, or am I overlooking something here?
    • hiAndrewQuinn 3 days ago
      The idea that most AIs are unsafe to non-AI interests is foundational to the field and typically called instrumental convergence [1]. You can also look up the term "paperclip maximizer" to find some concrete examples of what people fear.

      [1]: https://en.m.wikipedia.org/wiki/Instrumental_convergence

      It's unfortunately hard to describe what a safe AI would look like, although many have tried. Similar to mathematics, knowing what the correct equation looks like is a huge advantage in building the proof needed to arrive at it, so this has never bothered me much.

      You can see echoes of instrumental convergence in your everyday life if you look hard enough. Most of us have wildly varying goals, but for most of those goals, money is a useful way to achieve them -- at least up to a point. That's convergence. An AI would probably get a lot farther by making a lot of money too, no matter what the goal is.

      Where this metaphor breaks down is we human beings often arrive at a natural satiety point with chasing our goals: We can't just surf all day, we eventually want to sleep or eat or go paddle boarding instead. A surfing AI would have no such limiters, and might do such catastrophic things as use its vast wealth to redirect the world's energy supplies to create the biggest Kahuna waves possible to max out its arbitrarily assigned SurfScore.
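
      A toy sketch of that satiety point (my own illustration, not from the book or the comment; the numbers and the SurfScore-style value functions are made up):

          # Toy illustration: a satiable goal stops grabbing resources past a plateau,
          # while an open-ended maximizer never does.
          def satiable_value(waves_surfed):
              # Human-like satiety: extra waves past 10 add nothing.
              return min(waves_surfed, 10)

          def surfscore_value(waves_surfed):
              # Hypothetical unbounded "SurfScore": every extra wave is still an improvement.
              return waves_surfed

          def best_allocation(value_fn, total_resources=100_000, diversion_cost=0.001):
              # How many units of resource the agent diverts to its goal, given a small
              # cost per unit taken away from everything else.
              return max(range(total_resources + 1),
                         key=lambda r: value_fn(r) - diversion_cost * r)

          print(best_allocation(satiable_value))   # 10: stops at the satiety point
          print(best_allocation(surfscore_value))  # 100000: grabs every available resource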

      • robertlagrant 3 days ago
        I couldn't find concrete examples that weren't actually of AI with godlike powers.
        • ben_w 3 days ago
          What do you mean by "godlike powers"?

          We flatten mountains to get at the rocks under them. We fly far above the clouds to reach our holiday destinations.

          We have in our pockets devices made from metal purified out of sand, lightly poisoned, covered in arcane glyphs so small they can never be seen by our eyes and so numerous that you would die of old age before being able to count them all, which are used to signal across the world in the blink of an eye, used to search through libraries grander than any from the time when Zeus was worshiped, and used to invent new images and words from prompts alone.

          We power our homes with condensed sunlight and wind, and with the primordial energies bound into rocks and tides; and we have put new πλανῆται (planētai, "wandering stars") in the heavens to do the job of the god Mercurius better than he ever could in any myth or legend. And those homes themselves are made from νέος λίθος ("neolithic", new rock).

          We've seen the moon from the far side, both in person and by גּוֹלֶם (golem, for what else are our mechanised servants?); and likewise to the bottom of the ocean, deep enough that スサノオ (Susanoo, god of sea and storms) could not cast harm our way; we have passed the need for prayer to Τηθύς (Tethys) for fresh water as we can purify the oceans; and Ἄρης (Ares) would tremble before us as we have made individual weapons powered by the same process that gives the sun its light and warmth that can devastate areas larger than some of the entire kingdoms of old.

          By the same means do our homes, our pockets, have within them small works of artifice that act as húsvættir (house spirits) that bring us light and music whenever we simply ask for them, and stop when we ask them to stop.

          We've cured (some forms of) blindness, deafness, lameness; we have cured leprosy and the plague; we can take someone's heart out and put a new one in without them dying; we have scanners which look inside the body without the need to cut, and some which can even give a rough idea of what images the subjects are imagining.

          "We are close to gods, and on the far side", as Banks put it.

    • krisoft 3 days ago
      The article itself is talking about a specific book. "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. That book is the seminal work on the subject of AI safety. If you honestly want answers to your questions I recommend reading it. It is written in a very accessible way.

      If reading a whole book is out of the question, then I'm sure you can find many abridged versions of it. In fact the article itself provides some pointers at the very end.

      > Am I that out of the loop

      Maybe? Kinda? That's the point of the article. It has been 10 years since the publication of the book. During that time the topic went from the weird interest of some Oxford philosopher to a mainstream topic discussed widely. 10 years is both a long time and a blink of an eye, depending on your frame of reference. But it is never too late to get in the loop if you want to.

      At the same time, I don't think it is fair to expect every article to rehash the basic concepts of the field it is working in.

      • janalsncm 3 days ago
        > It is written in a very accessible way

        Many have expressed my sentiments far better than I can, but Superintelligence is quite frankly written in a very tedious way. He says in around 300 pages what should have been an essay.

        I also found some of his arguments laughably bad. He mentions that AI might create a world of a handful of trillionaires, but doesn’t seem to see this extreme inequality as an issue or existential threat in and of itself.

        • satvikpendem 3 days ago
          He did write an essay [0]. Because it was very short and couldn't go deep at that length, he wrote a longer book exploring the concepts.

          [0] https://nickbostrom.com/views/superintelligence.pdf

        • ben_w 3 days ago
          > He mentions that AI might create a world of a handful of trillionaires, but doesn’t seem to see this extreme inequality as an issue or existential threat in and of itself.

          I've not read the book, so I don't know the full scope of that statement.

          In isolation, that's not necessarily a big issue or an existential threat; it depends on the details.

          For example, a handful of trillionaires where everyone else is "merely" as rich as Elon Musk isn't a major inequality, it's one where everyone's mid-life crisis looks e.g. like whichever sci-fi spaceship or fantasy castle they remember fondly from childhood.

          • est31 3 days ago
            Haven't read the book either, but a handful of trillionaires could be that the "upper 10 000" oligarchs of the USA get to be those trillionaires, and everyone else starves to death or simply can't afford to have children and a few decades later dies from old age.

            Right now, in order to grow and thrive, economies need educated people to run them, and in order to get people educated you need to give them some level of wealth so their lower-level needs are met.

            It's a win-win situation. Poor and starving people take up arms more quickly and destabilize economies. Educated people are the engineers, doctors and nurses. But once human labour isn't needed any more, there is no need for those people any more either.

            So AI allows you to deal with poor people much better now than in the past: an AI army helps to prevent revolutions and AI engineers, doctors, mechanics, etc, eliminate the need for educated people.

            There is the economic effect that consumption drives economic growth, a real effect that powered the industrial revolution and gave wealth to some of today's rich people. Of course, a landlord has an incentive for people to live in his house; that's what gives it value. The same goes for a farmer: he wants people to eat his food.

            But there is already a certain chunk of the economy which only caters to the super rich, say the yacht construction industry. If this chunk keeps on growing while the 99% get less and less purchasing power, and the rich eventually transition their assets into that industry, they have less and less incentive to keep the bottom 99% fed or around at all.

            I'm not saying this is going to happen, but it's entirely possible to happen. It's also possible that every individual human will be incredibly wealthy compared to today (in many ways, the millions in the middle classes in the west today live better than kings a thousand years ago).

            In the end, it will depend on human decisions which kinds of post-AI societies we will be building.

            • ben_w 3 days ago
              Indeed, I was only giving the "it can be fine" example to illustrate an alternative to "it must be bad".

              As it happens, I am rather concerned about how we get from here to there. In the middle there's likely a point where we have some AI that's human-level in ability and needs 1 kW to do in 1 hour what a human would do in 1 hour. At current electricity prices, humans would have to go down to the UN abject poverty threshold to be cost-competitive with that, while at the same time 1 kW per worker is four times the current global per-capita electricity supply, which would drive up prices until some balance was reached.

              But that balance point is in the form of electricity being much more expensive, and a lot of people no longer being able to afford to use it at all.
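
              A rough back-of-the-envelope version of that arithmetic (the price and generation figures below are my own assumptions for illustration; the exact multiple depends on which numbers you plug in):

                  electricity_price = 0.15          # assumed USD per kWh
                  ai_power_kw = 1.0                 # hypothetical: 1 kW for one human-hour of work
                  cost_per_hour = ai_power_kw * electricity_price
                  print(f"AI cost per human-equivalent hour: ${cost_per_hour:.2f}")
                  print(f"Per 8-hour workday: ${cost_per_hour * 8:.2f} vs ~$2.15/day extreme-poverty line")

                  world_population = 8.1e9
                  annual_generation_twh = 29_000    # assumed recent global electricity output
                  per_capita_kw = annual_generation_twh * 1e9 / (365 * 24) / world_population
                  print(f"Current per-capita electricity supply: ~{per_capita_kw:.2f} kW, "
                        f"so 1 kW per worker is ~{1 / per_capita_kw:.1f}x today's supply")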

              It's the traditional (not current) left vs. right split: rising tides lifting all boats, versus boats being the status symbol that proves you're an elite while the rest are left to drown. We may get well-off people who task their robots and AI with making more so the poor can be well-off too, or we may get exactly what you describe.

        • krisoft 3 days ago
          > frankly written in a very tedious way.

          Ok? I don't see the contradiction. When I say "It is written in a very accessible way" I mean to say "you will understand it". Even if you don't have years of philosophy education. Which is sadly not a given in this day and age. "frankly written in a very tedious way" seems to be talking about how much fun you will have while reading it. That is an orthogonal concern.

          > He says in around 300 pages what should have been an essay.

          Looking forward to your essay.

          > I also found some of his arguments laughably bad.

          Didn't say that I agree with everything written in it. But if you want to understand what the heck people mean by AI safety, and why they think it is important then it has the answers.

          > He mentions that AI might create a world of a handful of trillionaires, but doesn’t seem to see this extreme inequality as an issue or existential threat in and of itself.

          So wait. Is your problem that the argument is bad, or that it doesn't cover everything? I'm sure your essay will do a better job.

    • sanxiyn 3 days ago
      AI is safe if it does not cause extinction of humanity. Then it is self-evident why it is important.

      The article does link to "Statement on AI Risk", at https://www.safe.ai/work/statement-on-ai-risk

      It is very short, so here is the full quote.

      > Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

      • krisoft 3 days ago
        > AI is safe if it does not cause extinction of humanity.

        I don't think that is true. "AI is not safe if it causes the extinction of humanity" is more likely to be true. Not causing extinction is a necessary requirement for safety, but not a sufficient one.

        Just think of a counter example: An AI system which wages war on humanity, wins and then keeps a stable breeding population of humans in abject suffering in a zoo like exhibit. This hypothetical AI did not cause extinction of humanity. Would you consider it safe? I would not.

        • hiAndrewQuinn 3 days ago
          That's called "s-risk" (suffering risk). Some people in the space do indeed take it much more seriously than "x-risk" (extinction risk).

          If you are deeply morally concerned about this, and consider it likely, then you might want to consider getting to work on building an AI which merely causes extinction, ASAP, before we reinvent that one sci-fi novel.

          Personally, I see no particular reason to think this is a very likely outcome. The AI probably doesn't hate us - we're just made out of joules it can use better elsewhere. x-risk seems much more justified to me as a concern.

          • krisoft 3 days ago
            > The AI probably doesn't hate us

            The AI doesn't have to hate us for this outcome. In fact it might be done to cocoon and "protect" us; it just has a different idea from us of what needs to be protected and how. Or alternatively it can serve (perfectly or in a faulty way) the aims of its masters: a few lords reigning over suffering masses.

            > If you are deeply morally concerned about this, and consider it likely, then you might want to consider getting to work on building an AI which merely causes extinction, ASAP, before we reinvent that one sci-fi novel.

            What a weird response. Like one can't be concerned about two (or more!) things simultaneously? Talk about "cutting off one's nose to spite one's face".

            • ben_w 3 days ago
              The quote I've heard is: 'The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else': https://www.amazon.de/-/en/Tom-Chivers/dp/1474608787 (another book I've not read).

              > Or alternatively it can serve (perfectly or in a faulty way) the aims of its masters.

              Our state of knowledge is so bad that being able to do that would be an improvement.

            • the8472 3 days ago
              The argument is that "humans live, but suffer" is a smaller outcome domain and thus less likely to be hit than an outcome incompatible with human life. To reach that point, where the thing cares about humans at all, you've already succeeded at 99% of the alignment task and only failed at the last 1% of making it care in the way we'd prefer. If it were obvious that rough alignment is easy but the last few bits of precision or accuracy are hard, that'd be different.

              I fail to see a broad set of paths that end up with a totally unaligned AGI and yet with humans alive but in a miserable state.

              Of course we can always imagine some "movie plot" scenarios that happen to get some low-probability outcome by mere chance. But that's focusing one's worry on winning an anti-lottery rather than allocating resources to the more common failure modes.

              • krisoft 3 days ago
                > already succeeded with 99% of the alignment task and only failed at the last 1% of making it care in a way we'd prefer.

                Who is we? Humanity does not think with one unified head. I'm talking about a scenario where someone makes the AI which serves their goals, but in doing so harms others.

                AGI won't just happen on its own. Someone builds it. That someone has some goals in mind (they want to be rich, they want to protect themselves from their enemies, whatever). They will fiddle with it until they think the AGI shares those goals. If they think they didn't manage to do it, they will strangle the AGI in its cradle and retry. This can go terribly wrong and kill us all (x-risk). Or it can succeed, in the sense that the people making the AGI aligned it with their goals. The jump you are making is to assume that if the people making the AGI aligned it with their goals, that AGI will also be aligned with all of humanity's goals. I don't see why that would be the case.

                You are saying that doing one is 99% of the work and the rest is 1%. Why do you think so?

                > Of course we can always imagine some "movie plot" scenarios that happen to get some low-probability outcome by mere chance.

                Definitions are not based on probabilities. sanxiyn wrote "AI is safe if it does not cause extinction of humanity." To show my disagreement I described a scenario where the condition is true (that is, the AI does not cause the extinction of humanity), but which I would not describe as "safe AI". I do not have to show that this scenario is likely to show the issue with the statement, merely that it is possible.

                > focusing one's worry on winning an anti-lottery rather than allocating resources to the more common failure modes.

                You state that one is more common without arguing why. Stuff which "plainly doesn't work and is harmful for everybody" is discontinued. Stuff which "kinda works and makes the owners/creators happy but has side effects on others" is the norm, not the exception.

                Just think of the currently existing superintelligences: corporations. They make their owners fabulously rich and well protected, while they corrupt and endanger the society around them in various ways. Just look at all the wealth oil companies accumulated for a few while unintentionally geo-engineering the planet and systematically suppressing knowledge about climate change. That's not a movie plot. That's the reality you live in. Why do you think AGI will be different?

                • ben_w 3 days ago
                  > You are saying that doing one is 99% of the work and the rest is 1%. Why do you think so?

                  (Different person)

                  I think it's much starker than that, more even than 99.99% to 0.01%; the reason is the curse of high dimensionality.

                  If you imagine a circle, there's a lot of ways to point an arrow that's more than 1.8° away from the x-axis.

                  If you imagine a sphere, there's even more ways to point an arrow that's more than 1.8° away from the x-axis.

                  It gets worse the more dimensions you have, and there are a lot more than two axes of human values; even at a very basic level I can go "oxygen, food, light, heat", and that's living at the level of a battery-farmed chicken.

                  Right now, we don't really know how to specify goals for a super-human optimiser well enough to even be sure we'd get all four of those things.

                  Some future Stalin or future Jim Jones might try to make an AGI, "strangle the AGI in its cradle and retry" because they notice it's got one or more of those four wrong, and then finally release an AI that just doesn't care at all about the level of Bis(trifluoromethyl)peroxide in the air. And this future villain wouldn't even know that this is bad, for the same reason I had to get that name from the Wikipedia "List of highly toxic gases": it is not common knowledge. https://en.wikipedia.org/wiki/List_of_highly_toxic_gases
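
                  A small sketch of that geometric point, assuming scipy is available: for a uniformly random direction on the unit sphere in R^n, (x1 + 1)/2 follows a Beta((n-1)/2, (n-1)/2) distribution, so the fraction of directions within 1.8° of a fixed axis can be computed directly, and it collapses as the dimension grows.

                      from math import cos, radians
                      from scipy.stats import beta

                      theta = radians(1.8)
                      for n in (2, 3, 10, 100, 1000):
                          a = (n - 1) / 2
                          # P(angle between a uniform random direction and the fixed axis < 1.8 degrees)
                          within = beta.sf((cos(theta) + 1) / 2, a, a)
                          print(f"n = {n:4d}: fraction within 1.8 deg of the axis ~ {within:.3g}")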

      • ekianjo 3 days ago
        Or it could be an elaborate ruse to keep power very concentrated.
    • jl6 3 days ago
      It’s not a technical term. The dictionary definition of safety is what they mean. They don’t want to create an AI that causes dangerous outcomes.

      Whether this concept is actionable is another matter.

    • exe34 3 days ago
      AI is unsafe if it doesn't answer to the board of directors or parliament. Also paperclip maximizers, as opposed to optimizing for GDP.
      • Rhapso 3 days ago
        Yeah, the constant dissonance with AI safety is that every single AI safety problem is already a problem with large corporations not having incentives aligned with the good of people in general. Profit is just another paperclip.
  • Borrible 3 days ago
    Perfect is the enemy of good, so why vote for a lesser good?

    Humans are so existentially biased and self-centred!

    And they are always forgetting that they wouldn't even be there if others hadn't made room for them. From the Great Oxygenation Event to the K–Pg extinction event.

    Be generous!

    "Man is something that shall be overcome. What have you done to overcome him?"

    Friedrich Nietzsche

    • shrimp_emoji 3 days ago
      It's going to happen biologically before it happens in silicon, anyway. And the biological venue could very well be humans (genetically modified). So I quite literally agree. :)
  • CuriouslyC 3 days ago
    I love how people think that because we are getting very good at efficiently encoding human intelligence, we must be very close to creating superintelligence, and that our progress on creating superintelligence will somehow resemble the rate of progress on the simpler problem of encoding existing intelligence.
    • Dzugaru 3 days ago
      If we can create a human-level intelligence in a computer, it would already be a superintelligence. No human on Earth is capable of reading and remembering an Internet-scale corpus of data, or of doing math at GHz speeds, etc.
    • in3d 3 days ago
      If we can match our existing intelligence (but it’s a jagged border of capabilities), our progress in creating superintelligence won’t matter because we won’t be the ones making it.
  • n4r9 3 days ago
    The author claims that we are "between third and fifth point" in the following list:

    >i Safety alarmists are proved wrong

    >ii Clear relationship between AI intelligence and safety/reliability

    >iii Large and growing industries with vested interests in robotics and machine intelligence.

    >iv A promising new technique in artificial intelligence, which is tremendously exciting to those who have participated in or followed the research.

    >v The enactment of some safety rituals, whatever helps demonstrate that the participants are ethical and responsible (but nothing that significantly impedes the forward charge).

    >vi A careful evaluation of seed AI in a sandbox environment, showing that it is behaving cooperatively and showing good judgment.

    Have we really gone past the first point? After decades of R&D, driverless cars are still not as safe as humans in all conditions. We have yet to see the impact of generative AI on the intellectual development of software engineers, or to what extent it will exacerbate the "enshittification" of software. There's compelling evidence that nation states are trusting AI to identify "likely" terrorists who are then indiscriminately bombed.

    • zamadatix 3 days ago
      The abridged summary here elides that point 1 is about a history of claims of intolerable harm being proved wrong, not that every claim has already been proved wrong. In this frame, too many people kept raising alarms equivalent to "cars with driving assistance will cause a bloodbath", which then don't come to pass; it's not that there are no further safety-alarmist claims left about what could be coming next as the technology changes.

      Keeping it focused on AI: every release of a text, image, or voice generator has come with PR, delays, news articles, and discussion about how it's dangerous and needs to be held back. Three months after release, politics hasn't collapsed from a 10-fold increase in fake news, online discussion boards are still as (un)usable as they were before, art is still a thing people do, and so on. That doesn't mean there are no valid safety concerns, just that the alarmist track record isn't particularly compelling to most, while the value of the tools continues to grow.

    • rolandog 3 days ago
      > Have we really gone past the first point?

      I think it will always depend on who you ask, and whether they're arguing in bad faith:

      "Sure, the sentry bot can mistakenly shoot and kill its own owner and/or family, but only if they're carrying a stapler. Like, who even uses a stapler in this day and age?"

  • navane 3 days ago
    AI safety is fear mongering to shut up the Luddites

    The AI we have now (Stable Diffusion, ChatGPT) consists of technical advancements that allow inferior but cheaper production of artistic content. It is not a step closer to death-by-paperclips; it is merely another step in big capital automating production, hoarding more wealth in a smaller group.

    The closest thing to a real AI safety problem is the unsupervised execution of laws by ML.

  • bluetomcat 3 days ago
    This is one of the most delusional and speculative books I've ever read. The author comes up with elaborate analytical models resting on slippery, loosely defined terms: clever with the algebra while totally disconnected from any technological grounding. It's the kind of stuff VP execs and Bill Gates like to read, and one of the reasons for the current bubble.
    • hnbad 3 days ago
      The problem starts with talking about "AGI" and LLMs/GenAI in the same breath. LLMs are not and cannot be AGI. They are impressive, but they are glorified autocomplete. When ChatGPT lets you "correct" it, it doesn't backtrack; it takes your response into consideration along with what it said before and generates what its model suggests could come next in the conversation. It's more similar to a Markov chain than to an expert system.
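
      A minimal sketch of that generation loop (sample_next_token below is a hypothetical stand-in for any autoregressive model, not ChatGPT's actual API):

          def chat_reply(history, sample_next_token, max_tokens=200):
              """history: list of (role, text) turns; sample_next_token is a hypothetical
              stand-in that returns the next token given the text so far."""
              context = "".join(f"{role}: {text}\n" for role, text in history) + "assistant:"
              reply = ""
              for _ in range(max_tokens):
                  token = sample_next_token(context + reply)  # conditioned on everything said so far
                  if token == "<eos>":
                      break
                  reply += token
              return reply

          # A user's "correction" is just another turn appended to the same context; nothing
          # in the earlier output is revised, the model simply continues the sequence.
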
    • moffkalast 3 days ago
      I've re-skimmed it recently as well, and found it to be extremely zeerusted and needlessly alarmist in retrospect. A lot of it is written from the perspective of "a handful of scientists build a brain in a bunker, à la the Manhattan Project", which is so far from our actual reality that 90% of the concerns don't even apply.

      Exponential runaway turned out not to be a thing at all: progress is slow (on the order of years), competitors are plentiful, alignment is easy, and everything is more or less done in the open, with papers being published every day. We're basically living out the absolute best possible option of all the ones outlined in the book.

      • hnbad 3 days ago
        Looks like the real-world risks of AI are, predictably, AI being used to avoid responsibility/liability/regulation or for plain copyright laundering (which, likewise predictably, is only a temporary loophole until the laws catch up), and companies like Google reversing all the progress they made in reducing their emissions by doubling down on resource-intensive AI.

        "Avoiding regulation" as a Service of course has a huge market potential for as long as it works, just like it did for crypto and the gig economy. But it is by definition a bubble because it will deflate as soon as the regulations are fixed. GenAI might have an eventual use but it will in all likelihood look nothing like what it is used for at the moment.

        And yeah, you could complain that what I said mostly applies to GenAI and LLMs, but that's where the current hype is. Nobody talks about expert systems because they've been around for decades and simply work, while being very limited and "unsexy", since they don't claim to give us AGI.

  • hiAndrewQuinn 3 days ago
    [flagged]
    • Cyphase 3 days ago
      For anyone else who had the same initial thought, this isn't talking about bounties as in hits; it's basically talking about giving rewards for exposing violations of laws, funded by non-government parties so those parties can have higher confidence that violations aren't being ignored.

      In other words, a privately-funded Crime Stoppers for the crimes of AI researchers.

    • QuesnayJr 3 days ago
      I propose bounties on people who propose putting bounties on AI researchers.