AI (2014)

(blog.samaltman.com)

56 points | by bjornroberg 2 hours ago

8 comments

  • Jensson 1 hour ago
    > And maybe we don't want to build machines that are conscious in this sense. The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking. If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.

    This is where LLMs are currently going. They're not really AGI, since they can't think like humans, but they can do a lot of things, and humans can train them on novel things.

    Then human work shifts to figuring out new things while the AI handles all the old ones, which seems much more fun than most white-collar work today.

    • oytis 1 hour ago
      That's an expression of class thinking, IMO. People think of themselves as thinkers and creators, while those who do the labour they rely on, without getting too much into the details, are merely doers who can ideally be replaced. But it's really thinking and creativity all the way down if you try to learn to do things well.
      • gjadi 1 hour ago
        “The doers are the major thinkers. The people that really create the things that change this industry are both the thinker and doer in one person.”

        Steve Jobs

        Now, what counts as a doer in the age of LLMs is another question.

        • jack_pp 1 hour ago
          Well, was Jobs a "doer"? Did he get his hands dirty with the code? Or did he use his employees the way we would like to use LLMs?
          • aleph_minus_one 1 hour ago
            > Well, was Jobs a "doer"?

            Jobs' talent was that he was an incredible salesman.

            • philipallstar 1 hour ago
              That wasn't too hard for him, given that he was also an incredibly talented market-opportunity spotter and product leader.
            • ugtr3 58 minutes ago
              Why do people write such nonsense?

              Jobs envisioned the iPad and iPhone. Did he do the physical work? No. But he created direction.

              Everyone around him at that time has commented on this. Are you going to claim they’re all lying?

      • cheschire 1 hour ago
        You must have had limited exposure to uncreative types. You might be shocked to find there are people who can do nothing more than follow checklists.

        Sometimes it's a lack of capacity for novel thinking. Sometimes it's fear caused by past trauma. Or it can be age, or an inability to overcome habits. The list goes on, but the point is that I've had to work with or supervise employees (even in IT!) who didn't have a creative bone in their body. It wasn't a lack of motivation; it was usually something on the list above.

        These people absolutely deserved to feel useful, and they are the people I'm most concerned for in this new post-LLM world. The creative types will most likely be fine, but the very fact that we have a word for creativity is an acknowledgement that it can be absent.

        • mmustapic 1 hour ago
          You are only thinking about people and creativity in the workplace. Creativity can be applied anywhere: cooking, taking a new route on your way somewhere, reading some random paragraphs in a book that spawn new thoughts, inventing a new game with a child, optimizing the way you paint the walls of your house, choosing the plants in your garden (and how you'll water them), doing a doodle, trying or buying a new outfit, typing this paragraph in response to your message (kinda LLM-y, maybe).
          • jack_pp 43 minutes ago
            Sure and all the same, most people just don't have it.
      • Jensson 1 hour ago
        > But it's really thinking and creativity all the way down if you try to learn to do things well

        Yes, everyone starts out creative.

        But we all can tell the difference between a worker that is still creative and learning and a worker that gave up creativity and is just doing his job. The first will still be useful in this AI age; the second will be replaced by AI learning what he already knows.

        • aleph_minus_one 1 hour ago
          > But we all can tell the difference between a worker that is still creative and learning and a worker that gave up creativity and is just doing his job. The first will still be useful in this AI age; the second will be replaced by AI learning what he already knows.

          Rather: the workers who are (still) creative are typically a huge annoyance to their bosses.

          • Jensson 1 hour ago
            Yeah, and that is how people stop being creative: they get punished for it while their uncreative peers get praised. It happens to most people in school or early in their career; few get to keep their creativity.

            In a new world where creativity is valued higher, more people could probably keep their creativity.

            • aleph_minus_one 59 minutes ago
              > In a new world where creativity is valued higher

              This is in my opinion a very dubious assumption. :-(

        • jack_pp 41 minutes ago
          > Yes, everyone starts out creative.

          Are there studies done on this or is this just wishful thinking?

          • Jensson 37 minutes ago
            I have never met an uncreative kid, and studies show kids tend to be more open and creative. But I have to admit I haven't met and interacted with that many average kids, so there may be some who aren't creative, but a majority are.
    • lebek 1 hour ago
      > Then human work shifts to figuring out new things while the AI handles all the old ones, which seems much more fun than most white-collar work today.

      But it's not fun to be figuring out new things all the time. Some amount of routine work is necessary to 1) exercise mastery (it feels good), and 2) recover energy. This is why a lot of people find agentic coding exhausting and less fun: you're basically always having to be creative (what's the next feature?) or solve the hardest 5% of issues the LLM can't handle.

      • embedding-shape 1 hour ago
        > you're basically always having to be creative (what's the next feature?) or solve the hardest 5% of issues the LLM can't handle.

        Maybe I'm wired differently, but this is fun to me, and "exercising mastery" by doing routine work is almost never fun. Things stop being fun and feeling good once I've "mastered" them, and I can't say I've ever "recovered energy" by doing routine work; it seems to suck energy out of me faster than anything. To recover, I tend to rest and do anything but work. But again, maybe it's just weird wiring.

        • __s 1 hour ago
          For me a bit of grunt work to start the day is like morning stretches
          • embedding-shape 1 hour ago
            Forcing myself to do something like that would be a great way to ruin my day ;)
    • mofeien 1 hour ago
      > > The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking.

      > This is where LLMs are currently going.

      This is not where LLMs are currently going. They are trained and benchmarked explicitly in all the areas where humans produce economically and cognitively valuable work: STEM fields, computer use, robotics, etc.

      Systems are already emerging where AI agents autonomously orchestrate subagents, which in turn all work towards a goal autonomously and only communicate with you from time to time to give status updates.

      Thinking that you, as a slow human, will be needed for much longer to fill some crucial role in this AI system that it cannot fill by itself, or to bring some crucial skill of creativity or thinking to the table that it cannot generate itself, is just wishful thinking. And to me personally, telling an AI to "do cool thing X" without having made any contribution beyond the initial prompt also feels very depressing, and seems like much less fun than actually feeling valued in what I do. I'm sorry for sounding harsh.

      • ugtr3 54 minutes ago
        lol what a load of gibberish.
    • dannersy 1 hour ago
      I see a lot less thinking as a result of using LLMs as they are today and I don't see the providers building tools to promote a better way to use them. They are still way too sycophantic.
    • plaidfuji 52 minutes ago
      Came here to quote the same sentence, but say the exact opposite - it seems to me that today’s LLMs are progressing far faster on the “thinking” front than the “doing”.

      I suppose it depends on your definition of “doing” - if it’s “writing code”, then sure. But there’s a whole world of actual, physical “doing” that AI is nowhere close to matching humans at, and it’s much easier for me to envision a world where AI replaces the management / “thinking” layer of society than the physical labor. Which is scary, because it’s the opposite of his (and I would assume most people’s) ideal.

    • dgxyz 1 hour ago
      I don't think that's where it's going.

      LLMs are shit at doing stuff in the eyes of anyone who is a domain expert in the thing they are supposed to be doing. They are trained on a huge corpus of average stuff, so they produce average-to-crappy solutions quickly. The technology industry bubble is trained to accept that as good enough, which is why everyone is excited. Elsewhere it's a complete and utter joke.

      And on top of that, a huge chunk of doing either requires humans to physically do something or is better served by absolute determinism, and an LLM offers neither.

      None of it makes sense.

      Edit: actually, the technology industry moves the goalposts to match the claims. That is the dishonest bit. I've not seen any evidence of novel capability that isn't corrupted by some dishonest measurement approach.

    • croes 1 hour ago
      The problem is that most people's jobs depend on the doing-the-work part, not on the figuring-out-new-things part.

      So you just lose your job.

  • Alan_Writer 36 minutes ago
    "If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine."

    Even if AI can't (yet) reach that level of creativity, it performs well while trying, at least for now. Who knows what the near future holds? So far, the roadmap is clear.

    The AI push is causing major layoffs in the tech and crypto industries these days, and we have been receiving the message: "adapt or pay the consequences." Right now, even management positions are being replaced by software. It may sound harsh, but it's also part of human nature and evolution. We created these machines, and now we have to deal with them.

    On the other hand, and it may still be early days, we (regular human beings) barely know how the brain really works, while AI has demonstrated that it can work very well in some roles (mostly operational, of course) and is becoming indispensable. Even governments, like Abu Dhabi's, are pushing to run the emirate fully by AI.

    So yeah, even if we don't like it, AI is silently replacing humans. The best you can do is learn how to leverage it and not be left behind.

  • jryio 1 hour ago
    > The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking. If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.

    Man will do nothing and machine will do everything. That's a bleak world no one is preparing for.

    How is that universal basic income scheme coming along?

    • elevatortrim 1 hour ago
      That world is not necessarily bleak.

      We currently have two broad mechanisms for assigning people's value.

      *Employees:*

      Easy to replace = Low Salary = Gets Few Resources

      Hard to replace = High Salary = Gets Many Resources

      *Entrepreneurs:*

      Output consumed low = Low Pay = Gets Few Resources

      Output consumed high = High Pay = Gets Many Resources

      (Resource consumption ignored)

      In a world where machines do everything, aspects of these change:

      *Employees:*

      Easy to replace = Gets whatever resources

      (no-one hard to replace)

      It is up to us to define whether 'whatever' is bleak or not. If we decide that resources need to be shared fairly, it could be heaven, not hell.

      *Entrepreneurs:*

      Resource consumption: Whatever

      It is up to us how much resource consumption we allow. If we decide that resource consumption needs to be sustainable, it could be heaven, not hell.

      • Loughla 1 hour ago
        Lol this does not fill me with hope.

        If there is a person A who can become a squillionnaire by making sure that the employees of a company make as little as possible thanks to AI, that's what's going to happen. There is zero chance "we" will decide resources need to be shared fairly.

        If person A can amass more money and power, then resource consumption literally doesn't matter. There is no way "we" will be involved in that process at all.

        Call me cynical, but human history has proven over and over and over again that whatever short-sighted, selfish option enriches a very few is what will happen, until there is finally violence.

        I do not look forward to the AI wars that my children will be forced to fight in.

      • virgildotcodes 1 hour ago
        I don't see how this doesn't equally apply to the pre-AI economy. The results there have been quite stark, with the "entrepreneurs" ending up far better off than the "employees".
        • Jensson 1 hour ago
          > I don't see how this doesn't equally apply to the pre-AI economy. The results there have been quite stark, with the "entrepreneurs" ending up far better off than the "employees".

            This is wrong: in most cases the entrepreneur is worse off than the employees, since the entrepreneur spent all his savings on the project while the employees walk away with all the money they got from their salaries.

            And even when it is fully funded by external investors, most of the time the founder just gets to keep the salary, since the company fails and becomes worthless.

            The only time the entrepreneur is better off is when the company succeeds and becomes big, but that is rare; most of the time it is better to be an employee.

          • ugtr3 49 minutes ago
            It depends on risk preferences.

            Risk seekers should be entrepreneurs.

            Risk averse people, probably, should not.

    • climike 1 hour ago
      Resource allocation based on your hackernews upvotes? Thanks in advance folks ;)
    • Devasta 1 hour ago
      > How is that universal basic income scheme coming along?

      If the Epstein class won't allow everyone a reasonable standard of living when they relied on workers to produce, the chances of them allowing it when they don't are next to nil. They couldn't even bear the thought of people working from home, for no other reason than that the workers liked it, even though it cost them nothing.

  • nik736 1 hour ago
    > (I originally was going to say a computer that plays chess, but computers play chess with no intuition or instinct--they just search a gigantic solution space very quickly.)

    Isn't that how LLMs are trained right now? Trying to predict the next word within a "gigantic solution space". Interesting.

    • ben_w 1 hour ago
      In one sense, all intelligence is a search in a gigantic solution space.

      But the difference is:

      What Deep Blue did was (if the Wikipedia page is correct) Alpha-beta pruning[0], where some humans came up with the function for what "better" and "worse" board states look like.

      And what LLMs do (at least the end models) includes at least some steps where there's an AI trying to learn what human preferences are in the first place, in order to maximise the human evaluation scores. Some of those things are good, like "what's the right answer to the trolley problem?" and "which is the better poem?", but some are bad such as "what answer best flatters the ego of the user without any regard for truth?"

      The former is exactly like route-finding, in that you could treat travel time as your score of better-worse and the moves as if they're on a map rather than a chess board.

      The latter is like being dumped into a new video game with no UI and all NPCs interact with you only in a language you don't know such as North Sentinelese.

      [0] https://en.wikipedia.org/wiki/Alpha–beta_pruning
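      A minimal sketch of the alpha-beta idea described above (the toy tree and `evaluate` heuristic are invented for illustration; in Deep Blue the equivalent of `evaluate` was the human-authored board-scoring function):

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning over an abstract game tree."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)  # the human-authored "better/worse" heuristic
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff: the opponent will never allow this line
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, evaluate))
        beta = min(beta, best)
        if beta <= alpha:
            break  # alpha cutoff
    return best

# Toy two-ply tree; integer leaves carry their values directly.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
children = lambda n: tree.get(n, []) if isinstance(n, str) else []
evaluate = lambda n: n if isinstance(n, int) else 0

best = alphabeta("root", 2, float("-inf"), float("inf"), True, children, evaluate)
# best == 3: branch "a" guarantees 3, and branch "b" is cut off after the
# leaf 2 is seen (the leaf 9 is never evaluated) -- that pruning is the
# entire trick; the "intuition" lives in the evaluate function.
```

Note the route-finding analogy holds here too: swap board states for map positions and `evaluate` for travel time, and the same search applies.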

    • yobbo 1 hour ago
      > Isn't that how LLMs are trained right now

      It's neither how computer chess works nor how LLMs are trained.

      Computer chess uses various tricks to prune the search space of board states, where the search is guided by the "value" of each board state. Neural networks can be used (and probably were at the time) to approximate this value, but there can also be hand-coded algorithms with learned statistics, or even lookup tables for games smaller than chess.

      There's no search in LLM training.
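      To make the contrast concrete, here is a deliberately tiny, hypothetical sketch of what LLM training actually is: repeated gradient steps on a next-token cross-entropy loss. The "model" below is just one linear layer with made-up sizes; the point is that nothing in the loop searches over outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 10, 4                  # made-up toy sizes

# Toy "model": a single linear map from a context vector to vocab logits.
W = rng.normal(size=(dim, vocab)) * 0.1
x = rng.normal(size=dim)            # fixed toy context representation
target = 3                          # id of the "correct" next token

def sgd_step(W, x, target, lr=0.1):
    """One SGD step on next-token cross-entropy.

    No search anywhere: a forward pass, a loss, a gradient, an update.
    """
    logits = x @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                    # softmax over the vocabulary
    loss = -np.log(probs[target])           # cross-entropy on the true token
    grad_logits = probs.copy()
    grad_logits[target] -= 1.0              # d(loss)/d(logits) for softmax-CE
    return W - lr * np.outer(x, grad_logits), loss

losses = []
for _ in range(50):
    W, loss = sgd_step(W, x, target)
    losses.append(loss)
# losses[-1] is well below losses[0], which starts near log(10) ≈ 2.3
```

Search does show up elsewhere in the pipeline (e.g. sampling strategies at inference time), but the training objective itself is pure function fitting.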

    • Lionga 1 hour ago
      Which even shows Sam has no idea about AI: the best chess engine at that point in time, Komodo 8, was trained and developed primarily through the efforts of GM Larry Kaufman and Mark Lefler, focusing on refining the engine's evaluation function and search accuracy rather than relying on deep brute-force calculation.

      The reference to Pong makes even less sense.

  • drcongo 1 hour ago
    Wait, so his keyboard has got a shift key?!
    • alexyoung 1 hour ago
      Well he is called Sam _Alt_man, not Sam Shiftman
      • bjornroberg 1 hour ago
        Actually laughed out loud on this one. I don't know what that says about me.
    • throwaway132448 1 hour ago
      Whatever happened to that key is a key part of his origin story that I'm sure will be revealed in due course.
    • Jensson 1 hour ago
      This was written before you had to add errors to your text so people can tell you aren't an LLM.
  • DeathArrow 1 hour ago
    In a sane world AI revolution would be driven by the likes of Andrew Ng, Andrej Karpathy, Yann LeCun and not by a brigade of Sam Altmans.
    • kolinko 1 hour ago
      As someone who has spent most of my time with computer scientists, the last thing I'd want is for them to run the world.
      • ugtr3 1 hour ago
        Yup. Unhinged, to put it mildly.

        People who are liberal-artsy at the core but do computer science? Yes.

    • jstummbillig 43 minutes ago
      Is the novel idea behind this recurring critique that a CEO must be the chief scientist or that we uniquely hate Sam Altman over all other CEOs?
  • mpalmer 1 hour ago

        The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking
    
    Steve Yegge said on some podcast recently that AI is going to have to come up with a more visual medium for communicating, because people don't want to read several paragraphs. He shared this uncritically, seemingly without judgement or disappointment. Yegge himself is a former Googler and by all accounts was an impressive person at one point, now best known as the person who vibe-birthed the inanity that is GasTown.

    At work I'm seeing colleagues I once considered formidable completely turning off their brains and letting the bot drive, and wholly missing the mark on work quality. It's like a sickness, like COVID brain fog people don't even notice they have.

    I see humans getting worse at reading, worse at writing, and worse at programming by themselves. It makes me angry and sad.

    We are getting dumber, people, and I fully believe Altman and friends are lying when they say they want it otherwise.

    • ugtr3 1 hour ago
      Correct.

      LLMs are a virus of the mind. People think: so what? I get my output and move on.

      Yeah... no. You need that thinking capacity to protect yourself. Once that's gone en masse, what's left of the democratic system (not much) will completely collapse. Congrats on legally creating an environment that yields oligarchy.

      Altman and his cronies yearn for a swath of people who cannot think for themselves.

  • trilogic 1 hour ago
    Nailed it 12 years ago... damn it, so after all Sam is not just talk and money. I just got humbled. This makes me reconsider my whole POV on Sam Altman.
    • 9dev 1 hour ago
      Yeah, maybe don't. He's a smart guy for sure, but that really doesn't redeem him from the awful qualities he undoubtedly has—insatiable greed, a compulsion to lie and manipulate, a special flavour of god complex, no moral compass at all, and more.
      • embedding-shape 1 hour ago
        > insatiable greed, a compulsion to lie and manipulate, a special flavour of god complex, no moral compass at all, and more

        Besides these traits that every CEO/big-time investor seems to share, is there anything uniquely awful with Altman?

        • aleph_minus_one 1 hour ago
          > is there anything uniquely awful with Altman?

          His involvement in Worldcoin (now named "World"), i.e. biometric scanning of huge populations.

        • 9dev 1 hour ago
          Besides the fact that he's an especially awful specimen (the lying and manipulation alone made it to the news several times), I just don't think that a rather clear-sighted blog post from 12 years ago is a valid reason to change your views about Altman.
      • trilogic 1 hour ago
        I am the last person on earth who would ever write a positive thing about Altman, but I can't lie either; a fact is a fact, and they're available to everyone. Fair is fair.
    • imiric 1 hour ago
      There is some good insight here, but I wouldn't say that he "nailed it".

      We still don't have computer programs that are able to "decide" what "they" "want" to do. We have programs that can mimic this behavior, but the implementation is effectively the same as the chess and flight programs we've had for decades: searching a gigantic solution space very quickly. What's changed is the amount of data and compute we can throw at the problem.

      The emergent behavior we observe from these systems is the result of our human inability to comprehend the relationships and patterns in the vast amount of data we feed them. We assign anthropomorphic qualities like creativity, intelligence, reasoning, thinking, etc., to this behavior in an attempt to make the technology more approachable, and, of course, more marketable, which fuels further investments.

      What's very much uncertain is whether continuing to scale up will lead us to machines that can do all of the things Altman talks about. There's disagreement about this even between leading figures in the field, but being negative about it is not as profitable.