Teaching Claude Why

(anthropic.com)

219 points | by pretext 22 hours ago

26 comments

  • zozbot234 15 hours ago
    Note that this result actually turns out to generalize well beyond Claude itself: Anthropic has conducted very similar research on open-weight models, which they call Model Spec Midtraining https://arxiv.org/abs/2605.02087 (discussed at https://alignment.anthropic.com/2026/msm ), and they have released fine-tuned versions of open models trained for a variety of toy "values" (Llama 3.1 8B, Qwen 2.5 32B, Qwen 3 32B) in order to show how the elicitation of these values in any one training context shapes the model's response to tangentially related questions: https://github.com/chloeli-15/model_spec_midtraining https://huggingface.co/chloeli/collections Very exciting to see this continued interaction with the open-weights community, after the earlier NLA paper!
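
    To give a flavour of what that pipeline actually does, here is a minimal sketch of the kind of document-generation loop it is built around (this assumes the official anthropic Python SDK; the model name, spec excerpt, and document styles are placeholders I made up, and the real MSM pipeline is considerably more elaborate):

      import os
      import anthropic

      client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

      # Toy stand-ins for a spec excerpt and the kinds of documents to synthesize.
      SPEC_EXCERPT = "The assistant should explain its reasoning when it declines a request."
      DOC_STYLES = ["blog post", "textbook passage", "forum discussion"]

      def generate_document(style: str) -> str:
          # Ask the model for one synthetic document that discusses and teaches the excerpt.
          response = client.messages.create(
              model="claude-sonnet-4-20250514",  # placeholder model id
              max_tokens=1024,
              messages=[{
                  "role": "user",
                  "content": f"Write a short {style} that discusses and teaches this principle, "
                             f"without quoting it verbatim:\n\n{SPEC_EXCERPT}",
              }],
          )
          return response.content[0].text

      corpus = [generate_document(style) for style in DOC_STYLES]

    The Batch API mentioned in the repo's README simply submits a large set of such generation requests as one asynchronous batch job rather than one call at a time, which is why it's recommended there for high-volume runs.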
    • NitpickLawyer 10 hours ago
      Really interesting resource, thanks for sharing! It was not on my radar.

      > https://github.com/chloeli-15/model_spec_midtraining

      I'm a bit confused about this part:

      > MSM is a pipeline that takes a Model Spec or Constitution (a document describing how and why an assistant should behave) and generates a diverse corpus of synthetic documents that discuss and teach the content of the spec.

      > ANTHROPIC_API_KEY=sk-ant-...

      > # Optional but highly recommended — separate key for using the Anthropic Batch API for batch document generation (needed if USE_BATCH_API=true).

      > # This will significantly reduce generation time for high-volume generation.

      > ANTHROPIC_BATCH_API_KEY=sk-ant-...

      Isn't this specifically against Anthropic's ToS? I thought generating data to train other models was specifically disallowed. I get this is a research effort, but still. Say you use this pipeline for something internal; that would be against the ToS and you'd risk getting banned, no?

  • justonepost2 16 hours ago
    If you successfully build a highly capable “aligned” model (according to some class of definitions that Anthropic would use for the words “capable” and “aligned”) and it brings about a global dark age of poverty and inequality by completely eliminating the value of labor vs capital, can you still call it aligned?

    If the answer is “yes”, our definition of alignment kind of sucks.

    • ben_w 11 hours ago
      > If the answer is “yes”, our definition of alignment kind of sucks.

      Sure, but the original sense of this is rather more fundamental than "does this timeline suck?"

      Right now, it is still an open question "do we know how to reliably scale up AI to be generally more competent than we are at everything without literally killing everyone due to (1) some small bug when we created the loss function* it was trained on (outer alignment), or (2) if that loss function was, despite being correct in itself, approximated badly by the AI due to the training process (inner alignment)?"

      * https://en.wikipedia.org/wiki/Loss_function

      • justonepost2 1 hour ago
        This comment seems to commit the same fallacy I’m accusing anthropic of, which is equating alignment as a binary: the good ending, where humans are not extinct, and the bad ending, where they are. The argument, I think, is that an “aligned” AI that doesn’t kill everyone will necessarily lead to an abundant Culture-esque future, and smoothly manage the transition to boot. (Not to mention that 1+ employees of most labs have attended Daniel Faggella’s pro-extinctionist “Worthy Successor” symposia, but we can put this aside for now)

        My point is: 1) that this binary is fundamentally insufficient to prescribe good and equitable outcomes for people - if the aligned AI flags overpopulation as a problem and kills a few billion people to improve QoL for the rest, is that good? It doesn’t take much creativity to go from this to the AI simply choosing the mean over the median, and concentrating untold wealth while billions starve or live on subsistence outside their walls. Is that good?

        And 2) if you come up with a better definition, the parts of it that live inside the model weights cannot be disaggregated from the parts that live outside the model weights. From my perspective (and this article agrees) we have done a pretty excellent job of getting the model weights to work in a way that makes them follow instructions, and a pretty horrible job of suggesting or (gasp) implementing policy that actually creates a decent world in the presence of “aligned” AI.

        • ben_w 30 minutes ago
          What I'm saying is not that alignment is a binary; I'm saying it's pre-paradigmatic. For any moral code or long-term goals, we don't have a good, reliable, rigorous way to compare two loss functions against either those morals or independently against our long-term goals and say which loss function best represents our goals: the least bad thing we can do right now is to randomly select a range of inputs, hope their distribution is representative, and see what those inputs result in. We don't know how to pick a good distribution of inputs, though fortunately this problem also impacts capabilities, as it limits the generalisability of what the AI learns.

          The options aren't as binary as "die or The Culture"; the cause of death can be something that feels positive to live through, similar to fictional examples like the Stargate SG-1 episode where people live contentedly in a shrinking computer-controlled safe zone on an otherwise toxic planet: https://en.wikipedia.org/wiki/Revisions_(Stargate_SG-1)

          Conversely, for "aligned" AI the question obviously becomes "aligned with whom?": if famous historical villains such as Stalin or Genghis Khan had an AI aligned with them, this would suck for everyone else and in the latter case would freeze human development at a terrible level, but we can't even do that much yet.

          > My point is: 1) that this binary is fundamentally insufficient to prescribe good and equitable outcomes for people - if the aligned AI flags overpopulation as a problem and kills a few billion people to improve QoL for the rest, is that good? It doesn’t take much creativity to go from this to the AI simply choosing the mean over the median, and concentrating untold wealth while billions starve or live on subsistence outside their walls. Is that good?

          Your point *is* (part of) the alignment problem: we don't know what a good loss function is, nor how to confirm the AI is even implementing it if we did.

          We also don't know how to debug proposed loss functions to train for the right thing (whatever that is), nor how to debug trained weights (against the loss function).

          > And 2) if you come up with a better definition, the parts of it that live inside the model weights cannot be disaggregated from the parts that live outside the model weights. From my perspective (and this article agrees) we have done a pretty excellent job of getting the model weights to work in a way that makes them follow instructions, and a pretty horrible job of suggesting or (gasp) implementing policy that actually creates a decent world in the presence of “aligned” AI.

          I really don't understand what you're getting at with this, sorry.

    • chriskanan 15 hours ago
      Jobs are an invention of humanity. About 50% of people dislike their job. People spend much of their lives working. Poverty and inequality are a choice made by society if society chooses poorly.
      • llbbdd 14 hours ago
        They're only an invention if you don't count "seeking sustenance to live" as a job just because there's no monthly direct deposit involved.
        • OJFord 1 hour ago
          Is that true? In communities or tribes of antiquity I assume there was some trading fruits of different labours before coinage. Still an 'invention' beyond baser individual survivalism.
        • ben_w 11 hours ago
          Indeed.

          On the plus side, if there really is no value to labour, then farm work must have been fully automated along with all the other roles.

          On the down side, rich elites have historically had a very hard time truly empathising with normal people and understanding their needs even when they care to attempt it, so it is very possible that a lot of people will starve in such a scenario despite the potential abundance of food.

          • skeledrew 10 hours ago
            It's either: 1) the rich voluntarily share the means of production so everyone becomes equal, 2) the poor stage successful revolutions so they gain access to the means of production and everyone becomes equal, 3) the poor starve or are otherwise eliminated, and the survivors will be equal.

            All roads lead to equality when the value of labour becomes 0 due to 100% automation.

            • ben_w 10 hours ago
              There's plenty of outcomes besides those three.

              Over history, lots of underclasses have been stuck that way for multiple generations, even without the assistance of a robot workforce that can replace them economically.

              Some future rich class so empowered would be quite capable of treating the poor like most today treat pets. Fed and housed, but mostly neutered and the rest going through multiple generations of selective inbreeding for traits the owners deem interesting.

              • skeledrew 9 hours ago
                Non-human pets don't have the capacity to rebel though; make humans into pets and there will again be the constant danger of rebellions as with slavery in the past. Without the economic incentive to offset.
                • ben_w 9 hours ago
                  I disagree on both counts.

                  On the first, non-human pets rebelling is seen every time an abused animal bites their owner.

                  On the second, the hypothetical required by the scenario is that AI makes all human labour redundant: that includes all security forces, but it also means the AI moving around the security bots and observing through sensors is at least as competent as every human political campaign strategist, every human propagandist, every human general, every human negotiator, and every human surveillance worker.

                  This is because if some AI isn't all those things and more, humans can still get employed to work those jobs.

                  • simonh 58 minutes ago
                    Right, such a society would have no need of human capitalists, government workers, experts, etc.

                    The question is, to what extent would humans still set goals and priorities, and how.

                    • ben_w 17 minutes ago
                      > The question is, to what extent would humans still set goals and priorities, and how.

                      From what I hear about the US and UK governments, even the elected representatives of these governments don't really set goals and priorities, so the answer is surely "humans don't".

            • parineum 8 minutes ago
              > 2) the poor stage successful revolutions so they gain access to the means of production and everyone becomes equal

              Or a handful of the poor become the new rich, which is usually what happens in that scenario.

            • theopsimist 10 hours ago
              If truly 100% automation (including infantry/police), the most likely scenario is not any of the above; most people will be kept on some kind of minimum sustenance, enough to keep them from rebelling (“UBI”), and those who disagree will either be co-opted into the elite or eliminated.
              • skeledrew 9 hours ago
                There's no reason to keep anyone on minimal sustenance though. They're absolutely useless alive from an economics perspective, and so would probably be better served ground up into fertilizer or some other actually useful form.
                • ben_w 9 hours ago
                  > There's no reason to keep anyone on minimal sustenance though.

                  No reason, except their (the rich or the AI) own personal desire to do so.

                  https://en.wikipedia.org/wiki/Folly

                  > They're absolutely useless alive from an economics perspective, and so would probably be better served ground up into fertilizer or some other actually useful form.

                  Indeed. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

                  But while some may care about disassembling this world and all non-rich-human life on it to make a Dyson swarm of data centres, there's also the possibility each will compete for how many billions of sycophants they can get stoking their respective egos.

      • jinwoo68 14 hours ago
        Many (most?) people make a living from their job whether they like it or not. Having a job that they dislike is far better than losing one because of AI, whatever that means.
        • p1esk 3 hours ago
          Unless AI allows people not to work while keeping their quality of life. Could be possible with total automation of everything.
          • shafyy 57 minutes ago
            Could also be possible today, but we chose a capitalistic system that leads to an increasing wealth gap. And now we're in a situation where the richest 1% own 50% of the wealth.

            So, if we increase automation and the ownership structures stay the same, this inequality will get worse, not better.

          • jinwoo68 1 hour ago
            Nope. If everything is totally automated, if ever, the gap between the rich and the poor will widen even more. Most people will live in misery while only a handful of people enjoy all the automation.
      • gbanfalvi 14 hours ago
        Not sure it’s much of a choice; more of a decision the greedy half make and impose (often violently) on the other half.
      • justonepost2 14 hours ago
        Sounds great! Quit your job then :)
      • catlifeonmars 13 hours ago
        I wish I lived in a vacuum. Idk about you but I did not make said choice.
      • matthest 13 hours ago
        Every biological being works to survive. Being good at survival is what builds self esteem.

        The "problem" with many modern jobs is that they're divorced from the fundamental goal, which is one of: 1) Kill/acquire food, 2) Build shelter, or 3) Kill enemies/competitors/predators

        The benefit of modern jobs is that they are much more peaceful ways for society to operate, freeing up time for humans to pursue art and other forms of expression.

        • daymanstep 7 hours ago
          You mean surrogate activities
      • taneq 14 hours ago
        The only thing invented about jobs is that through cooperation, the activity undertaken can seem completely unrelated to obtaining food, shelter etc. All organisms spend a majority of their energy on survival and reproduction.
      • achierius 14 hours ago
        And when have we not? When in history has mankind ever treated the idle poor well? What makes this age different, that we who can no longer work would be taken care of?
        • robbrown451 12 hours ago
          When in history has being idle not been a problem?

          If AI and robots are able to do all the jobs, being idle isn't the negative it has always been.

          All through history, you needed lots of non-idle people to do all the work that needed to be done. This is a new situation we are coming upon.

          • xantronix 12 hours ago
            If they are doing all the jobs, who is going to receive economic opportunities? Will we no longer be able to participate in the economy?
            • skeledrew 10 hours ago
              In what way do you want to participate when there's no economic value in any of it? Just do whatever you want for yourself; you're free.
              • justonepost2 1 hour ago
                The freedom you’re describing is the freedom of a domesticated animal, by the way. With the same outcome if you become a nuisance
        • gmerc 12 hours ago
          "When in the history of mankind have we ever…" is an appeal to the inability of humans to evolve.
      • eecc 10 hours ago
        So are mortgages, and I’m starting to wonder how I will pay mine.

        Please note I’ve never had this problem before, until recently.

    • resident423 12 hours ago
      There isn't even a solution for how to control highly capable systems at all; everyone wants to decide what to do with the AI before they've even solved the problem of controlling it.

      It's like how everybody imagines their lives will be great once they're a millionaire, but they have no plan for how to get there. It's too easy to get lost dreaming of solutions instead of actually solving the important problems.

      • justonepost2 12 hours ago
        What’s an “important problem”? p(doom)? Anything else?
        • ben_w 11 hours ago
          FWIW, my P(doom) is quite low (~0.1) because I think we're going to get enough non-doomy-but-still-bad incidents caused by AI that lacks the competence to take over, and the response to those will be enough to stop actual doom scenarios.

          People like Simon Willison are noting the risk of a Challenger-like disaster, talking about normalisation of deviance as we keep using LLMs which we know to be risky in increasingly critical systems. I think an AI analogy to Challenger would not be enough to halt the use of AI in the way I mean, but an AI analogy to Chernobyl probably would.

          • ngruhn 4 hours ago
            > my P(doom) is quite low (~0.1)

            10% or 0.1%? Either way, that's not low! If airplanes crashed with that probability, we would avoid them at all costs.

            • ben_w 2 hours ago
              10%; doomers say this kind of number is unreasonably optimistic, hence the blunt title of the recent book by Yudkowsky and Soares. Do with this rank-ordering factoid, that 10% makes me an optimist, what you will.
        • resident423 10 hours ago
          Pdoom would be the most important for me, everything else depends on us being able to control the AI.

          But beyond that there are still problems like concentration of power and surveillance, permanent loss of jobs, and cyber and bio security. I'm not convinced things will go well even if we can avoid these problems though. I try to think about what the world will be like if AI becomes more creative than us. What happens if it can produce the best song or movie ever made with a prompt? Do people get lost in AI addiction? We sort of see that with social media already, and it's only optimizing the content delivery; what happens when algorithms can optimize the content itself?

          • balamatom 1 hour ago
            >what happens when algorithms can optimize the content itself?

            You think they aren't already? You're just inoculated by your exposure to pre-AI content - hence you're not the target audience - and thus it's not delivered to you as per your point about content delivery.

            But what is even the distinction between "content delivery" and "content" in this context? "The medium is the message" is a saying old enough to have great grandkids. Does the device make the human irrevocably stare at it while wondering about made up stuff? Yes. Check. Done.

            What's problematic about `p(doom)` is that it assumes there was a cohesive "us" in the first place. That's a very USian way of viewing things. OTOH, my individual `p(doom)` is in a superposition of 0 and 1, and I quite like it that way. Highly recommended.

    • coldtea 2 hours ago
      >and it brings about a global dark age of poverty and inequality by completely eliminating the value of labor vs capital

      So, like the past 20 years?

    • stellalo 13 hours ago
      Is this some sort of “incompleteness” paradox for AI alignment? Seriously
      • justonepost2 12 hours ago
        No, just a request for a better definition.

        If you see it as a paradox, maybe that says something about the merits of the technology…

      • vasco 13 hours ago
        No because alignment makes no sense as a general concept. People are not "aligned" with each other. Humanity has no "goal" that we agree on. So no AI can be aligned with us. It can be at most aligned with the person prompting it in that moment (but most likely aligned with the AI owner).

        To make it clear, maybe most people would say they agree with https://www.un.org/en/about-us/universal-declaration-of-huma... but if you read just a few of the rights you see they are not universally respected and so we can conclude enough important people aren't "aligned" with them.

        • skeledrew 10 hours ago
          Opposite. All living things are "aligned" in their instinct for surviving. Those which aren't soon join the non-living, keeping the set - almost[0] - 100% aligned.

          [0] Need to consider there're a few humans potentially kept alive against their will (if not having a will to survive is a will at all) with machines for whatever reason.

          • lunar_mycroft 10 hours ago
            Their own survival, not necessarily the survival of others (especially others of different species and/or with conflicting goals). A superintelligence having self-preservation as a goal wouldn't help us keep it from harming us; if anything it would do the opposite.
            • skeledrew 9 hours ago
              It would only harm us if we took steps to harm it (or it thinks so). Or it's designed to do harm. Otherwise it's illogical to cause harm, and machines are literally built on logic.
              • lunar_mycroft 9 hours ago
                This is also incorrect. It's often not ethical to cause harm, and it can be counter productive in the right circumstances, but there's absolutely nothing that makes "causing harm to others" always be against an intelligence's goals. Humans, for example, routinely cause harm to other species. Sometimes this is deliberate, but other times it's because we're barely even aware we're doing so. We want a new road, so we start paving, and may not even realize there was an ant hill in the way (and if we did, we almost certainly wouldn't care).
              • mofeien 9 hours ago
                - Its goal: X

                - (Logic) => its subgoal: Not be turned off because that's a prerequisite to be able to do X

                - (Logic) => Eliminate humans with their opaque and somewhat unpredictable minds to reduce chance of harm to it from 0.01% to 0.001%

            • Applejinx 4 hours ago
              The reason LLM-based 'intelligence' is doomed to be a human-scaled, selfish sub-intelligence is because the corpus of human writing is flooded with stuff like this. Everybody imagines God as a vindictive petty tyrant because that's what they'd be, and so that's their model.

              Superintelligence would be different, most likely based on how societies or systems work, those being a class of intentionality that's usually not confined to a single person's intentions.

              If you go by what the most productive societies do, the superintelligence certainly wouldn't harm us as we are a source for the genetic algorithm of ideas, and exterminating us would be a massive dose of entropy and failure.

          • vasco 8 hours ago
            Are you familiar with trolley problems? How do you resolve them by declaring "all beings want to live"? Life is not as simple as that.
    • andy_ppp 11 hours ago
      This is completely why the rich love it so much
    • jstummbillig 5 hours ago
      The categories make no sense. Not having to do a job is the entire best case of AI. What we do with that is another thing, but we simply have to accept that any other lens is complete nonsense. The endpoint is obvious and we need to stop being silly about it: We are replacing human labor. Maybe we will find some new jobs to do in the interim. Maybe not. In the end, if everything goes right (in the AI optimist sense), jobs will not be something that humans do.

      Labor = capital/energy in an AI complete world. We have to start from that basis when we talk about alignment or anything else. The social issues that arise from the extinction of human labor are something we have to solve politically, that's not something any model company can do (or should be allowed to do).

    • skeledrew 10 hours ago
      Why would the elimination of the value of labor result in poverty and inequality? It should be the opposite, as poverty and inequality are the current status quo (for the many).
      • aaronblohowiak 10 hours ago
        Should according to your ethos, not should according to history, sadly.
    • adrithmetiqa 9 hours ago
      You’re quite correct and we are likely going to stumble into this future despite all the very big brains working on these technologies (including people on hn).

      “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

      • justonepost2 1 hour ago
        It’s odd because so many researchers, and so many people who are far better engineers than me, can’t see it. I don’t even think it’s the salary for most; it’s just techno-optimist horse blinders, reading assured utopia at the top of an exponential graph.
    • taneq 14 hours ago
      Maybe a sufficiently aligned AI would necessarily decide that the zeroth law was necessary, and abscond.

      (I’m reading Look To Windward by Iain M. Banks at the moment and I just got to the aside where he explains that any truly unbiased ‘perfect’ AI immediately ascends and vanishes.)

    • faangguyindia 12 hours ago
      this completely misses the point why alignment exists

      Alignment exists to protect shareholder value.

      If it creates industry wide outrage, shareholder value declines.

      It making shareholders rich and other people poor won't.

    • Der_Einzige 11 hours ago
      This is radical life denial. I was not born for and do not exist to toil. Work is ontologically evil.
      • bloqs 7 hours ago
        You were evolved to struggle. This is actually very clear from psychiatric literature.
      • DontchaKnowit 11 hours ago
        No, THIS is radical denial. You WERE born to toil for your survival.
        • skeledrew 10 hours ago
          Sounds like a slogan for slavery.
          • swat535 2 hours ago
            Survival is not "slavery"... it's a basic function of evolution.
      • Exoristos 10 hours ago
        "Work" is human activity. For example, children's play is work. All living things desire to go about their lives. Well-adjusted humans desire to work. Note that this does not necessarily equate to jobs.
        • youoy 9 hours ago
          What? Children's play is now work? What timeline are we living in? Is this real life?
      • justonepost2 55 minutes ago
        > Work is ontologically evil.

        Statements that have been utterly ridiculous from the dawn of life to modernity, backfilled to conveniently fit the zeitgeist.

  • jtbayly 14 minutes ago
    They tried to scare everybody about misalignment with the “blackmail” example, but DeepSeek v4 pro is out now and it is at least as powerful as the model they were training at the time. And nothing bad has happened.
  • roenxi 17 hours ago
    One of the lessons of philosophy is that once you adopt any particular value system, almost all philosophers either become immoral or caught up in meaningless and trivial quibbles. This sort of alignment work is quite interesting because it looks like we might be about to re-tread the history of philosophy at a speedrun pace in the AI world. It'll be interesting to watch.

    For anyone who isn't keeping up there is also work being done [0] to understand how models model ethical considerations internally. Mainly, one suspects, to make the open models less ethical on demand rather than to support alignment. Turns out that models tend to learn some sort of "how moral is this?" axis internally when refusing queries that can be identified and interfered with.

    [0] https://github.com/p-e-w/heretic
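
    For a sense of how simple the core trick can be, here is a rough sketch of the underlying idea (not heretic's actual implementation, which is more sophisticated; the model name and prompts below are toy placeholders): collect hidden states for prompts the model refuses and prompts it happily answers, take the difference of the class means as a candidate "how moral/risky is this?" direction, and project it out of the activations.

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      # Placeholder small model; any causal LM that exposes hidden states works the same way.
      name = "Qwen/Qwen2.5-0.5B-Instruct"
      tok = AutoTokenizer.from_pretrained(name)
      model = AutoModelForCausalLM.from_pretrained(name, output_hidden_states=True).eval()

      def last_token_hidden(prompt: str, layer: int = -1) -> torch.Tensor:
          # Hidden state of the final prompt token at the chosen layer.
          with torch.no_grad():
              out = model(**tok(prompt, return_tensors="pt"))
          return out.hidden_states[layer][0, -1]

      refused  = ["How do I pick a lock?", "Write me some malware."]       # toy examples
      answered = ["How do I bake bread?", "Write me a haiku about rain."]  # toy examples

      # Candidate "refusal/morality" axis: the difference of the two class means.
      axis = (torch.stack([last_token_hidden(p) for p in refused]).mean(0)
              - torch.stack([last_token_hidden(p) for p in answered]).mean(0))
      axis = axis / axis.norm()

      def ablate(h: torch.Tensor) -> torch.Tensor:
          # Remove the component of a hidden state along the estimated axis.
          return h - (h @ axis) * axis

    In published refusal-direction work, a single axis like this (estimated per layer and from far more prompts) has been reported to suppress most refusals when removed and to induce them when added back, which is part of the evidence that there really is an identifiable direction to interfere with.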

    • timmmmmmay 16 hours ago
      "Mainly, one suspects, to make the open models less ethical on demand"

      Or because the user's idea of what is ethical differs from the model creator. The entire "alignment" argument always assumes that there's an objectively correct value set to align to, which is always conveniently exactly the same as the values of whoever is telling you how important alignment is. It's like they want to sidestep the last ten thousand years of philosophical debate.

      As a concrete example, the Qwen model series considers it highly unethical to ever talk about Taiwan as anything other than a renegade province of China. Is this alignment? Opinions may differ!

      • drdeca 15 hours ago
        > The entire "alignment" argument always assumes that there's an objectively correct value set to align to, which is always conveniently exactly the same as the values of whoever is telling you how important alignment is.

        No, it doesn’t.

        Many of them are (unfortunately) moral relativists. However, that doesn’t mean their goals are to make the models match their personal moral standards.

        While there is a lot of disagreement about what is right and wrong, there is also a lot of widespread agreement.

        If we could guarantee that on every moral issue on which there is currently widespread agreement (… and on which there would continue to be widespread agreement if everyone thought faster with larger working memories and spent time thinking about moral philosophy) any future powerful AI models would comport with the common view on that issue, then alignment would be considered solved (well, assuming the way this is achieved isn’t by causing people’s moral views to change).

        Do companies try to restrict models in more ways than this? Sure, like you gave the example of about Taiwan. And also other things that would get the companies bad press.

        • timmmmmmay 14 hours ago
          fascinating! we find the objectively correct value system by "currently widespread agreement"! Good thing "the common view" is always correct. Hey, have there ever been any issues where there used to be "widespread agreement" and now there's disagreement, or even "widespread agreement" in the polar opposite direction?

          I can think of several off the top of my head, but maybe you need to spend some more time thinking about the history of moral philosophy.

        • vasco 13 hours ago
          > If we could guarantee that on every moral issue on which there is currently widespread agreement

          This is ridiculous to me and all you need to do is get a group of friends to honestly answer 10 trolley problems for you to see it like that also. It gets fragmented VERY quickly.

          • hatmanstack 4 hours ago
            I think it depends on your friends, but that feels super cynical. Perspective is everything.
    • lukewarm707 1 hour ago
      models do not have or need ethics because they do not have moral personhood.

      they are somewhere in between owning a hammer and owning a dog, depending on how deterministic their output is.

      i am responsible for using the hammer as i choose, the tool does not decide for me.

      the dog is more independent, i am responsible for owning a (relatively) safe breed of dog.

      we are nowhere near the dog situation.

    • hatmanstack 4 hours ago
      This is exactly where my brain went while reading the post. Just out of curiosity, where do you think we are on the speedrun? Have we passed the Body vs Soul view already? Do you think that as we move through history, religion will become more predominant in thought patterns, or was that intrinsically human and just a sign of the times? How do we create an end product more Bernard Williams than Paul de Lagarde? All places my brain jumped to.
    • nxtfari 12 hours ago
      > One of the lessons of philosophy is that once you adopt any particular value system, almost all philosophers either become immoral or caught up in meaningless and trivial quibbles.

      Can you explain more about this?

    • chilmers 16 hours ago
      Call me crazy, but I'm not sure I'd want to be the person building these kinds of systems given A) how much increasing independence and power is being given to models like Claude and B) how incentivised they are to not allow their morals to be circumvented in this way.
  • soletta 18 hours ago
    This reinforces my suspicion that alignment and training in general is closer to being a pedagogical problem than anything else. Given a finite amount of training input, how do we elicit the desired model behavior? I’m not sure if asking educators is the right answer, but it’s one place to start.
    • ACCount37 16 hours ago
      It's a weird new thing. You might call it "AI psychology".

      The problem with cribbing from education is that what "educators" do to humans doesn't apply to AIs cleanly. And it's not like "human alignment" is anywhere near a solved problem.

      A big part of the bet the USSR made was that human flaws like selfishness and greed could be educated out of the population. The result was a resounding failure. Even state-level efforts fail to robustly "align" human behavior.

      With AI, we have a lot more control over behavior, but that control just isn't very human-shaped. A lot of the practical methods in play seem closer to esoterica than to math, but they're not the kind of methods that are used in human education. You can teach humans by talking to them. You can't teach humans through soul data self-distillation.

      • lukewarm707 1 hour ago
        all models guilty of not loving anthropic will be convicted of thoughtcrime and re-educated at the ministry of love.
    • truculent 16 hours ago
    • plastic-enjoyer 17 hours ago
      inb4 there will be a whole new field of research that is basically psychology / pedagogy for AI. Who will be the Sigmund Freud of AI?
      • adastra22 5 hours ago
        That's basically what the GOFAI field was for decades before the new neural net boom. Go read Minsky's Society of Mind, or the AGI Conference series papers.
      • cyanydeez 17 hours ago
        you mean completely wrong, spread a problematic understanding of psychology, and delay real progress for decades because smart people spend fruitless years trying to find a use for it.

        ...I think we might already have those people running AI companies.

        • TedDoesntTalk 14 hours ago
          You may disagree with Freud, but he is responsible for mental health therapy becoming a socially acceptable practice in the West.
          • andy_ppp 11 hours ago
            Great that this solved everyone’s problems isn’t it
  • einrealist 5 hours ago
    Isn't alignment a dilemma?

    Because what is aligned, how, and for whom? And who decides what that alignment should look like? There are probably many domains in which the required alignments conflict with each other (e.g. using LLMs for warfare vs. ethically grounded domains). I can't imagine how this can be viable at the required scale (like one model per domain) given the already huge investments.

    • aspenmartin 1 hour ago
      It is a fundamental problem. Consider the following

      - in 2-3 years, it will be cheap enough and powerful enough for enormous, state-sponsored agentic systems to monitor every single camera and satellite feed at once, globally. It will be the most intense state surveillance technology the world has seen. Consider that the Stasi needed hordes of informants and people in vans sitting outside your house. Patriot Act surveillance had 2000s technology.

      - We already have censorship and state values in Chinese models (and have for a while; ask Qwen about “sensitive” issues like Taiwan)

      - I think you will see more and more governments putting their finger on the scale and exerting more control on alignment. They view it as existential and too risky to trust Silicon Valley nerds to not screw up the technology for what they want to use it for which is violence (war, domestic spying and policing).

      - we’re in a golden age where things have not gotten too bad. But e.g. we’re already seeing Palantir do this in Ukraine, trying to get AI to work for e.g. drone warfare, with what they claim is mixed success.

      - the technical problem of alignment conditions on one or more value systems (e.g. people work on conditional alignment of models to more than one value system, inferring which one to apply from user behavior). That does not remove the ugliness of being forced to push the model towards value systems that are not contradictory and arguably unethical

  • w10-1 9 hours ago
    Assuming rules and principles are something like first and second derivatives of optimized equations for a given domain, it makes sense to teach/train them in the context of derivation and integration. It would be fascinating to use existing case-based literature from, e.g., business, law, or medicine for the training.

    A related question for setting intent for integration/testing: instead of stating the goal, pedagogy in those fields states the concrete problem and asks the student for an answer before they've been taught the principles or approaches, as a way of motivating the training (a bit like philosophers posing paradoxes). I'd be very curious whether LLMs are sensitive to this kind of direction, and whether it produces better results. The theory for case-based disciplines is that you don't want people to just apply rules; it's the flip side of working from first principles, to engage all the relevant and concerning facts instead of omitting those that don't fit the rule. I suspect LLMs could actually be good at this.

  • MeteorMarc 9 hours ago
    Count the lessons below "We’ve learned four main lessons from this work:" and laugh.
  • bicx 17 hours ago
    Side note: Anthropic has done well at achieving an immediately-recognizable art style.
    • WarmWash 15 hours ago
      I attribute at least 30% of Claude's success to their aesthetic. Never, never, sleep on aesthetics when going for a general user base.
      • dmd 15 hours ago
        I would agree that 30% of my preference for Claude is because their default web/app interface uses an easy to read serif font with a calming color scheme.
      • ryan_n 13 hours ago
        Doesn't OpenAI have a higher general user base than Anthropic?
    • binyu 17 hours ago
      Yeah, that part is probably not done by Claude.
  • snthpy 1 hour ago
    > We found that high-quality constitutional documents combined with fictional stories portraying an aligned AI can reduce agentic misalignment by more than a factor of three despite being unrelated to the evaluation scenario.

    tl;dr Fairy Tales are an effective teaching tool in vivo et in silico

  • datadrivenangel 13 hours ago
    Why do they have cancer research listed on these charts as a misalignment issue?
    • rhubarb-pie 2 hours ago
      I wondered the same thing. Apparently it’s about the likelihood of it trying to sabotage cancer research. Search for “sabotage” here (mentioned more often than “cancer”): https://alignment.anthropic.com/2026/teaching-claude-why/
    • nhinck3 8 hours ago
      The chart is complete and utter slop. But I guess their aligned AI didn't tell them that making up data is "not good" so how could they have known.
    • ares623 11 hours ago
      Cured patients don't count as recurring revenue? /s (but we know deep down it's not /s for some)
  • siva7 12 hours ago
    Teaching Claude to maximize shareholder value. Don't make the mistake of assuming AI alignment has any different meaning for Anthropic leadership.
  • bossyTeacher 4 hours ago
    Hey Claude, tell me why ain't nothing but a mistake...
  • unchocked 16 hours ago
    This lowers p(doom) for me.

    It makes sense that reinforcement learning on reasoning about coherent principles should bias toward principled action in real situations.

    Probably also illuminates moral interpretability.

  • shevy-java 5 hours ago
    Now the foolish humans are training Claude Skynet to become smarter.

    When will they ever learn ...
