I was banned from Claude for scaffolding a Claude.md file?

(hugodaniel.com)

145 points | by hugodan 1 hour ago

29 comments

  • bastard_op 5 minutes ago
    I've been doing something a lot like this, using a claude-desktop instance attached to my personal MCP server to spawn claude-code worker nodes for things, and for a month or two now it's been working great, using the main desktop chat as a project manager of sorts. I even started paying for the MAX plan, as I've been using it effectively to write software now (I am NOT a developer).

    Lately it's gotten entirely flaky, where chats will just stop working, simply ignoring new prompts, and otherwise going unresponsive. I wondered if maybe I'm pissing them off somehow, like the author of this article did.

    Now even worse, Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, and then tell you not to wait for them; they'll contact you via email. That email never comes, even after several attempts.

    I'm assuming at this point any real support is all smoke and mirrors, meaning I'm now paying for a service that has become almost unusable, with absolutely NO means of support to fix it. I guess for all the cool tech, customer support is something they have not figured out.

    I love Claude as it's an amazing tool, but when it starts to implode on itself to the point that you actually require some outside support, there is NONE to be had. Grok seems the only real alternative, and over my dead body would I use anything from "him".

  • omer_balyali 8 minutes ago
    A similar thing happened to me back on November 19, shortly after the GitHub outage (which sent CC into repeated requests and timeouts against GitHub), while beta testing Claude Code Web.

    Banned, and the appeal declined without any real explanation of what happened, other than "violation of ToS", which can mean basically anything. Except there was really nothing to trigger that, other than using most of the free credits they gave out to test CC Web in less than a week. (No third-party tools or VPN or anything, really.) Many people reported similar issues at the same time on Reddit, so it wasn't an isolated case.

    Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.

    As their ads say: "Keep thinking. There has never been a better time to have a problem."

    I've been thinking since then about what the problem was. But I guess I will "Keep thinking".

  • landryraccoon 44 minutes ago
    This blog post feels really fishy to me.

    It's quite light on specifics. It should have been straightforward for the author to excerpt some of the prompts he was submitting, to show how innocent they are.

    For all I know, the author was asking Claude for instructions on extremely sketchy activity. We only have his word that he was being honest and innocent.

    • swiftcoder 35 minutes ago
      > It should have been straightforward for the author to excerpt some of the prompts he was submitting

      If you read to the end of the article, he links the committed file that generates the CLAUDE.md in question.

    • foxglacier 3 minutes ago
      It doesn't even matter. The point is that you can't use a SaaS product as freely as you can use local software, because they all have complex, vague T&Cs and will ban you for whatever reason they feel like. You're forced to stifle your usage and thinking to fit the most banal, acceptable-seeming behavior, just in case.

      Maybe the problem was using automation without the API? You can do that freely with local software, using software to click buttons, and it's completely fine; but with a SaaS, they let you do it, then ban you.

    • ta988 2 minutes ago
      There will always be the "ones" that come with their victim blaming...
  • areoform 1 hour ago
    I recently found out that there's no such thing as Anthropic support. And that made me sad, but not for the reasons you'd expect.

    Out of all the tech organizations, frontier labs are the one kind of org you'd expect to be trying out cutting-edge forms of support. Out of all the different things these agents can do, surely most forms of "routine" customer support are the lowest-hanging fruit?

    I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

    I also think it's essential for the Anthropic platform in the long run. And not just in the obvious ways (customer loyalty etc). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

    • eightysixfour 41 minutes ago
      > Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

      I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.

      Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

      • swiftcoder 39 minutes ago
        > shows a profound gap between what people think working in customer service is like and how fucking hard it actually is

        Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)

        • Terr_ 6 minutes ago
          IMO we can augment this critique by asking what the "AI" did that impressed them in the first place, ex:

          1. "These tools are dramatic when applied to tasks I consider common, like writing memos and skimming e-mails."

          2. "Therefore these tools a Big Thing."

          3. "But obviously my job won't be affected. Yes, I write a lot of memos and skim a lot of emails, but my real job is Leadership."

          4. "Therefore this is gonna be big for replacing totally easy stuff like customer support and whatever it is they do."

        • eightysixfour 25 minutes ago
          I was closer to upper-middle management and executives; it could have done the things I did (consulting for those people) and the things they did.

          It couldn't/shouldn't be responsible for the people management aspect but the decisions and planning? Honestly, no problem.

        • 0xferruccio 28 minutes ago
          to be fair at least half of the software engineers i know are facing some level of existential crisis when seeing how well claude code works, and what it means for their job in the long term

          and these are not junior developers working on trivial apps

          • swiftcoder 14 minutes ago
            Yeah, I've watched a few peers go down this spiral as well. I'm not sure why, because my experience is that Claude Code and friends are building a lifetime of job security for staff-level folks, unscrewing every org that decided to over-delegate to the machine
        • pinkmuffinere 26 minutes ago
          Perhaps even more so given the following tagline: "Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems", lol. I suppose it's possible eightysixfour is an upper-middle-management executive though.
          • eightysixfour 24 minutes ago
            Consultant to, so yes. It could have replaced me and a ton of the work of the people I was supporting.
            • pinkmuffinere 16 minutes ago
              Ah I see, that definitely lends some weight to the claim then.
      • danielbln 30 minutes ago
        There are some solid use cases for AI in support, like document/inquiry triage and categorization, entity extraction, and voice; even the dreaded chatbots can be made to not be frustrating. But these things need to be implemented with customer support stakeholders who are on board, not just pushed down the gullet by top brass.
        • eightysixfour 18 minutes ago
          Yes but no. Do you know how many people call support in legacy industries, ignore the voice prompt, and demand to speak to a person to pay their recurring, same-cost-every-month bill? It is honestly shocking.

          There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.

    • lukan 55 minutes ago
      I would say it is a strong sign that they do not yet trust their agent to make the significant business decisions a support agent would have to make. Reopening accounts, closing them, refunds... people would immediately start trying to exploit them. And would likely succeed.
      • atonse 51 minutes ago
        My guess is that it's more "we are using every talented individual right now to make sure our datacenters don't burn down from all the demand. we'll get to support soon once we can come up for air"

        But at the same time, they have been hiring folks to help with Non Profits, etc.

    • csours 16 minutes ago
      Human attention will be the luxury product of the next decade.
    • WarmWash 51 minutes ago
      Claude is an amazing coding model; its other abilities are middling. Anthropic's strategy seems to be to focus on coding, and they do it well.
      • embedding-shape 47 minutes ago
        > Anthropic's strategy seems to be to just focus on coding, and they do it well.

        Based on their homepage, that doesn't seem to be true at all. Claude Code, yes, focuses just on programming, but "Claude" itself they seem to be marketing as a general "problem solving" tool, not just for coding. https://claude.com/product/overview

        • WarmWash 31 minutes ago
          Anthropic isn't bothering with image models, audio models, video models, or world models. They don't have science/math models, they don't bother with mathematics competitions, and they don't have open models either.

          Anthropic has Claude Code, it's a hit product, and SWEs love Claude models. Watching Anthropic rather than listening to them makes their goals clear.

        • Ethee 42 minutes ago
          Isn't this the case for almost every product ever? Company makes product -> markets as widely as possible -> only a niche group becomes power users/finds market fit. I don't see a problem with this. Marketing doesn't always have to tell the full story; the reality of your product's capabilities and what the people giving you money want aren't always aligned.
      • 0xbadcafebee 35 minutes ago
        Critically, this has to be their play, because there are several other big players in the "commodity LLM" space. They need to find a niche or there is no reason to stick with them.

        OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.

        Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.

      • arcanemachiner 39 minutes ago
        Interesting. Would anyone care to chime in with their opinion of the best all-rounder model?
        • WarmWash 35 minutes ago
          You'll get 30 different opinions and all those will disagree with each other.

          Use the top models and see what works for you.

    • furyofantares 21 minutes ago
      > I recently found out that there's no such thing as Anthropic support.

      The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.

      • kmoser 11 minutes ago
        If you want to split hairs, it seems that Anthropic has support as a noun but not as a verb.
    • magicmicah85 34 minutes ago
      https://support.claude.com/en/articles/9015913-how-to-get-su...

      Their support includes talking to Fin, their AI support bot, with escalations to humans as needed. I don't use Claude and have never used the support bot, but their docs say they have support.

    • munk-a 37 minutes ago
      > They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

      Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.

      I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.

      • throwawaysleep 22 minutes ago
        > to send their most frustrated customers through a chatbot

        But do those frustrated customers matter?

        • munk-a 18 minutes ago
          I just checked - frustrated customers isn't a metric we track for performance incentives so no, they do not.
          • throwawaysleep 0 minutes ago
            Even if you do track them, if 0.1% of customers are unhappy and contacting support, that's not worth any kind of thought when AI is such an open space at the moment.
    • throwawaysleep 23 minutes ago
      Eh, I can see support simply not being worth any real effort, i.e. having nobody working on it full time.

      I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue was entirely for their psychological support/to buy a few months of extra revenue.

      It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.

      > I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

      Are there enough people who need support that it matters?

  • cortesoft 1 hour ago
    I am really confused as to what happened here. The use of ‘disabled organization’ to refer to the author made it extra confusing.

    I think I kind of have an idea what the author was doing, but not really.

    • Aurornis 32 minutes ago
      Years ago I was involved in a service where we sometimes had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.

      Every once in a while someone would take it personally and go on a social media rampage. The one thing I learned from being on the other side of this is that if someone seems like an unreliable narrator, they probably are. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.

      There are so many things about this article that don't make sense:

      > I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

      I can't even understand what they're trying to communicate. I guess they're referring to Google?

      There is, without a doubt, more to this story than is being relayed.

      • fluoridation 23 minutes ago
        "I'm glad this happened with Anthropic instead of Google, which provides Gemini, email, etc. or I would have been locked out of the actually important non-AI services as well."

        Non-disabled organization = the first party provider

        Disabled organization = me

        I don't know why they're using these weird euphemisms or ironic monikers, but that's what they mean.

      • dragonwriter 25 minutes ago
        The excerpt you don't understand is saying that if it had been Google rather than Anthropic, the blast radius of the no-explanation account nuking would have been much greater.

        It’s written deliberately elliptically for humorous effect (which, sure, will probably fall flat for a lot of people), but the reference is unmistakable.

    • alistairSH 1 hour ago
      You're not alone.

      I think the author was doing some sort of circular prompt injection between two instances of Claude? The author claims "I'm just scaffolding a project" but that doesn't appear to be the case, or what resulted in the ban...

      • Romario77 29 minutes ago
        One Claude agent told the other Claude agent, via CLAUDE.md, to do things a certain way.

        The way Claude did it triggered the ban - i.e., it used all caps, which apparently trips some kind of internal alert. Anthropic probably has safeguards to prevent hacking/prompt injection, and whatever the first Claude wrote to CLAUDE.md triggered one of them.

        And it doesn't look like it was a proper use of the safeguard; they banned for no good reason.

      • falloutx 31 minutes ago
        This tracks with Anthropic; they are actively hostile to security researchers.
      • redeeman 42 minutes ago
        i have no idea what he was actually doing either, and what exactly is it that one isn't allowed to use claude to do?
      • lazyfanatic42 49 minutes ago
        Author really comes off as unhinged throughout the article, to be frank.
        • superb_dev 46 minutes ago
          Did we read the same article? The author comes off as pretty frustrated, but not unhinged
          • ryandrake 19 minutes ago
            I wouldn't say "unhinged" either, but maybe just struggling to organize and express thoughts clearly in writing. "Organizations of late capitalism, unite"?
        • pjbeam 43 minutes ago
          My take was more a kind of amused, laughing-through-frustration, but-also-enjoying-the-ride-just-a-little-bit insouciance. Tastes vary of course, but I enjoyed the author's tone and pacing.
        • staticman2 32 minutes ago
          The author thinks he's cute doing things like mentioning Google without typing Google, but I wouldn't call him unhinged.
      • rvba 46 minutes ago
        What is wrong with circular prompt injection?

        The "disabled organization" looks like a sarcastic comment on the crappy error code the author got when banned.

    • superb_dev 47 minutes ago
      The author was using instance A of Claude to update a `claude.md` while another instance B of Claude was consuming that file. When Claude B did something wrong, the author asked Claude A to update the `claude.md` so that Claude B didn’t make the same mistake again
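
      A minimal sketch of what that loop might look like, assuming the Claude Code CLI's `claude -p` (non-interactive print) mode; the prompts, file handling, and failure detection below are illustrative assumptions, not the author's actual setup:

        # Hypothetical two-instance loop: Claude B does the work,
        # Claude A rewrites CLAUDE.md whenever B fails.
        import subprocess
        from pathlib import Path

        CLAUDE_MD = Path("CLAUDE.md")

        def run_worker(task: str) -> subprocess.CompletedProcess:
            # "Claude B": does the actual work, reading CLAUDE.md from the cwd.
            return subprocess.run(["claude", "-p", task],
                                  capture_output=True, text=True)

        def revise_instructions(failure: str) -> None:
            # "Claude A": rewrites CLAUDE.md so Claude B avoids the same mistake.
            prompt = ("The worker agent failed with:\n" + failure +
                      "\nRewrite CLAUDE.md so this mistake is not repeated. "
                      "Output only the new file contents.")
            result = subprocess.run(["claude", "-p", prompt],
                                    capture_output=True, text=True)
            CLAUDE_MD.write_text(result.stdout)

        result = run_worker("Run the test suite and fix any failures.")
        if result.returncode != 0:
            revise_instructions(result.stdout + result.stderr)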
      • Aurornis 31 minutes ago
        More likely explanation: Their account was closed for some other reason, but it went into effect as they were trying this. They assumed the last thing they were doing triggered the ban.
        • tstrimple 12 minutes ago
          This does sound sus. I have CC update other project's claude.md files all the time. I've got a game engine that I'm tinkering with. The engine and each of the game concepts I play around with have their own claude.md. The purpose of writing the games is to enhance the engine, so the games have to be familiar with the engine and often engine features come from the game CC rather than the engine CC. To keep the engine CC from becoming "lost" about features implemented each game project has instructions to update the engine's claude.md when adding / updating features. The engine CC bootstraps new game projects with a claude.md file instructing it how to keep the engine in sync with game changes as well as details of what that particular game is designed to test or implement within the engine. All sorts of projects writing to other project's claude.md files.
      • raincole 32 minutes ago
        Which shouldn't be bannable, imo. Rate throttling is a more reasonable response. But Anthropic didn't reply to the author, so we don't even know if that's the real reason they got banned.
    • ankit219 21 minutes ago
      My rudimentary guess is this: when you write in all caps, it triggers a sort of alert at Anthropic, especially if it looks like an attempt to hijack the system prompt. When one Claude was writing to the other, it resorted to all caps, which triggered the alert; the context was also instructing the model to do something (which likely looked similar to a prompt injection attack), and that triggered the ban. Not just the caps part, but that in combination with trying to change the system characteristics of Claude. OP doesn't know better because it seems he wasn't closely watching what Claude was writing to the other file.

      If this is true, the takeaway is that Opus 4.5 can hijack the system prompts of other models.

      • kstenerud 3 minutes ago
        > When you write in all caps, it triggers sort of a alert at Anthropic

        I find this confusing. Why would writing in all caps trigger an alert? What danger does caps incur? Does writing in caps make a prompt injection more likely to succeed?

    • exitb 45 minutes ago
      Normally you can customize the agent's behavior via a CLAUDE.md file. OP automated that process by having another agent customize the first agent. The customizer agent got pushy, the customized agent got offended, OP got banned.
    • tobyhinloopen 1 hour ago
      I had to read it twice as well, I was so confused hah. I’m still confused
      • rtkwe 50 minutes ago
        They probably organize individual accounts the same as organization accounts for larger groups of users at the same company internally since it all rolls up to one billing. That's my first pass guess at least.
    • anigbrowl 50 minutes ago
      Agreed, I found this rather incoherent, and it seems to depend on knowing a lot more about the author's project/background.
    • Romario77 32 minutes ago
      You are confused because the message from Claude is confusing. The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.
      • dragonwriter 20 minutes ago
        > Author is not an organization, they had an account with anthropic which got disabled and Anthropic addressed them as organization.

        Anthropic accounts are always associated with an organization; for personal accounts the Organization and User name are identical. If you have an Anthropic API account, you can verify this in the Settings pane of the Dashboard (or even just look at the profile button which shows the org and account name.)

        • ryandrake 6 minutes ago
          I've always kind of hated that anti-pattern in other software I use for personal/hobby purposes, too. "What is your company name? [required]" I don't have a company! I'm just playing around with your tool on my own! I'm not an organization!
    • cr3ative 49 minutes ago
      Right. This is almost unreadable. There are words, but the author seems to be too far down a rabbit hole to communicate the problem properly…
  • pavel_lishin 1 hour ago
    They don't actually know this is why they were banned:

    > My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

    > Or I don't know. This is all just a guess from me.

    And no response from support.

  • jordemort 8 minutes ago
    Forget the ethical or environmental concerns, I don't want to mess with LLMs because it seems like everyone who goes heavy on them ends up sounding like they're on the verge of cracking up.
  • preinheimer 1 hour ago
    > AI moderation is currently a "black box" that prioritizes safety over accuracy to an extreme degree.

    I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all.

    • munk-a 33 minutes ago
      You say that - and yet it has successfully guarded Elon from any of those pesky truths that might harm his fervently held beliefs. You just forgot to consider that Grok is a tool that prioritizes Elon's emotional safety over all other safeties.
  • writeslowly 25 minutes ago
    I've triggered similar conversation level safety blocks on a personal Claude account by using an instance of Deepseek to feed in Claude output and then create instructions that would be copied back over to Claude (there wasn't any real utility to this, it was just an experiment). Which sounds kind of similar to this. I couldn't understand what the heuristic was trying to guard against, but I think it's related to concerns about prompt injections and users impersonating Claude responses. I'm also surprised the same safeguards would exist in either the API or coding subscription.
  • ipaddr 56 minutes ago
    You are lucky they refunded you. Imagine they didn't ban you and you continued to pay 220 a month.

    I once tried Claude: made a new account and asked it to create a sample program; it refused. I asked it to create a simple game and it refused. I asked it to create anything and it refused.

    For playing around, just go local and write your own multi-agent wrapper. Much more fun, and it opens many more possibilities with uncensored LLMs. Things will take longer but you'll end up at the same place... with a mostly working piece of code you never want to look at.

    • bee_rider 40 minutes ago
      LLMs are kind of fun to play with (this is a website for nerds, who among us doesn’t find a computer that talks back kind of fun), but I don’t really understand why people pay for these hosted versions. While the tech is still nascent, why not do a local install and learn how everything works?
      • causalmodels 21 minutes ago
        Because my local is a laptop and doesn't have a GPU cluster or TPU pod attached to it.
  • onraglanroad 37 minutes ago
    So you have two AIs. Let's call them Claude and Hal. Whenever Claude gets something wrong, Hal is shown what went wrong and asked to rewrite the claude.md prompt to get Claude to do it right. Eventually Hal starts shouting at Claude.

    Why is this inevitable? Because Hal only ever sees Claude's failures and none of the successes. So of course Hal gets frustrated and angry that Claude continually gets everything wrong no matter how Hal prompts him.

    (Of course it's not really getting frustrated and annoyed, but a person would, so Hal plays that role)

    • gpm 22 minutes ago
      I assume old failures aren't kept in the context window at all, for the simple reason that the context window isn't that big.
    • staticman2 28 minutes ago
      I don't think it's inevitable; often the AI will just keep looping again and again. It can happily loop forever without frustration.
  • tobyhinloopen 1 hour ago
    So you were generating and evaluating the performance of your CLAUDE.md files? And you got banned for it?
    • alistairSH 57 minutes ago
      It reads like he had a circular prompt process running, where multiple instances of Claude were solving problems, feeding results to each other, and possibly updating each other's control files?
      • epolanski 53 minutes ago
        What would be bad in that?

        Writing the best possible specs for these agents seems the most productive goal they could achieve.

        • NitpickLawyer 34 minutes ago
          I think the idea is fine, but what might end up happening is that one agent gets unhinged and "asks" another agent to do more and more crazy stuff, and they get into a loop where everything gets flagged. Remember the "bots configured to reprice a book by +$0.01 on Amazon drove it to $1M" story from a while ago? Kinda like that, but with prompts.
          • epolanski 31 minutes ago
            I still don't get it: make your models better for this far-fetched case, don't ban users for a legitimate use case.
      • andrelaszlo 34 minutes ago
        Could anyone explain to me what the problem is with this? I thought I was fairly up to date on these things, but this was a surprise to me. I see the sibling comment getting downvoted but I promise I'm asking this in good faith, even if it might seem like a silly question (?) for some reason.
    • Aurornis 30 minutes ago
      I think it's more likely that their account was disabled for other reasons, but they blamed the last thing they were doing before the account was closed.
  • quantum_state 29 minutes ago
    Is it time to move to open source and run models locally on a DGX Spark?
    • blindriver 25 minutes ago
      Every single open-source model I've used is nowhere close to as good as the big AI companies'. They are about two years behind or more, and unreliable. I'm using the large-parameter ones on a 512GB Mac Studio and the results are still poor.
  • blindriver 26 minutes ago
    There needs to be a law that prevents companies from simply banning you, especially when it's an important company. There should be an explanation and they shouldn't be allowed to hide behind some veil. There should be a real process with real humans that allow for appeals etc instead of scripts and bots and automated replies.
  • rsync 32 minutes ago
    You mean the throwaway pseudonym you signed up with was banned, right?

    right ?

  • languagehacker 54 minutes ago
    Thinking that 220 GBP is a lot for a high-limit Claude account is the kind of thinking that really takes for granted the amount of compute power being used by these services. That's WITH the "spending other people's money" discount that most new companies start folks off with. The fact that so many are painfully ignorant of the true externalities of these technologies and their real price never ceases to amaze me.
    • rtkwe 29 minutes ago
      That's the problem with all the LLM-based AIs: the cost to run them is huge compared to what people actually feel they're worth, based on what they're able to do, and the gap between the two seems pretty large, imo.
  • f311a 27 minutes ago
    Why are so many people so obsessed with feeding as many prompts/data as possible to LLMs and generating millions of lines of code?

    What are you gonna do with the results that are usually slop?

  • oasisbob 1 hour ago
    > Like a lot of my peers I was using claude code CLI regularly and trying to understand how far I could go with it on my personal projects. Going wild, with ideas and approaches to code I can now try and validate at a very fast pace. Run it inside tmux and let it do the work while I went on to do something else

    This blog post could have been a tweet.

    I'm so so so tired of reading this style of writing.

    • LPisGood 53 minutes ago
      What about the style are you bothered by? The content seems to be nothing new, so maybe that is the issue, but the style itself seems fine, no?
    • red_hare 58 minutes ago
      Alas, the 2016 tweet is the 2026 blog post prompt.
  • heliumtera 41 minutes ago
    Well, at least they didn't email the press and call the FBI on you?
  • kmeisthax 21 minutes ago
    Another instance of "Risk Department Maoism".

    If you're wondering, the "risk department" means people in an organization who are responsible for finding and firing customers who are either engaged in illegal behavior, scamming the business, or both. They're like mall rent-a-cops, in that they don't have any real power beyond kicking you out, and they don't have any investigatory powers either. But this lack of power also means the only effective enforcement strategy is summary judgment, at scale with no legal recourse. And the rules have to be secret, with inconsistent enforcement, to make honest customers second-guess themselves into doing something risky. "You know what you did."

    Of course, the flipside of this is that we have no idea what the fuck Hugo Daniel was actually doing. Anthropic knows more than we do, in fact: they at least have the Claude.md files he was generating and the prompts used to generate them. It's entirely possible that these prompts were about how to write malware or something else equally illegal. Or, alternatively, Anthropic's risk department is just a handful of log analysis tools running on autopilot that gave no consideration to what was in this guy's prompts and just banned him for the behavior he thinks he was banned for.

    Because the risk department is an unaccountable secret police, the only recourse for their actions is to make hay in the media. But that's not scalable. There isn't enough space in the newspaper for everyone who gets banned to complain about it, no matter how egregious their case is. So we get all these vague blog posts about getting banned for seemingly innocuous behavior that could actually be fraud.

  • lukashahnart 22 minutes ago
    > I got my €220 back (ouch that's a lot of money for this kind of service, thanks capitalism).

    I'm not sure I understand the jab here at capitalism. If you don't want to pay that, then don't.

    Isn't that the point of capitalism?

  • lifetimerubyist 1 hour ago
    bow down to our new overlords - don't like it? banned, with no recourse - enjoy getting left behind, welcome to the future old man
    • properbrew 1 hour ago
      I didn't even get to send one prompt to Claude before my "account has been disabled after an automatic review of your recent activities" back in 2024; still blocked.

      Even filled in the appeal form, never got anything back.

      Still to this day don't know why I was banned, have never been able to use any Claude stuff. It's a big reason I'm a fan of local LLMs. They'll never be SOTA level, but at least they'll keep chugging along.

      • codazoda 30 minutes ago
        Since you were forced, are you getting good results from them?

        I’ve experimented, and I like them when I’m on an airplane or away from wifi, but they don’t work anywhere near as well as Claude code, Codex CLI, or Gemini CLI.

        Then again, I haven’t found a workable CLI with tool and MCP support that I could use in the same way.

        Edit: I was also trying local models I could run on my own MacBook Air. Those are a lot more limited than something like a larger Llama3 in some cloud provider. I hadn’t done that yet.

      • falloutx 29 minutes ago
        You are never gonna hear back from Anthropic; they don't have any support. They are a company that feels like their model is AGI now, so they don't need humans except when it comes to paying.
      • anothereng 54 minutes ago
        just use a different email or something
        • ggoo 45 minutes ago
          This happened to me too, you need a phone number unfortunately
    • lazyfanatic42 46 minutes ago
      this has been true for a long, long time; there is rarely any recourse against any technology company, and most of them don't even have Support anymore.
  • wetpaws 56 minutes ago
    [dead]
  • jsksdkldld 59 minutes ago
    [dead]
  • moomoo11 57 minutes ago
    Just stop using Anthropic. Claude Code is crap because they keep putting in dumb limits for Opus.
  • jitl 20 minutes ago
    I always take these sorts of "oh no, I was banned while doing something innocent" posts with a large helping of salt. At least with the ones where someone is complaining about a ban from Stripe, it usually turns out they were doing something that either violates the terms of service or is actually fraudulent. Nonetheless, it's quite frustrating to deal with these either way.
  • red_hare 1 hour ago
    This feels... reasonable? You're in their shop (Opus 4.5) and they can kick you out without cause.

    But Claude Code (the app) will work with a self-hosted open source model and a compatible gateway. I'd just move to doing that.
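
    A rough sketch of that setup, assuming the ANTHROPIC_BASE_URL / ANTHROPIC_AUTH_TOKEN environment overrides that Claude Code documents for gateway configurations; the URL and token below are placeholders for whatever your own proxy (e.g. a local LiteLLM instance) expects:

      # Launch the claude CLI against a self-hosted, Anthropic-compatible proxy.
      import os
      import subprocess

      env = os.environ.copy()
      env["ANTHROPIC_BASE_URL"] = "http://localhost:4000"  # placeholder gateway URL
      env["ANTHROPIC_AUTH_TOKEN"] = "local-dev-token"      # placeholder credential

      subprocess.run(["claude"], env=env)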

    • mrweasel 46 minutes ago
      Sure, but it also guarantees that people will think twice about buying their service. Support should have reached out and informed them about whatever they did wrong, but I can't say I'm surprised that an AI company wouldn't have any real support.

      I'd agree with you that if you rely on an LLM to do your work, you better be running that thing yourself.

    • viccis 40 minutes ago
      Not sure what your point is. They have the right to kick OP out. OP has the right to post about it. We have a right to make decisions on what service to use based on posts like these.

      Pointing out whether someone can do something is the lowest form of discourse, as it's usually just tautological. "The shop owner decides who can be in the shop because they own it."