Ask HN: What's the latest consensus on OpenAI vs. Anthropic $20/month tier?

I'm considering $20/month variants only.

I've had a Claude subscription for the past year, although I only really started properly using LLMs in the past couple of months. With Opus, I get about 5 messages every 5 hours (fairly small codebase); more with Sonnet. I then cancelled that, since it's practically unusable, and got a ChatGPT sub about a week ago. Currently using it with 5.4 High and I haven't had to worry about limits. But the code it produces is definitely "different" and I need to plan more in advance. Its plan mode is also not as precise as Claude's (it doesn't lay out the method stubs it plans to implement, etc.), so I suppose I may need to change how I work with it? Lastly, for normal chats it produces significantly more verbose output (with personality set to Efficient) and is fast (with Thinking), but often it feels as though it's not as thorough as I'd like it to be.

My question: is this a "you're holding it wrong" type of situation, where I just need to get used to a different mode of interaction? Or are others noticing material differences in quality? Ideally I'd like to stick with ChatGPT due to the borderline impractical limits with Anthropic.

10 points | by whatarethembits 1 day ago

8 comments

  • Areena_28 6 hours ago
    I know that with Claude, hitting the 5-messages-every-5-hours limit mid-task is a real workflow problem.

    Many times I'd hit Claude limits mid-task and switch to ChatGPT, thinking the no-limit thing would make up for it. But it's really annoying.

    I ended up just being more deliberate about how I use Claude: longer, more complete prompts instead of back-and-forth, which naturally stretches the messages further. Not a perfect fix, but it changed the experience enough to stick with it.

  • kasey_junk 14 hours ago
    I think it really depends on how fully formed your AI workflows are. I have a very opinionated set of skills and agents files, and a harness for running prompts against both for code production.

    I do head-to-head comparisons with this setup pretty regularly, and what I've found is there is not much difference in outcomes between the two frontier labs at equivalent model settings. It's hard to get statistically significant results on my budget and eval ability, but my anecdotal feeling is that there is as much variation in outcomes within each model as between them.

    Given that setup, I use Codex much more than Claude because it's more reliable.

    But I believe it’s easier to go from nothing to decent with Claude.

    For other stuff I use Claude.

  • jaysethi 20 hours ago
    I find Claude to be more opinionated. Their heavy focus on alignment really helps it embody a personality. More of this in their [system card](https://www-cdn.anthropic.com/6d8a8055020700718b0c49369f6081...)
  • 01jonny01 13 hours ago
    Claude is good for producing one-shot polished apps, but you will quickly burn through your allowance.

    ChatGPT needs more prompting to get what you want, but it's nearly impossible to reach your limit.

  • pcael 1 day ago
    Have you tried the Claude console client?
    • whatarethembits 1 day ago
      Do you mean Claude Code? If so, that's what I use(d) primarily for development, and Claude Desktop for general chats. My issue with Opus was that, every time I started a new task in Plan mode, it'd use 50k - 100k tokens, and that'd be about 20% of the session limit. A bit of back and forth and it's done for most of the work day. Just not feasible at all. The tasks I wanted it to perform were fairly small and contained: "Look at these three files @@@ and add xxx to @file. DON'T read any other files. If you need more context, ask me." That worked sometimes but not always, and still burned a lot of tokens.
      • pcael 1 day ago
        Yes, I meant the Claude Code client. Indeed, Opus is a token eater; I usually use Sonnet because of that.
  • khaledh 1 day ago
    I use both at the same time:

    - Claude Opus for general discussion, design, reviews, etc.

    - Codex GPT-5.4 High for task breakdown and implementation.

    I often feed their responses to each other (manual copy/paste) to validate/improve the design and/or implementation. The outcome has been better than using one alone.

    This workflow keeps Claude's usage in check (it doesn't eat as many tokens) and leverages Codex's generous usage limits. Although sometimes I run into Codex's weekly limit and need to purchase additional credits: 1000 credits for $40, which last another 4-5 days (these usually overlap with my weekly refresh, so not all the credits are used up).

  • truepricehq 22 hours ago
    [dead]
  • Remi_Etien 1 day ago
    [dead]