Looks like this was restored 2 weeks ago[0], 3 days after Anthropic said OpenClaw requires extra usage[1]. At this point, it's hard to take this seriously. No official statement and not even a tweet?
> Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again
Anthropic staff have made contradictory statements on Twitter and have corrected each other. Their attempts at clarification have only led to more confusion.
> OpenClaw treats Claude CLI reuse and claude -p usage as sanctioned for this integration unless Anthropic publishes a new policy.
Oh cool, so everything is back to business now, until they all of a sudden update their policy tomorrow and retract everything.
Anthropic have proven themselves to be unreliable when it comes to CC. Switching to other providers is the best way to go if you want to keep your sanity.
Oh, that's interesting. Right after they signed the deal with Amazon, so maybe it was all compute-constrained. In any case, I tried using the Codex $20/mo plan and the limits are so low I can hardly get anywhere before my agent swaps to a different agent.
I'm somewhat suspicious that if I do this without an official Anthropic notice I'll lose my precious $200/mo Max account, so I'll sit tight for a while.
I didn't even use OpenClaw and Anthropic disabled my account with no explanation beyond "suspicious signals". If anyone has found a way out of that, I'd be curious to hear it; I genuinely have no idea what I did wrong, and the Google Docs form I filled out to appeal never got me any reply.
I got sick of the inconsistency caused by Anthropic tinkering with Claude Code and had canceled my 20x. My plan was to switch to Codex so I could use it in Pi.
I am specifically talking about switching because of the harness, not model quality. Anyone else match my experience?
I wonder how many other people recently did the same. It would be prudent of Anthropic to let people use Pro/Max OAuth tokens with other harnesses I think. Even though I get why they want to own the eyeballs.
I’ve been using Codex Pro since they lobotomized Opus 4.6. Codex is so much better, GPT 5.4 xhigh fast is definitely the smartest and fastest model available.
For a while there I had both Opus 4.6 and Codex access, and I frequently pitted them against each other; I never once saw Opus come out ahead. Opus was good as a reviewer, though; as an implementer it just felt lazy compared to 5.4 xhigh.
One feature that I haven’t seen discussed much is how Codex has auto-review on tool runs. No longer are you a slave to all-or-nothing confirmations or endless bugging; that’s such a bad pattern.
Even in a week of heavy duty work and personal use I still haven’t been able to exhaust the usage on the $200 plan.
I’ll probably change my mind when (not if) OpenAI rug-pulls, but for spring ‘26, Codex is definitely the better deal.
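The "auto-review" pattern mentioned above can be sketched roughly like this: classify each proposed tool run and only escalate to the user when it looks risky, instead of confirming everything or nothing. All names and rules here are illustrative guesses, not Codex's actual implementation.

```python
# Hypothetical tool-run review policy: auto-approve obviously safe,
# read-only commands; ask the user about anything that mutates state.
SAFE_PREFIXES = ("ls", "cat", "git status", "git diff", "grep")
RISKY_TOKENS = ("rm ", "sudo ", "chmod ", "> ")

def review(command: str) -> str:
    """Return 'auto-approve' or 'ask' for a proposed tool run."""
    if any(tok in command for tok in RISKY_TOKENS):
        return "ask"           # escalate risky commands to the user
    if command.startswith(SAFE_PREFIXES):
        return "auto-approve"  # read-only commands run without nagging
    return "ask"               # default to asking when unsure

print(review("git status"))    # auto-approve
print(review("rm -rf build"))  # ask
```

The point is the middle ground: the confirmation burden scales with risk rather than being a global on/off switch.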
It really depends on what you're trying to do and what your skillset is.
But if you go information architecture first and have that codified in some way (especially if you already have the templates), then you can nudge any agent to go straight into CSS and it will produce something reasonable.
I left Anthropic a while ago because of similar shenanigans earlier. I went with opencode & zen.
I still have their subscription, but am using pi now, mainly because something happened that made my opencode sessions unusable (I can't continue them; they just blank out; I assume something in the sqlite is fucked), and I can't be bothered to debug it.
For what I use the agents, the Chinese models are enough
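For what it's worth, the "something in the sqlite is fucked" theory is quick to test with SQLite's built-in integrity check. The helper below uses only Python's standard library; where opencode actually keeps its session database is not something I know, so the path you pass in is up to you.

```python
# Check whether a SQLite file reports corruption via PRAGMA integrity_check.
import sqlite3

def integrity_ok(db_path: str) -> bool:
    """Return True if PRAGMA integrity_check reports 'ok' for the database."""
    with sqlite3.connect(db_path) as conn:
        (result,) = conn.execute("PRAGMA integrity_check").fetchone()
    return result == "ok"

# Demo against a throwaway in-memory database:
print(integrity_ok(":memory:"))  # True
```

If this returns False on the real session database, `.recover` in the sqlite3 shell is the usual next step.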
Wouldn't using pi be against their terms of use about having to go through the Claude Code CLI for all Max plan usage? (I had used Droid with Max previously; it was a great combo.)
I also cancelled my 20x and switched to Codex. At this point even the Codex CLI seems to perform better than Claude Code... And so far I'm on the OpenAI Pro plan and haven't even needed to upgrade to their $100/mo plan. I'm getting more value at almost a tenth of the price.
My experience is the opposite of this thread's consensus. Context: Full time SWE, working on large and messy codebase. Not working on crazy automations, working on fixing bugs, troubleshooting crashes, implementing features.
Anthropic models write much better code: easy to follow, reasonable, and very close to what I'd have done if I had the time... OpenAI's, on the other hand, generate extremely complex solutions to the simplest problems.
I was so disappointed by non-Anthropic models that for a couple of weeks I only used Anthropic models, but based on this thread, I'll go back and give the others another try. It's good to retry things every couple of weeks.
Of course, I was annoyed when they lobotomized 4.6; the difference was night and day, and Anthropic is certainly not a company I trust. In my opinion, it shows their willingness to rug-pull, so I'm looking at other approaches. Since 4.7, things went back to normal: things you'd expect to work just work.
Because the harness is the moat and the key IP now, not the models themselves. That's the why! For both OpenAI and Anthropic, with all the money they've raised and the compute they've acquired and have on the books, no one can easily replicate them; who can afford all those datacenters of interconnected Nvidia GPUs? That's why OpenAI throws you a bone and gives you an open-source SDK harness, but not the one they actually use for ChatGPT. But now both of them have to deliver on all the bull-shet they said these models can do... and the truth is they cannot. So the bubble bursts and we will see what happens. We all had to buy iPhones or MacBooks, so that makes sense; we all use Chrome or Google Search, Instagram, TikTok.
All these models and agents are shortcuts that let us be lazy and play games and watch YouTube or Netflix, because we use them to work less. Well, the party will be over soon.
I don’t think I’ve seen a more confused and shambolic product strategy since Google’s absurd line of GChat rebrandings.
Last year I was excited about the constant forward progress on models, but since February or so it's just been a mess and I want off this ride.
Either way I’m going to wait for “official” word from Anthropic, which I guess at this point will probably be a “Tell HN”, a Reddit text post, or a Xitter post from some random employee’s personal account, because apparently that’s the state of corporate communication now.
It's the tail end of the bubble; things are just ridiculous now. Models can't make leap improvements, and now you have the enterprise to deal with, and for the enterprise it's not about disruption, so you can't break the wheel; you just need to make everyone work less.
But the bills come due: you have to pay AWS because you need the servers, but also pay for AI agents that make mistakes, while everyone hopes they'll work just by typing "do x" or "do y". And now they've actually invented, engineered, and deployed something called Adaptive Thinking, where the models can allocate zero reasoning tokens. It's game over, but it was over regardless; there is nothing special about the models, and they're now training them even on YouTube, and soon Twitter (X), TikTok, and bullshit. All those Nvidia GPUs interconnected via NVLink are definitely powerful supercomputers, but the "software", let alone the "AI", is not there yet, and OpenAI is worth close to a trillion dollars... I mean, come on!
The rug-pull risk is why each of my automations has its own model setting, so swapping a given one from Claude to GPT is a dropdown instead of a rebuild.

Runs as a cloud service, so nothing to maintain locally. Built it at atmita.com if useful.
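The per-automation model setting described above amounts to something like the sketch below: each automation carries its own provider/model pair, so a rug-pull means changing one field rather than rebuilding the pipeline. All names here are hypothetical, not atmita.com's actual schema.

```python
# Each automation stores its own backend config; swapping providers
# is an in-place update of that one record (the "dropdown").
AUTOMATIONS = {
    "triage-issues": {"provider": "anthropic", "model": "claude-opus"},
    "draft-replies": {"provider": "openai", "model": "gpt-5"},
}

def swap_model(name: str, provider: str, model: str) -> None:
    """Change one automation's backend without touching its logic."""
    AUTOMATIONS[name].update(provider=provider, model=model)

# A policy change hits Claude? Move just the affected automation:
swap_model("triage-issues", "openai", "gpt-5")
print(AUTOMATIONS["triage-issues"]["provider"])  # openai
```

The design point is that the automation's logic never references a provider directly, so no code depends on which model happens to be behind it.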
[0]: https://github.com/openclaw/openclaw/commit/d378a504ac17eab2...
[1]: https://news.ycombinator.com/item?id=47633396
Codex is abysmal for UI design imo.
Plus I like being able to switch a model.
Had to stop because they don't like us proxying requests anymore.
I'm confused by the comments being full of people swearing off Claude, feels like real HN bubble stuff.
Contrast that with what GitHub did, which was to pause new customers to ensure quality remained high and things stayed stable.