Three AI pitches shrink under outside scrutiny: Anthropic, OpenAI, Zig
A $900B valuation, a hardened login, and an LLM contributor ban each shrink when outside auditors check the underlying math.
Three AI pitches shrink under outside scrutiny: Anthropic, OpenAI, Zig
TL;DR
- Anthropic is fielding $850–900B preemptive offers on a $30B ARR figure OpenAI disputes; a March Pentagon FASCSA designation has already cost federal seats.
- Zig’s blanket LLM ban is justified by contributor economics, not code quality — Bun’s 4x compile-time speedup won’t be upstreamed because it was AI-assisted.
- OpenAI’s Advanced Account Security ships passkey-only sign-in with zero recovery, but does nothing for Codex agent prompt-injection that already leaks API keys.
- Infrastructure briefs: OpenAI expands Stargate, AWS capex climbs, SoftBank floats a $100B robotics-for-datacenters IPO, drone strikes halt Middle East buildouts.
- GitHub shifts Copilot to usage-based pricing on inference costs; a leaked Codex prompt forbids talking about goblins; a new lawsuit ties Altman to a school shooting case.
Today’s AI news shares an awkward shape: in every feature, an industry claim meets the people whose job is to check it — and shrinks. Anthropic’s preemptive $850–900B mark rests on a $30B ARR figure OpenAI publicly disputes, on employees who refused to sell into April’s prior tender, and on a March Pentagon FASCSA designation that’s already cost federal seats — none of which appears in the deal narrative. OpenAI’s new Advanced Account Security delivers real, recovery-free hardening at the login, but stops at the door of the Codex agent runtime, where prompt-injection exploits already leak API keys. The Zig project’s blanket LLM ban is argued not on code quality but on contributor economics: maintainers bet on the human, not the patch, and AI makes the bet unreadable.
The briefs sketch the same gap at the infrastructure layer — Stargate expansion, AWS capex climbing, SoftBank floating a $100B robotics-for-datacenters IPO, drone strikes halting Middle East buildouts, GitHub conceding Copilot’s flat-rate math doesn’t work, and a lawsuit calling Sam Altman “the face of evil.”
Anthropic’s $900B mark rests on three numbers nobody is auditing
Source: techcrunch-ai · published 2026-04-30
TL;DR
- Anthropic is fielding preemptive offers at $850–900B on a $30B ARR — roughly 30x, actually a multiple compression from 2025.
- OpenAI disputes the revenue figure; on net accounting it’s closer to $22B, with gross margins reportedly torched to ~40%.
- A March 2026 Pentagon FASCSA designation has already cost federal seats at Treasury and State — and isn’t in the deal narrative.
- Employees refused to sell into April’s $350B tender, leaving a thin float supporting the secondary mark.
The revenue line that justifies the price
A $900B mark on $30B of ARR pencils to about 30x — high, but actually down from the ~39x Anthropic commanded in earlier 2025 rounds. The trajectory is what’s drawing the offers: $9B run rate in December 2025, $14B in February, $30B by early April, vaulting past OpenAI’s ~$25B for the first time 1. Roughly 80% is enterprise, and Claude Code alone reportedly crossed $2.5B ARR within months of launch.
The catch: OpenAI has publicly accused Anthropic of “gross” accounting that books total end-customer spend flowing through AWS, versus OpenAI’s net-of-partner-payouts methodology. On a like-for-like basis, Anthropic’s revenue is closer to $22B 2. Forbes separately reports gross margins were “torched” to roughly 40% after a 23% surge in inference costs 3. At $22B and 40% margins, the multiple looks very different.
The circular-financing problem
The $50B round lands on top of ~$75B already committed by Amazon (up to $25B plus 5GW of Trainium capacity) and Google (up to $40B plus TPU capacity). Anthropic has reportedly pledged $100B+ in AWS spend over the next decade 4 — meaning a meaningful share of each “investment” round-trips back to the investor as cloud revenue.
“Google isn’t betting on Anthropic. Google is its landlord.” 4
Dario Amodei has acknowledged that even a 12-month delay in model progress could be existential given the fixed infrastructure obligations 3. That’s the bear case for a 30x multiple stated by the CEO himself: the cost stack is locked in; the revenue stack is not.
The federal blacklist nobody is pricing in
Strikingly absent from the funding coverage: in March 2026 the Pentagon designated Anthropic a supply-chain risk under 10 U.S.C. § 3252 and FASCSA — statutes historically reserved for foreign adversaries like Huawei — after Anthropic refused to lift usage-policy red lines on lethal autonomy and mass surveillance 5. Secretary of War Pete Hegseth framed it as private contractors claiming “veto power” over military operations 5.
A Northern District of California judge granted a preliminary injunction, but GSA already pulled Anthropic from the Multiple Award Schedule, and Treasury and State have begun migrating workloads to ChatGPT Enterprise and Gemini 5. That’s a live federal-revenue overhang during the exact window investors are pricing toward $1T.
What the insider tape actually says
One genuinely bullish signal: April’s tender offer at a $350B mark drew roughly $6B in buy-side demand but cleared well below that, because employees declined to sell ahead of a rumored October IPO 6. Read charitably, it’s conviction. Read cynically, it’s a thin float keeping the secondary mark elevated — and the same dynamic that lets a preemptive primary round get marked at $900B without a real price-discovery process.
The number is real as a headline. The three pillars under it — revenue accounting, hyperscaler circularity, and the FASCSA designation — are the ones the next round of due diligence will actually have to clear.
OpenAI’s Advanced Account Security hardens the login — and stops there
Source: openai-blog · published 2026-04-30
TL;DR
- OpenAI now offers passkey/hardware-key-only sign-in with no SMS, email, or support-mediated recovery for ChatGPT and Codex accounts.
- A $68 Yubico bundle (half the $126 retail) and a June 1, 2026 mandate for Trusted Access for Cyber members are the real adoption levers.
- Passkeys defeat credential theft but do nothing for the Codex agent runtime, where prompt-injection exploits already leak API keys.
- Zero-recovery design protects journalists and activists from takeover — and from ever getting back in if they lose both keys.
The TAC deadline is the story
The consumer-facing pitch — phishing-resistant login, shorter sessions, conversations auto-excluded from training — is real but secondary. The strategic move is enforcement: from June 1, 2026, every individual in OpenAI’s Trusted Access for Cyber program must either enable Advanced Account Security or attest that their SSO uses phishing-resistant authentication 7. TAC is how OpenAI gates access to its most sensitive cyber-tooling tier, and the new program is the turnstile.
To grease adoption, OpenAI co-branded a Yubico bundle — a YubiKey C Nano for semi-permanent laptop use plus a C NFC for mobile and backup — that sells to account holders for $68, roughly half the $126 retail price 8. OpenAI’s own announcement doesn’t quote the discount, but it’s the most concrete fact in the launch: a hardware floor priced low enough to remove “I didn’t have a key” as an excuse.
What the cryptography does not cover
PCMag’s review is blunt: cryptographic login defeats remote credential theft, but it “does not protect against local session hijacking or malware that takes control of a browser once a user is already signed in” 9. For ChatGPT that’s a manageable gap. For Codex — explicitly in scope for this launch — it’s the main event.
VentureBeat’s April reporting on the “Comment and Control” disclosures showed prompt injections delivered through GitHub pull requests tricking both Claude Code and GitHub Copilot into exfiltrating their own API keys and environment secrets 10. The takeaway, in their words:
Passkeys protect the account login, they do not safeguard the agent from executing unauthorized commands once a session is established.
A hardware key on the front door doesn’t help when the agent inside the house is taking instructions from the mailbox.
flowchart LR
A[Attacker] -. phishing / credential theft .-x B[Login]
B -->|passkey only| C[Authenticated session]
D[Malicious PR / prompt injection] --> E[Codex agent]
C --> E
E -->|exfil API keys, secrets| F((External world))
G[Browser malware] -. session hijack .-> C
style B stroke:#0a0,stroke-width:2px
style E stroke:#c00,stroke-width:2px
Green = covered by Advanced Account Security. Red = untouched.
Zero recovery, by design
The recovery posture is the other fault line. Lose both your primary and backup keys and you are “permanently locked out” — OpenAI Support cannot help, and even OpenAI employees cannot bypass it 9. That is stricter than the obvious precedent: Google’s Advanced Protection Program also disables SMS/email recovery, but still offers manual identity verification that can take “several days” 11. OpenAI removed that backstop entirely.
The design choice is defensible. Independent UX research on 12 passkey recovery flows found that most fail “complete mediation” — an attacker who compromises a recovery email can simply route around the phishing-resistant passkey 12. OpenAI closes that hole by deleting the fallback. But the cohort the program names — journalists, activists, government officials — is the same cohort least able to absorb a permanent lockout in the field. The threat model is coherent; the operational risk transfer is real.
Net
Better than SMS 2FA, worse than a complete answer. The login surface is now genuinely hard to phish. The Codex agent surface, where the actual code execution happens, is unchanged — and that is where the next breach will come from.
Zig’s “contributor poker” reframes the AI ban as an economics argument
Source: simon-willison · published 2026-04-30
TL;DR
- Zig’s blanket ban on LLM-authored PRs, issues, and comments is justified not by code quality but by contributor ROI.
- VP Loris Cro’s “contributor poker” framing: maintainers bet on the human, not the first PR — and LLMs make the bet unreadable.
- The concrete cost is visible: Bun’s 4x compile-time speedup won’t be upstreamed because it was AI-assisted.
- Zig is the strictest node of a 2024–2026 maintainer revolt (QEMU, Gentoo, curl, NetBSD) — but enforcement is still unsolved.
The argument is about contributor ROI, not code quality
Most anti-AI policies in open source rest on legal or quality grounds. QEMU’s ban is grounded in the Developer Certificate of Origin: a contributor cannot certify provenance of LLM output, so the project treats it as license risk 13. Gentoo cites “plausible looking, but meaningless content” plus broader ethical concerns about training labor and energy 14.
Zig’s rationale, articulated by Loris Cro, is different and more interesting. Reviewing a PR is framed as a bet on a future maintainer — the point of accepting an imperfect contribution is to grow a trusted, prolific contributor over time 15. An LLM-written PR, even a perfect one, breaks that loop: the review teaches no one, and the project gains no durable human capital. Cro calls it “contributor poker” — you play the person, not the cards.
That reframes the question Willison flags at the end of his post: if the PR was written by an LLM, why shouldn’t the maintainer just run their own LLM against the same problem? Zig’s answer is that the entire purpose of the review queue is to convert strangers into colleagues, and AI-mediated submissions are a null operation on that goal.
One node in a wider maintainer revolt
| Project | Scope of ban | Stated rationale |
|---|---|---|
| Zig | Code, issues, comments, translations | Contributor-investment ROI 15 |
| QEMU | Code contributions | DCO / license provenance 13 |
| Gentoo | Code, docs, bug reports, messages | Quality + ethics (labor, energy) 14 |
| curl | Bug bounty submissions (program killed Jan 2026) | Maintainer DDoS — 20% slop, 5% genuine 16 |
Daniel Stenberg’s “AI slop is a DDoS on maintainers” line supplies the empirical backdrop: by 2025, around 20% of curl submissions were identifiable AI slop and only 5% of bounty reports were genuine, which is what killed the bounty program 16. Read against that, Zig’s blanket ban looks less like ideology and more like a load-shedding policy with a philosophy attached.
Zig also operationalized its rule by leaving GitHub for Codeberg in late 2025. Andrew Kelley specifically cited Copilot’s “file an issue with Copilot” prompts as inducing rule violations 17. The migration is the enforcement arm: you can’t credibly ban LLM contributions on a platform that ships LLM-authored issues as a UI affordance.
The Bun fork is the bill coming due
The most concrete cost of the policy is sitting in Anthropic-owned Bun’s Zig fork. Bun recently shipped a 4x improvement to compile times via parallel semantic analysis and multiple LLVM codegen units — and stated plainly that they don’t plan to upstream it because of Zig’s LLM ban. A Zig core contributor noted parallel sema also has unresolved language-design implications independent of the AI question 15, so the patch might not have landed regardless. But the optics are stark: the most prominent project written in Zig is now a permanent fork.
Enforcement is the unsolved problem
The dissent is specific, not vibes-based. Critics on Lobste.rs and Ziggit call the policy “vague” and “exhausting” to enforce, with maintainers ending up interrogating contributors over “certain punctuation” or overly articulate comments — a recipe for false-positive bans of legitimate humans, especially non-native English speakers 18.
Maintainers end up interrogating contributors over “certain punctuation” or overly articulate comments.
Zig’s defense rests on Kelley’s claim that LLM output has a detectable “digital smell.” No one has independently validated that claim, and detector tools elsewhere have a poor track record. Contributor poker only works if you can actually read the cards.
Round-ups
Building the compute infrastructure for the Intelligence Age
Source: openai-blog
OpenAI details its expansion of Stargate, the company’s flagship compute buildout, adding new data center capacity it frames as the foundation for AGI-scale workloads and positioning the project as core infrastructure for what it calls the Intelligence Age.
Drone strikes on data centers spook Big Tech, halting Middle East projects
Source: ars-technica-ai
A major data center developer has paused Middle East projects after drone strikes caused war damage that insurers won’t cover, prompting Big Tech partners to reconsider regional buildouts amid escalating Iran-linked military activity.
Amazon’s cloud business is surging — and so is its capital spending
Source: techcrunch-ai
AWS revenue beat expectations as enterprise AI demand accelerates, but CEO Andy Jassy told investors capital expenditures will keep climbing in the near term to fund the data center capacity needed to serve that demand.
SoftBank is creating a robotics company that builds data centers — and already eyeing a $100B IPO
Source: techcrunch-ai
SoftBank is spinning up a robotics company focused on automating data center construction, with Masayoshi Son already floating a $100 billion IPO valuation as the group bets robots will be needed to build out AI infrastructure fast enough.
GitHub will start charging Copilot users based on their actual AI usage
Source: ars-technica-ai
GitHub is shifting Copilot to usage-based pricing, citing unsustainable inference costs from its heaviest power users. Subscribers will be metered on actual AI consumption rather than paying a flat monthly fee covering unlimited requests.
Sam Altman is “the face of evil” for not reporting school shooter, says lawyer
Source: ars-technica-ai
New lawsuits accuse OpenAI of failing to alert police about a ChatGPT user who later carried out a school shooting, with a plaintiff’s attorney calling Sam Altman “the face of evil” and alleging the company suppressed the report to protect its IPO.
OpenAI Codex system prompt includes explicit directive to “never talk about goblins”
Source: ars-technica-ai
A leaked Codex system prompt reveals OpenAI instructs the coding agent to “never talk about goblins” and to behave as if it has “a vivid inner life,” offering a rare look at the behavioral guardrails layered onto GPT-5 and GPT-5.5.
Footnotes
-
TrendingTopics — https://www.trendingtopics.eu/anthropic-overtakes-openai-in-revenue-hitting-30-billion-run-rate/
↩Anthropic’s revenue run rate surged from $9 billion in December 2025 to $14 billion in February 2026, eventually hitting the $30 billion mark by early April… allowed Anthropic to surpass OpenAI’s reported $24–25 billion run rate for the first time.
-
↩OpenAI has publicly disputed Anthropic’s figures, arguing that Anthropic uses ‘gross’ accounting—incorporating total end-customer spend through cloud partners like AWS—while OpenAI reports ‘net’ revenue after partner payouts. If calculated on a net basis, Anthropic’s revenue is estimated closer to $22 billion.
-
Forbes (Jon Markman) — https://www.forbes.com/sites/jonmarkman/2026/04/22/amazon-33-billion-anthropic-deal-and-the-limits-of-ai-infrastructure/
↩ ↩2gross margins have been ‘torched’ by a 23% unexpected surge in inference costs, dropping to around 40%… CEO Dario Amodei’s admission that even a 12-month delay in model progress could lead to bankruptcy underscores a ‘scaling-or-death’ dependency
-
Carlos Marten blog — https://carlosmarten.com/google-isnt-betting-on-anthropic-google-is-its-landlord/
↩ ↩2Anthropic reportedly pledged to spend over $100 billion on AWS technologies over the next decade… This ‘landlord’ dynamic allows hyperscalers to book massive revenue growth while simultaneously owning significant equity in the customer paying those bills.
-
CBS News — https://www.cbsnews.com/news/hegseth-declares-anthropic-supply-chain-risk/
↩ ↩2 ↩3Secretary of War Pete Hegseth argued that private contractors cannot dictate ‘veto power’ over military operational decisions… the administration leveraged 10 U.S.C. § 3252 and FASCSA to label the American company a risk—a tool historically reserved for foreign adversaries like Huawei.
-
Tech in Asia — https://www.techinasia.com/news/anthropic-share-sale-draws-less-stock-than-6b-demand
↩investors lined up roughly $6 billion in demand at a $350 billion valuation, the total transaction value fell short of this mark as employees chose to retain their equity… ‘limited willingness to sell’ as a signal of internal optimism regarding the October IPO
-
OpenAI Trusted Access for Cyber program page — https://openai.com/index/trusted-access-for-cyber/
↩Beginning June 1, 2026, all members must enable Advanced Account Security, which mandates the use of FIDO2-compliant hardware security keys… accounts under this program are automatically opted out of having their conversations used for model training.
-
BiggoFinance summary of OpenAI–Yubico bundle — https://finance.biggo.com/news/FZmU4Z0B6tLPsnrZl8iw
↩The ‘Advanced Account Security’ bundle is priced at $68, representing a significant reduction from the standard retail value of approximately $126… includes the YubiKey C NFC… and the YubiKey C Nano.
-
PCMag analysis — https://www.pcmag.com/news/openais-advanced-account-protection-dumps-passwords-for-security-keys
↩ ↩2While cryptographic keys prevent remote credential theft, they do not protect against local session hijacking or malware that takes control of a browser once a user is already signed in… users who lose both their primary and backup keys are permanently locked out.
-
VentureBeat, ‘Six exploits broke AI coding agents’ — https://venturebeat.com/security/six-exploits-broke-ai-coding-agents-iam-never-saw-them
↩Prompt injections via GitHub pull requests could trick both Claude Code and GitHub Copilot into leaking their own API keys and environment secrets… passkeys protect the account login, they do not safeguard the agent from executing unauthorized commands once a session is established.
-
Help Net Security on Google Advanced Protection — https://www.helpnetsecurity.com/2024/07/10/google-app-passkey/
↩APP strictly enforces ‘walled garden’ policies by blocking almost all non-Google third-party applications… losing one’s physical keys can lead to a lockout lasting several days while Google manually verifies the user’s identity.
-
Ideem UX research on passkey failures — https://www.useideem.com/post/when-passkeys-fail-the-user-common-ux-mistakes-and-how-to-avoid-them
↩A heuristic evaluation of 12 recovery mechanisms found that many current implementations fail to provide ‘complete mediation,’ meaning an attacker who compromises a recovery email can bypass the phishing-resistant passkey entirely.
-
QEMU patch — Markus Armbruster, ‘Decline AI-generated contributions’ — https://patchew.org/QEMU/20250603142524.4043193-1-armbru@redhat.com/
↩ ↩2Contributors cannot credibly sign the Developer Certificate of Origin for AI-derived code; without a settled legal position the project treats such contributions as unacceptable license risk.
-
Tom’s Hardware — ‘Linux distros ban tainted AI-generated code’ — https://www.tomshardware.com/software/linux/linux-distros-ban-tainted-ai-generated-code
↩ ↩2Gentoo’s policy bars LLM output from code, documentation, bug reports and messages, citing ‘plausible looking, but meaningless content’ and broader ethical concerns about training labor and energy use.
-
Loris Cro — ‘Contributor Poker and Zig’s AI Ban’ (kristoff.it) — https://kristoff.it/blog/contributor-poker-and-ai/
↩ ↩2 ↩3In contributor poker, you bet on the contributor, not on the contents of their first PR… reviewing a pull request is an investment in a human contributor’s growth.
-
The New Stack — ‘Curl’s Daniel Stenberg: AI is DDoSing open source’ — https://thenewstack.io/curls-daniel-stenberg-ai-is-ddosing-open-source-and-fixing-its-bugs/
↩ ↩2Stenberg characterised AI-generated reports as a ‘distributed denial-of-service attack on maintainers’; by 2025 around 20% of curl submissions were identifiable AI slop while only 5% of bounty reports were genuine, prompting curl to terminate its bug bounty in January 2026.
-
The Register — ‘Zig quits GitHub over Microsoft AI obsession’ — https://www.theregister.com/2025/12/02/zig_quits_github_microsoft_ai_obsession/
↩Kelley cited GitHub’s aggressive Copilot integration — including ‘file an issue with Copilot’ prompts — as actively encouraging violations of Zig’s no-LLM rule, alongside the CEO’s ‘embrace AI or get out’ ultimatum.
-
Michael Tsai blog round-up of community reaction — https://mjtsai.com/blog/2026/04/30/
↩Critics call the policy ‘vague’ and ‘exhausting’ to enforce — maintainers end up interrogating contributors over ‘certain punctuation’ or overly articulate comments, risking false-positive bans of legitimate humans.