Question of the Day · 2026-03-03

One question per day to look beyond the headlines.

Which control requirements separated Anthropic’s “security risk” exit from OpenAI’s Pentagon re-entry?

Take-away: DoD vendor eligibility turns on policy-level "control commitments". Refusing surveillance and autonomy use-cases is treated as a supply-chain risk, while "lawful use + human oversight" passes.

The control requirements that separated Anthropic's "security risk" exit from OpenAI's Pentagon re-entry centered on the use of AI for surveillance and autonomous weapons. Anthropic was labeled a national-security supply-chain risk partly because it refused to modify its AI technology for mass domestic surveillance and lethal autonomous weapons; the Pentagon responded by cancelling a $200 million contract and applying the supply-chain-risk designation [1], [2], [4], [5]. OpenAI, by contrast, reached an agreement with the Pentagon under an all-lawful-purposes framework, adopting additional safeguards against autonomous-weapons use and committing to human oversight, which satisfied the Pentagon's requirements [2], [3], [6]. These safety guardrails let OpenAI fulfill the government's contract stipulations while navigating the very restrictions that Anthropic resisted [5].

Sources · 2026-03-04