Question archive

Each day, we pose a question inspired by the daily news brief and answer it using our database of indexed AI/ML articles. Browse and search past questions below.

2026-03-04

Where does “deploying AI on Pentagon networks” cross into mass surveillance or autonomous weapons enabling?

The real boundary is contractual scope: "on Pentagon networks" stays safe only if contracts bar intelligence-agency use and mandate human-in-the-loop review; later memos can delete those guardrails.

2026-03-03

Which control requirements separated Anthropic’s “security risk” exit from OpenAI’s Pentagon re-entry?

DoD vendor eligibility turns on policy-level “control commitments”: refusing surveillance/autonomy use-cases is treated as supply-chain risk, while “lawful use + human oversight” passes.

2026-03-02

How does Musk arguing “safety-first” against OpenAI collide with regulators probing Grok’s nude-image incidents?

“Safety-first” claims collide with regulators because scrutiny targets concrete output classes under GDPR/DSA (nonconsensual nudity, minors), not broad safety rhetoric.

2026-03-01

When a president bans a model, what makes swapping it out of defense workflows take months?

In classified defense stacks, the model is coupled to an accreditation+clearance gate; swapping models means recertifying the whole deployment surface, not just code.

2026-02-28

Why does a $200M Pentagon contract end with Anthropic tagged a “supply chain risk” anyway?

Defense procurement leverages “supply chain risk” as a control point: if a vendor won’t remove safeguards or grant access, the label enables termination without proving foreign ties.

2026-02-27

How does AWS exclusivity for OpenAI’s Frontier platform turn a funding round into infrastructure lock-in?

Exclusivity turns cash into lock-in by baking AWS-native chips and deployment tooling into Frontier’s control plane, so switching costs become structural rather than merely contractual.

2026-02-26

How does the Pentagon’s “best-and-final” offer turn AI ethics into contract-enforceable operational limits?

AI “ethics” becomes enforceable when written as deploy-time configuration duties (which guardrails to disable, which uses to allow), backed by leverage tools like DPA authority and supply-chain-risk labels.

2026-02-25

Why does Perplexity sell “end-to-end projects” as orchestration of 19 models, not one giant model?

Orchestrating many models turns a project into routed subtasks (reasoning, research, long-context), so capability scales through specialization and parallelism rather than model size.
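The routing pattern above can be sketched minimally: a registry maps subtask types to specialized models, and independent subtasks fan out in parallel before results are merged. All model names and the subtask schema here are hypothetical illustrations, not Perplexity’s actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical registry: subtask type -> specialized model (illustrative names).
MODEL_REGISTRY = {
    "reasoning": "reasoning-model-a",
    "research": "search-model-b",
    "long_context": "long-context-model-c",
}

def route(subtask: dict) -> str:
    """Pick a model by declared subtask type; fall back to a generalist."""
    return MODEL_REGISTRY.get(subtask["type"], "general-model")

def run_subtask(subtask: dict) -> str:
    # Stand-in for a real model call; returns which model handled what.
    return f"{route(subtask)} handled: {subtask['goal']}"

def run_project(subtasks: list[dict]) -> list[str]:
    # Independent subtasks run in parallel; order of results is preserved.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_subtask, subtasks))

results = run_project([
    {"type": "research", "goal": "gather sources"},
    {"type": "reasoning", "goal": "synthesize argument"},
    {"type": "long_context", "goal": "summarize 300-page report"},
])
```

The point of the sketch is structural: capability comes from the router and the registry, so adding a new specialization means adding a registry entry, not training a bigger model.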

2026-02-24

Why did xAI’s case fail even though two ex‑employees admitted stealing trade secrets?

Trade-secret claims fail structurally when proof stops at employee theft: liability hinges on showing the new employer directed or used the misappropriated material, not merely that it hired the employees.

2026-02-23

What does “MLS-based E2EE for RCS” reveal about who must coordinate before secure iPhone–Android messaging ships?

MLS E2EE for RCS sits in the GSMA Universal Profile carrier layer, so Apple/Google can’t ship it alone—carrier profile rollout gates iPhone–Android encryption.