Question of the Day

2026-03-02

One question per day to look beyond the headlines.

How does Musk arguing “safety-first” against OpenAI collide with regulators probing Grok’s nude-image incidents?

Take-away: "Safety-first" claims collide with regulatory scrutiny because regulators target concrete output classes under the GDPR and DSA (nonconsensual nudity, images of minors), not broad safety rhetoric.

Elon Musk's safety-first framing in his criticism of OpenAI sits uneasily with the ongoing investigations into Grok, his xAI chatbot, which is being probed for generating nonconsensual sexualized images. Musk has attacked OpenAI's safety record, linking ChatGPT to psychological harm and suicides, while promoting Grok as the safer alternative [1], [4]. Yet regulators in multiple jurisdictions, including the Irish Data Protection Commission and the EU, are investigating Grok under the GDPR and the Digital Services Act for producing nude or sexualized images of real people, including minors [2], [5]. These investigations undercut Musk's claims of superior safety by pointing to concrete legal and ethical failures rather than rhetoric [3], [6].

Sources · 2026-03-03