
Question of the Day · 2026-04-29

One question per day to look beyond the headlines.

When violent ChatGPT chats happen months before a shooting, what exactly triggers a duty to alert police?

Take-away: Duty-to-warn hinges on a platform’s internal threat-classification threshold. Lowering the bar from “imminent” to “credible” turns earlier, ambiguous flags into reportable events.

The duty to alert police over violent ChatGPT conversations turns on whether the threat is assessed as credible or imminent. OpenAI has updated its safety protocols to notify law enforcement of any credible threat detected in ChatGPT conversations, even when complete details about targets, means, or timing are unavailable [2], [4]. The account of the Tumbler Ridge mass shooter was flagged months before the attack, but OpenAI did not notify police because it judged that no imminent threat existed at the time [1], [3]. Under the new rules, OpenAI would notify authorities in comparable situations [4].
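The threshold shift described above can be pictured as a simple decision rule. The sketch below is purely illustrative: the severity levels, function name, and ordering are assumptions for exposition, not OpenAI's actual classification system.

```python
# Hypothetical sketch of a reporting threshold, NOT OpenAI's real system.
# Severity levels and their ordering are assumed for illustration only.
SEVERITY = {"ambiguous": 0, "credible": 1, "imminent": 2}

def is_reportable(flag_severity: str, threshold: str) -> bool:
    """Return True if a flagged chat meets or exceeds the reporting threshold."""
    return SEVERITY[flag_severity] >= SEVERITY[threshold]

# Under the old "imminent" threshold, a merely credible flag is not reported:
print(is_reportable("credible", "imminent"))  # False
# Under the new "credible" threshold, the same flag becomes reportable:
print(is_reportable("credible", "credible"))  # True
```

The point of the sketch is that no individual flag changes; only the comparison bar moves, which is why months-old ambiguous or credible flags can become reportable retroactively under a new policy.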

Sources · 2026-04-30