Question of the Day
One question per day to look beyond the headlines.
When violent ChatGPT chats happen months before a shooting, what exactly triggers a duty to alert police?
Take-away: Duty-to-warn hinges on a platform’s internal threat-classification threshold; lowering the reporting bar from “imminent” to “credible” turns earlier, ambiguous flags into reportable events.
Whether a platform must alert police over violent ChatGPT interactions depends on how the threat is assessed. OpenAI has updated its safety protocols to notify law enforcement of any credible threat detected in ChatGPT conversations, even when details about targets, means, or timing are incomplete [2], [4]. The Tumbler Ridge mass shooter’s account had been flagged months before the attack, but OpenAI did not notify police because it judged the threat non-imminent at the time [1], [3]. Under the new rules, OpenAI says it would notify authorities in a similar situation [4].
- OpenAI vows safety policy changes after Tumbler Ridge shooting (bbc.com)
- OpenAI will notify authorities of credible threats after Canada mass shooter’s second account was discovered (engadget.com)
- OpenAI would’ve flagged Tumbler Ridge suspect to police under new rules (bloomberg.com)
- OpenAI would have alerted police to Canadian shooter if account was discovered today (theverge.com)