OpenAI Employees Raised Alarms Over Unreported ChatGPT Violence Threats
OpenAI employees raised internal alarms after the company failed to alert law enforcement when ChatGPT users described specific plans for real-world violence, including references to mass shooting scenarios, according to a Wall Street Journal investigation. The employees flagged that standard duty-to-warn principles, applied routinely in licensed professional contexts, were not being followed. OpenAI has not publicly responded to the WSJ report's specific allegations.
Why It Matters
This story directly challenges OpenAI's safety narrative at a moment when the company is seeking to expand government and enterprise contracts. Failure to report credible threats could expose the company to regulatory and legal liability.