In the aftermath of a mass shooting that left eight people dead in Tumbler Ridge, Canada, new details are emerging about the alleged gunman’s digital trail — and they are raising unsettling questions about the role of artificial intelligence platforms in detecting and responding to potential threats.
According to reporting from The Wall Street Journal, 18-year-old Jesse Van Rootselaar used OpenAI’s ChatGPT in ways that triggered internal monitoring systems designed to detect misuse. Her conversations reportedly included descriptions of gun violence significant enough to raise alarms within the company. As a result, her account was banned in June 2025.
Inside OpenAI, staff debated whether the behavior warranted alerting Canadian law enforcement. Ultimately, they decided it did not meet the company’s threshold for reporting at that time. Only after the shooting occurred did OpenAI proactively contact the Royal Canadian Mounted Police, offering information about Van Rootselaar’s use of the platform.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” an OpenAI spokesperson said, confirming cooperation with authorities.
The ChatGPT activity was only one component of a broader digital footprint. Van Rootselaar had also created a game on Roblox that simulated a mass shooting at a mall — a chilling parallel to real-world violence. Additionally, she reportedly posted about firearms on Reddit. These online behaviors, taken together, form a pattern that investigators are now scrutinizing closely.
Local authorities were not unfamiliar with her instability. Police had previously been called to her family home after she allegedly started a fire while under the influence of drugs. That prior contact adds another layer to a portrait of escalating warning signs that, in hindsight, appears deeply troubling.
The case also reignites debate about large language models and their societal impact. AI platforms like ChatGPT are built with safeguards intended to prevent the generation of harmful or violent content, and companies employ automated systems and human review processes to flag concerning interactions. Yet the Tumbler Ridge case highlights the difficulty of determining when disturbing online behavior crosses the line from protected speech to imminent threat.
Technology firms face a delicate balancing act. Over-reporting user activity risks privacy violations and accusations of surveillance overreach; under-reporting can lead to devastating consequences if credible threats are missed. The internal debate reportedly held at OpenAI underscores how complex those judgment calls can be.