Let's start with what we know happened in Tumbler Ridge. Someone opened a ChatGPT account and started typing things alarming enough that OpenAI's systems flagged them. The account was shut down. OpenAI's moderation worked, technically. A human or an algorithm reviewed the content, made a judgment call, and pulled the plug. Then the company closed the file and moved on. Nobody called the RCMP. Nobody called a crisis line. Nobody called anyone. The warning signs went into a corporate database, presumably checked a compliance box somewhere, and that was the end of it. People died.
The Gap No One Wanted to Talk About
British Columbia Premier David Eby didn't mince words after the Tumbler Ridge shooting. He got Sam Altman on the phone — directly — and made the case for why OpenAI needs to fundamentally rethink how it handles dangerous content [1]. Altman, to his credit, agreed to implement changes. BC's Minister of Public Safety followed up with an announcement: the province is now pursuing legal mandates that would require AI companies to report dangerous content to law enforcement. That's a significant step. And it raises a question that the tech industry has been working very hard not to answer: should AI companies face the same mandatory reporting obligations as, say, a school counselor?
In Canada and the United States, mandatory reporting laws already apply to teachers, doctors, nurses, social workers, and childcare workers: people in positions of professional trust who encounter evidence that a child is being abused or that someone poses an imminent danger to themselves or others. The theory is simple. Private actors who have access to information that could prevent serious harm don't get to sit on it. They have a legal duty to act [1]. AI companies have, until now, operated outside that framework entirely. They have terms of service. They have moderation policies. They have internal review processes. What they don't have is any legal obligation to pick up the phone.

