
Sam Altman apologises over missed alert in Tumbler Ridge case

OpenAI flagged suspect in 2025 but informed police only after shooting.

MUMBAI: A warning flagged, a call not made and the consequences now echo far beyond code. Sam Altman has issued a public apology to residents of Tumbler Ridge after OpenAI failed to alert law enforcement about a user later linked to a deadly mass shooting. The case centres on 18-year-old Jesse Van Rootselaar, identified by police as the suspected perpetrator in an attack that allegedly killed eight people. According to reports, OpenAI had flagged and banned the individual’s ChatGPT account in June 2025 after he described scenarios involving gun violence.

Despite internal discussions on whether to escalate the matter, the company chose not to inform authorities at the time. Canadian law enforcement was contacted only after the shooting had already taken place.

In a letter first published in a local newspaper, Altman acknowledged the lapse, saying he was “deeply sorry” the company did not alert authorities earlier. He added that while an apology cannot undo the harm, it was important to recognise the scale of loss experienced by the community.

The OpenAI chief also confirmed that he had engaged with local leadership, including David Eby and Darryl Krakowka, with agreement on the need for a public apology while allowing space for grieving.

In response to the incident, OpenAI said it is tightening its safety framework. The company plans to introduce more flexible escalation criteria for potentially harmful accounts and establish direct communication channels with Canadian law enforcement to avoid similar delays in future.

The apology, however, has drawn a measured response. Eby said that while it was necessary, it remained insufficient given the devastation faced by families in Tumbler Ridge.

The incident has also reignited policy conversations around AI accountability. Canadian officials are now considering potential regulatory measures for artificial intelligence systems, though no formal decisions have been announced.

As AI tools grow more embedded in everyday life, the episode underscores a difficult question: when a system spots a warning sign, where does responsibility truly begin and where should it end?


Copyright © 2026 Indian Television Dot Com PVT LTD.