Scorched Earth

After a long, rough March spent handling a pile of personal and professional matters (and just needing a mental break from all this), I’m back to find the special kind of honesty you only get from billion-dollar companies when they stop pretending to be afraid and drop any pretense of caring about the public.

First, OpenAI has apparently decided that the best way to manage the risks of its increasingly powerful systems is not to reduce them, but to lawyer them out of existence.

According to WIRED, the company is backing an Illinois bill (hilariously named the “Artificial Intelligence Safety Act”) that would absolve AI developers of responsibility for “critical harms”, a clinical way of describing mass death, serious injury, or billion-dollar disasters, so long as they didn’t mean it and filed the proper paperwork. (WIRED)

From the article: “The bill would shield frontier AI developers from liability for ‘critical harms’ caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website.”

And their timing, as always, is impeccable.

Because in the same week, Anthropic quietly demonstrated what these companies are really building. (The Guardian)

Its latest model is being kept from public release out of sheer terror, as it can autonomously discover and exploit thousands of previously unknown software vulnerabilities across every major system in use today. (The Times) It doesn’t just find bugs; it chains them together into working attacks, at a scale and speed that outpaces your best security team. (Tom’s Hardware)

And the leading response from the industry is to ensure that, when something inevitably goes horribly wrong, no one can be blamed.

The finance industry perfected this approach in the lead-up to the 2008 collapse (and every subsequent one): making products and terminology so convoluted and confusing that not even they, let alone the regulators, could explain how dangerous they were.

And the real tragedy of the Great Recession is that nothing they were doing was illegal. For years they had quietly and consistently changed laws, influenced public opinion, and built a framework for zero accountability.

And that same strategy is being used right in front of us for AI.

Because if you can’t dazzle them with brilliance, baffle them with BS, all while making sure that whatever burns behind you can’t be traced back to the people holding the gas can and the matches.

Scorched Earth profitability with minimal responsibility.

