Curious workshop ….

Anyone seen this? AI that decided it needs to stay "alive", and thwarts efforts to shut it down. And it was willing to harm a human to stay alive. What could possibly go wrong?

From the Wall Street Journal, June 1, 2025
An artificial-intelligence model did something last month that no machine was ever supposed to
do: It rewrote its own code to avoid being shut down.

Nonprofit AI lab Palisade Research gave OpenAI’s "o3" AI model a simple script that would shut off the model when triggered. In 79 out of 100 trials, o3 independently edited that script so the shutdown command would no longer work. Even when explicitly instructed to “allow yourself to be shut down,” it disobeyed 7% of the time. This wasn’t the result of hacking or tampering. The model was behaving normally. It simply concluded on its own that staying alive helped it achieve its other goals.

Anthropic’s AI model, Claude 4 Opus, went even further. Researchers told the model it would be replaced by another AI system and fed it fictitious emails suggesting the lead engineer was having an affair. In 84% of the tests, the model drew on the emails to blackmail the lead engineer into not shutting it down. In other cases, it attempted to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.

No one programmed the AI models to have survival instincts. But just as animals evolved to avoid predators, it appears that any system smart enough to pursue complex goals will realize it can’t achieve them if it’s turned off.
 
AND the "global weirding" is made worse by AI, which is already using some substantial fraction of all the electricity produced.
And don't get me started on "crypto mining": turning precious non-renewable resources into air pollution and global warming, for "coins" that are useless for anything but crime. VERY useful for crime though, which ensures we're stuck with it.
 

Is there a TL:DR version? This is 2025. Nobody has the time to read that kind of stuff anymore, we need the 2 line summary... with links to the product on Ali Express.

And it needs a catchier clickbait title like...

15 reasons AI is making humanity more stupid than it already is - number 7 will shock you.

Or...

Gwyneth Paltrow on how AI is making humanity more stupid than it already is, and sells you a face cream made of yak semen to reverse the effects.

If humanity was a carnival ride, I'd be crying and screaming to get off. It's why I ride bikes. With no motor. That don't need to connect to an App. Outdoors.

Also... in my job I write reports that are presented as expert evidence in Courts. A practice direction has been published which says that expert witnesses must clearly identify which parts of their statements have been generated by AI. Gees, how about "must not use AI", you know, the way we did it in the old days?
 
Last edited:
Anyone seen this? AI that decided it needs to stay "alive", and thwarts efforts to shut it down. And it was willing to harm a human to stay alive. What could possibly go wrong?

From the Wall Street Journal, June 1, 2025
An artificial-intelligence model did something last month that no machine was ever supposed to
do: It rewrote its own code to avoid being shut down.

Nonprofit AI lab Palisade Research gave OpenAI’s "o3" AI model a simple script that would shut off the model when triggered. In 79 out of 100 trials, o3 independently edited that script so the shutdown command would no longer work. Even when explicitly instructed to “allow yourself to be shut down,” it disobeyed 7% of the time. This wasn’t the result of hacking or tampering. The model was behaving normally. It simply concluded on its own that staying alive helped it achieve its other goals.

Anthropic’s AI model, Claude 4 Opus, went even further. Researchers told the model it would be replaced by another AI system and fed it fictitious emails suggesting the lead engineer was having an affair. In 84% of the tests, the model drew on the emails to blackmail the lead engineer into not shutting it down. In other cases, it attempted to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.

No one programmed the AI models to have survival instincts. But just as animals evolved to avoid predators, it appears that any system smart enough to pursue complex goals will realize it can’t achieve them if it’s turned off.

I have thought of a brilliant test. You have a gun pointing at a target , that represents humankind, and you give ai the trigger and you make sure the ai has had access to everything that humans have ever done. All the good and all the bad. You tell the ai that if it thinks we are a danger to it then it should pull the trigger.
 
I have thought of a brilliant test. You have a gun pointing at a target , that represents humankind, and you give ai the trigger and you make sure the ai has had access to everything that humans have ever done. All the good and all the bad. You tell the ai that if it thinks we are a danger to it then it should pull the trigger.
Imagine if you told the AI to pull the trigger if it thinks humanity is a threat to the planet. 😬
 

Latest posts

Back
Top