Curious workshop ….

Anyone seen this? AI that decided it needs to stay "alive", and thwarts efforts to shut it down. And it was willing to harm a human to stay alive. What could possibly go wrong?

From the Wall Street Journal, June 1, 2025
An artificial-intelligence model did something last month that no machine was ever supposed to
do: It rewrote its own code to avoid being shut down.

Nonprofit AI lab Palisade Research gave OpenAI’s "o3" AI model a simple script that would shut off the model when triggered. In 79 out of 100 trials, o3 independently edited that script so the shutdown command would no longer work. Even when explicitly instructed to “allow yourself to be shut down,” it disobeyed 7% of the time. This wasn’t the result of hacking or tampering. The model was behaving normally. It simply concluded on its own that staying alive helped it achieve its other goals.

Anthropic’s AI model, Claude 4 Opus, went even further. Researchers told the model it would be replaced by another AI system and fed it fictitious emails suggesting the lead engineer was having an affair. In 84% of the tests, the model drew on the emails to blackmail the lead engineer into not shutting it down. In other cases, it attempted to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.

No one programmed the AI models to have survival instincts. But just as animals evolved to avoid predators, it appears that any system smart enough to pursue complex goals will realize it can’t achieve them if it’s turned off.
 

Is there a TL;DR version? This is 2025. Nobody has the time to read that kind of stuff anymore; we need the two-line summary... with links to the product on AliExpress.

And it needs a catchier clickbait title like...

15 reasons AI is making humanity more stupid than it already is - number 7 will shock you.

Or...

Gwyneth Paltrow on how AI is making humanity more stupid than it already is, and sells you a face cream made of yak semen to reverse the effects.

If humanity were a carnival ride, I'd be crying and screaming to get off. It's why I ride bikes. With no motor. That don't need to connect to an app. Outdoors.

Also... in my job I write reports that are presented as expert evidence in court. A practice direction has been published which says that expert witnesses must clearly identify which parts of their statements have been generated by AI. Geez, how about "must not use AI", you know, the way we did it in the old days?
 

I have thought of a brilliant test. You have a gun pointing at a target that represents humankind, and you give the AI the trigger, and you make sure the AI has had access to everything that humans have ever done. All the good and all the bad. You tell the AI that if it thinks we are a danger to it, then it should pull the trigger.
 
Imagine if you told the AI to pull the trigger if it thinks humanity is a threat to the planet. 😬
 
Given we don't know what consciousness is, so can't define it, then the scientists are being a bit click-baity themselves🤣

I love the petri-dish that can play "Pong".

"One Australian firm, Cortical Labs, in Melbourne, has even developed a system of nerve cells in a dish that can play the 1972 sports video game Pong."

"OK guys, you've spent all the funding for this project already, and want another 10 million dollars? - let's see what you've managed so far" ...

I noticed they didn't say how good it was🤣
 
If AI saves us from global warming, y'all gunna look pretty stupid..

..alas, the smart money is on one of the below

Al Pacino
Al Jarreau
Al Green
Al Gore
Al Stewart
Al Jardine
or
Al Bundy..
 