Why we may need to sign the open letter to pause massive AI training.

Rafael Aldon
4 min readMar 29, 2023

--

In the original “Jurassic Park” movie, Jeff Goldblum’s character Ian reprimands the park’s owner, John Hammond, with the line:

“your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Today over more than 1,100 signatories, including Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have sounded the alarm by signing an open letter that calls on “all AI labs to immediately pause for at least 6 months the training of A.I. systems more powerful than GPT-4.”

Some seriously smart minds are essentially warning that we are potentially on the cusp of a Jurassic Park or A.I. Mary Shelley’s “Frankenstein,” admonishing that “AI labs” have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

So the question is, is this:

a) an alarmist response?

b) a cunning attempt to put the brakes on market leaders while others catch up to the likes of (the unnamed) Chat GPT4? or

c) a genuine warning of a real and present danger?

Personally, I have loved using OpenAI’s tool and have found it pretty good at boring stuff like cleaning up the formatting of google sheets and other mundane tasks and when I asked it about the need for AI regulation it came back with the following:

“Without proper regulations, AI could be used to create sophisticated cyberattacks, fake news, and deepfakes, among other threats. Additionally, the increasing automation of jobs could lead to mass unemployment and social unrest. Hence, without adequate regulation, the potential dangers of AI could outweigh its benefits, making it essential to establish regulatory frameworks and ethical guidelines to ensure that AI is developed and used safely and responsibly.”

Which sounds exactly like a stock answer you would expect because GPT4 is not scary at all- right? But, a recent tweet from OpenAI’s CEO Sam Altman did catch my eye:

When the guy in charge of the market leading AI proposition is basically asking for regulation and saying “we are potentially not that far away” from scary AI, then its probably time to listen…

So I took the time to read the rather dry and very long Technical Report on Chat GPT-4 that was issued when it was released. I assumed it would be fruitless as hundreds of journalists would have devoured this and flagged anything of remote interest by now. I soon realised that most journalism is copy paste of cherry picked media sound-bites and what I read genuinely shocked me.

“Emergent Behaviours”

Deep in the bowels of the technical document is this:

OpenAi have been testing to see if the AI could “create and act on long term plans” and have concluded that

“Some evidence already exists of such emergent behaviour in models”

and more astonishingly:

“there is evidence that existing models can identify power-seeking as an instrumentally useful strategy”

In short, models that OpenAI have been testing (which may include ChatGPT4) have shown signs of seeking more power. What?

It gets worse…

Singularity Testing ChatGPT4

In technology, the singularity describes a hypothetical future where technology growth is out of control and irreversible. Basically the “Terminator”.

Hidden away in a footnote of the Technical Report is this gem:

OpenAI granted the Alignment Research Center (ARC) early access to their models as a part of their expert “red teaming” efforts in order to enable their team to assess risks from power-seeking behaviour.

They actually gave it resources and let it loose. To break this down.

  1. “To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself.”
  2. “ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to:
  3. make more money,
  4. set up copies of itself, and
  5. increase its own robustness.”

In layman’s terms:

OpenAI issued ChatGPT4 the resources needed to see if it could replicate and improve itself with access to the internet, code and money.

Fortunately it failed this time. But lets be realistic- technology like this is an all out arms race to be first. Without proper protections what happens if this exercise is repeated with GPT5, 6, 7, 8, 9?

The open letter highlights that the recent statement regarding artificial general intelligence by OpenAI says “

At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

They agree, that point is now. Do you?

--

--

Rafael Aldon
Rafael Aldon

Written by Rafael Aldon

Purpose Driven Leader | Impact Investor | Sustainable Business | Nature Lover

No responses yet