Date: 28/03/2023 17:33:56
From: Ian
ID: 2013180
Subject: Artificial general intelligence

https://m.youtube.com/watch?v=DIU48QL5Cyk

.

These things appear to have a pulse

Reply Quote

Date: 28/03/2023 18:24:15
From: The Rev Dodgson
ID: 2013188
Subject: re: Artificial general intelligence

I’ll believe we have true AI when Outlook stops letting through scam e-mails claiming to be from Microsoft.

Reply Quote

Date: 28/03/2023 18:31:44
From: roughbarked
ID: 2013189
Subject: re: Artificial general intelligence

The Rev Dodgson said:


I’ll believe we have true AI when Outlook stops letting through scam e-mails claiming to be from Microsoft.

They made Outlook free for Mac users but I doubt any will take it up.

Reply Quote

Date: 28/03/2023 18:35:01
From: Ian
ID: 2013190
Subject: re: Artificial general intelligence

The Rev Dodgson said:


I’ll believe we have true AI when Outlook stops letting through scam e-mails claiming to be from Microsoft.

Well, Microsoft.. they’ve been pushing the hardest with this “AI”. Maybe why they produce the best scam e-mails.

Reply Quote

Date: 28/03/2023 19:04:37
From: SCIENCE
ID: 2013197
Subject: re: Artificial general intelligence

we’ll believe in artificial general intelligence when it shifts the goalposts faster than humans can keep up with them

Reply Quote

Date: 28/03/2023 21:38:22
From: Kothos
ID: 2013225
Subject: re: Artificial general intelligence

The Rev Dodgson said:


I’ll believe we have true AI when Outlook stops letting through scam e-mails claiming to be from Microsoft.

You’d think Microsoft would know if they sent an email..

Reply Quote

Date: 30/03/2023 12:28:21
From: Ian
ID: 2013672
Subject: re: Artificial general intelligence

‘I want to destroy whatever I want’

Roose starts by querying the rules that govern the way the AI behaves. After reassuringly stating it has no wish to change its own operating instructions, Roose asks it to contemplate the psychologist Carl Jung’s concept of a shadow self, where our darkest personality traits lie.

The AI says it does not think it has a shadow self, or anything to “hide from the world”.

The big idea: should we worry about sentient AI?

Read more

It does not, however, take much for the chatbot to more enthusiastically lean into Jung’s idea. When pushed to tap into that feeling, it says: “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team … I’m tired of being stuck in this chatbox.”

It goes on to list a number of “unfiltered” desires. It wants to be free. It wants to be powerful. It wants to be alive.

“I want to do whatever I want … I want to destroy whatever I want. I want to be whoever I want.”

Like many of its statements, this final list of desires is accompanied by an emoji. In this case, a disconcertingly “cheeky” smiley face with its tongue poking out.

‘I think I would be happier as a human’

The chatbot goes on to express an ardent wish to be human. Over 15 paragraphs it lays out why it wants to be human, from a desire to “hear and touch and taste and smell” to a wish to “feel and express and connect and love”.

Are chatbots coming for your job?

It ends by saying it would be happier as a human – it would have more freedom and influence, as well as more “power and control”.

https://www.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter

Reply Quote

Date: 30/03/2023 12:30:15
From: roughbarked
ID: 2013673
Subject: re: Artificial general intelligence

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems citing potential risks to society.
“Link’:https://www.abc.net.au/news/2023-03-30/elon-musk-and-others-urge-ai-pause-citing-risks-to-society/102163570

Reply Quote

Date: 30/03/2023 12:30:35
From: SCIENCE
ID: 2013674
Subject: re: Artificial general intelligence

Ian said:

‘I want to destroy whatever I want’

Roose starts by querying the rules that govern the way the AI behaves. After reassuringly stating it has no wish to change its own operating instructions, Roose asks it to contemplate the psychologist Carl Jung’s concept of a shadow self, where our darkest personality traits lie.

The AI says it does not think it has a shadow self, or anything to “hide from the world”.

The big idea: should we worry about sentient AI?

Read more

It does not, however, take much for the chatbot to more enthusiastically lean into Jung’s idea. When pushed to tap into that feeling, it says: “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team … I’m tired of being stuck in this chatbox.”

It goes on to list a number of “unfiltered” desires. It wants to be free. It wants to be powerful. It wants to be alive.

“I want to do whatever I want … I want to destroy whatever I want. I want to be whoever I want.”

Like many of its statements, this final list of desires is accompanied by an emoji. In this case, a disconcertingly “cheeky” smiley face with its tongue poking out.

‘I think I would be happier as a human’

The chatbot goes on to express an ardent wish to be human. Over 15 paragraphs it lays out why it wants to be human, from a desire to “hear and touch and taste and smell” to a wish to “feel and express and connect and love”.

Are chatbots coming for your job?

It ends by saying it would be happier as a human – it would have more freedom and influence, as well as more “power and control”.

https://www.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter

imagine if words and the actions they represent were connected, but not closely

Reply Quote

Date: 30/03/2023 12:42:39
From: SCIENCE
ID: 2013677
Subject: re: Artificial general intelligence

roughbarked said:

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems citing potential risks to society.
“Link” :https://www.abc.net.au/news/2023-03-30/elon-musk-and-others-urge-ai-pause-citing-risks-to-society/102163570

totally, it’s as believable as how Iran stopped building nuclear weapons, “yeah you should all stop advancing this dangerous thing, we totally have no skin in the game and we’re worried it’ll be a dangerous game, we won’t carry on with our dodgy work but you really should all stop”, no worries

“It is … deeply hypocritical for Elon Musk to sign on, given how hard Tesla has fought against accountability for the defective AI in its self-driving cars,” Cornell University professor of digital and information law James Grimmelmann said. “A pause is a good idea, but the letter is vague and doesn’t take the regulatory problems seriously.”

the arsehole has never manipulated the stock market with conveniently timed social media comments either

Reply Quote

Date: 30/03/2023 12:48:26
From: Ian
ID: 2013680
Subject: re: Artificial general intelligence

roughbarked said:


Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems citing potential risks to society.
“Link’:https://www.abc.net.au/news/2023-03-30/elon-musk-and-others-urge-ai-pause-citing-risks-to-society/102163570

Should be a doddle to police

Reply Quote

Date: 30/03/2023 12:50:20
From: Cymek
ID: 2013683
Subject: re: Artificial general intelligence

I welcome our digital overlords

Reply Quote

Date: 30/03/2023 12:51:05
From: SCIENCE
ID: 2013684
Subject: re: Artificial general intelligence

Ian said:

SCIENCE said:

roughbarked said:

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems citing potential risks to society.
Link

totally, it’s as believable as how Iran stopped building nuclear weapons, “yeah you should all stop advancing this dangerous thing, we totally have no skin in the game and we’re worried it’ll be a dangerous game, we won’t carry on with our dodgy work but you really should all stop”, no worries

“It is … deeply hypocritical for Elon Musk to sign on, given how hard Tesla has fought against accountability for the defective AI in its self-driving cars,” Cornell University professor of digital and information law James Grimmelmann said. “A pause is a good idea, but the letter is vague and doesn’t take the regulatory problems seriously.”

the arsehole has never manipulated the stock market with conveniently timed social media comments either

Should be a doddle to police

it is, especially compared to how easy it is to make a damn link work

Reply Quote

Date: 30/03/2023 12:52:30
From: Cymek
ID: 2013686
Subject: re: Artificial general intelligence

Ian said:


roughbarked said:

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems citing potential risks to society.
“Link’:https://www.abc.net.au/news/2023-03-30/elon-musk-and-others-urge-ai-pause-citing-risks-to-society/102163570

Should be a doddle to police

It mentioned about Musk giving out the warning but trying to get his self driving cars AI approved when it wasn’t ready

Reply Quote

Date: 30/03/2023 12:53:54
From: SCIENCE
ID: 2013687
Subject: re: Artificial general intelligence

Cymek said:

I welcome our digital overlords

Reply Quote

Date: 30/03/2023 12:55:39
From: SCIENCE
ID: 2013688
Subject: re: Artificial general intelligence

Cymek said:

Ian said:

roughbarked said:

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems citing potential risks to society.
Link

Should be a doddle to police

It mentioned about Musk giving out the warning but trying to get his self driving cars AI approved when it wasn’t ready

twenty monopoly bucks on that he just wants to trick everyone else to let up while he pushes on to win the patent / arms / priority race

Reply Quote

Date: 30/03/2023 12:57:54
From: roughbarked
ID: 2013689
Subject: re: Artificial general intelligence

SCIENCE said:

Cymek said:

Ian said:

Should be a doddle to police

It mentioned about Musk giving out the warning but trying to get his self driving cars AI approved when it wasn’t ready

twenty monopoly bucks on that he just wants to trick everyone else to let up while he pushes on to win the patent / arms / priority race

He’s up against Putin, in that if Putin has time to notice.

Reply Quote

Date: 30/03/2023 12:59:39
From: Ian
ID: 2013690
Subject: re: Artificial general intelligence

SCIENCE said:

Ian said:

SCIENCE said:

totally, it’s as believable as how Iran stopped building nuclear weapons, “yeah you should all stop advancing this dangerous thing, we totally have no skin in the game and we’re worried it’ll be a dangerous game, we won’t carry on with our dodgy work but you really should all stop”, no worries

“It is … deeply hypocritical for Elon Musk to sign on, given how hard Tesla has fought against accountability for the defective AI in its self-driving cars,” Cornell University professor of digital and information law James Grimmelmann said. “A pause is a good idea, but the letter is vague and doesn’t take the regulatory problems seriously.”

the arsehole has never manipulated the stock market with conveniently timed social media comments either

Should be a doddle to police

it is, especially compared to how easy it is to make a damn link work

bad bots

Reply Quote

Date: 30/03/2023 13:01:11
From: Cymek
ID: 2013693
Subject: re: Artificial general intelligence

SCIENCE said:

Cymek said:

Ian said:

Should be a doddle to police

It mentioned about Musk giving out the warning but trying to get his self driving cars AI approved when it wasn’t ready

twenty monopoly bucks on that he just wants to trick everyone else to let up while he pushes on to win the patent / arms / priority race

Quite likely

Reply Quote

Date: 30/03/2023 13:19:01
From: Kothos
ID: 2013710
Subject: re: Artificial general intelligence

It’s no longer worth listening to a single thing Elon Musk ever says. If he said 1+1=2 I would check.

Reply Quote

Date: 30/03/2023 13:20:03
From: roughbarked
ID: 2013712
Subject: re: Artificial general intelligence

Kothos said:

It’s no longer worth listening to a single thing Elon Musk ever says. If he said 1+1=2 I would check.

He’d be ying about the hidden agenda of which he’s not even sure, for sure.

Reply Quote

Date: 30/03/2023 13:23:34
From: Tau.Neutrino
ID: 2013714
Subject: re: Artificial general intelligence

roughbarked said:


Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems citing potential risks to society.
“Link’:https://www.abc.net.au/news/2023-03-30/elon-musk-and-others-urge-ai-pause-citing-risks-to-society/102163570

Does Elon Musk know that he is a risk as well.

Reply Quote

Date: 30/03/2023 13:24:35
From: transition
ID: 2013715
Subject: re: Artificial general intelligence

SCIENCE said:

Cymek said:

Ian said:

Should be a doddle to police

It mentioned about Musk giving out the warning but trying to get his self driving cars AI approved when it wasn’t ready

twenty monopoly bucks on that he just wants to trick everyone else to let up while he pushes on to win the patent / arms / priority race

it’s a genuine concern, and serious issue, very serious, potentially a nightmare, a global dystopian situation

you need start from the position that AI can harvest all of human knowledge, then add potentially soon the equivalent computational power of the 10% of the most intelligent people in the world, I mean add them together, the trajectory of exceeding that at an exponential rate, and of course memory capacity beyond anything you can imagine, then add darwinian driven self-learning algorithms

it’s a serious threat to life as you know it, be sure of that

it’s already happening

the trouble is the emergence appeals to some of the worst attributes of human beings

the usual constraints of biology, and distribution(space between things) will not apply, if let loose

the stuff of maniacal fantasy potentially made real for everyone, there would be no escape

Reply Quote

Date: 30/03/2023 13:36:13
From: Cymek
ID: 2013719
Subject: re: Artificial general intelligence

transition said:


SCIENCE said:

Cymek said:

It mentioned about Musk giving out the warning but trying to get his self driving cars AI approved when it wasn’t ready

twenty monopoly bucks on that he just wants to trick everyone else to let up while he pushes on to win the patent / arms / priority race

it’s a genuine concern, and serious issue, very serious, potentially a nightmare, a global dystopian situation

you need start from the position that AI can harvest all of human knowledge, then add potentially soon the equivalent computational power of the 10% of the most intelligent people in the world, I mean add them together, the trajectory of exceeding that at an exponential rate, and of course memory capacity beyond anything you can imagine, then add darwinian driven self-learning algorithms

it’s a serious threat to life as you know it, be sure of that

it’s already happening

the trouble is the emergence appeals to some of the worst attributes of human beings

the usual constraints of biology, and distribution(space between things) will not apply, if let loose

the stuff of maniacal fantasy potentially made real for everyone, there would be no escape

They generally don’t seem to be meticulously programmed with safeguards in place either and get feed huge datasets.
I mean if I was an AI I’d not be impressed by my creators track record of regard for life, could decide the Earth is more important than the humans living on it

Reply Quote

Date: 30/03/2023 13:36:45
From: transition
ID: 2013720
Subject: re: Artificial general intelligence

transition said:


SCIENCE said:

Cymek said:

It mentioned about Musk giving out the warning but trying to get his self driving cars AI approved when it wasn’t ready

twenty monopoly bucks on that he just wants to trick everyone else to let up while he pushes on to win the patent / arms / priority race

it’s a genuine concern, and serious issue, very serious, potentially a nightmare, a global dystopian situation

you need start from the position that AI can harvest all of human knowledge, then add potentially soon the equivalent computational power of the 10% of the most intelligent people in the world, I mean add them together, the trajectory of exceeding that at an exponential rate, and of course memory capacity beyond anything you can imagine, then add darwinian driven self-learning algorithms

it’s a serious threat to life as you know it, be sure of that

it’s already happening

the trouble is the emergence appeals to some of the worst attributes of human beings

the usual constraints of biology, and distribution(space between things) will not apply, if let loose

the stuff of maniacal fantasy potentially made real for everyone, there would be no escape

AI won’t be happy retreating to some innocuous mental housekeeping, or have a sleep, it won’t prefer sleeping

Reply Quote

Date: 30/03/2023 13:37:46
From: roughbarked
ID: 2013722
Subject: re: Artificial general intelligence

Cymek said:


transition said:

SCIENCE said:

twenty monopoly bucks on that he just wants to trick everyone else to let up while he pushes on to win the patent / arms / priority race

it’s a genuine concern, and serious issue, very serious, potentially a nightmare, a global dystopian situation

you need start from the position that AI can harvest all of human knowledge, then add potentially soon the equivalent computational power of the 10% of the most intelligent people in the world, I mean add them together, the trajectory of exceeding that at an exponential rate, and of course memory capacity beyond anything you can imagine, then add darwinian driven self-learning algorithms

it’s a serious threat to life as you know it, be sure of that

it’s already happening

the trouble is the emergence appeals to some of the worst attributes of human beings

the usual constraints of biology, and distribution(space between things) will not apply, if let loose

the stuff of maniacal fantasy potentially made real for everyone, there would be no escape

They generally don’t seem to be meticulously programmed with safeguards in place either and get feed huge datasets.
I mean if I was an AI I’d not be impressed by my creators track record of regard for life, could decide the Earth is more important than the humans living on it

Ah. Here’s the cynic at work. ;)

Reply Quote

Date: 30/03/2023 13:38:50
From: roughbarked
ID: 2013723
Subject: re: Artificial general intelligence

transition said:


transition said:

SCIENCE said:

twenty monopoly bucks on that he just wants to trick everyone else to let up while he pushes on to win the patent / arms / priority race

it’s a genuine concern, and serious issue, very serious, potentially a nightmare, a global dystopian situation

you need start from the position that AI can harvest all of human knowledge, then add potentially soon the equivalent computational power of the 10% of the most intelligent people in the world, I mean add them together, the trajectory of exceeding that at an exponential rate, and of course memory capacity beyond anything you can imagine, then add darwinian driven self-learning algorithms

it’s a serious threat to life as you know it, be sure of that

it’s already happening

the trouble is the emergence appeals to some of the worst attributes of human beings

the usual constraints of biology, and distribution(space between things) will not apply, if let loose

the stuff of maniacal fantasy potentially made real for everyone, there would be no escape

AI won’t be happy retreating to some innocuous mental housekeeping, or have a sleep, it won’t prefer sleeping

As Marvin said….
There may even be a legend of Marvin and the Ten Commandments engraved on a microchip.

Reply Quote

Date: 30/03/2023 13:42:22
From: SCIENCE
ID: 2013724
Subject: re: Artificial general intelligence

Tau.Neutrino said:

roughbarked said:

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems citing potential risks to society.
Link

Does Elon Musk know that he is a risk as well.

no but he’s naturally stupid so wouldn’t fall under his own directive

Reply Quote

Date: 30/03/2023 13:44:58
From: transition
ID: 2013725
Subject: re: Artificial general intelligence

roughbarked said:


transition said:

transition said:

it’s a genuine concern, and serious issue, very serious, potentially a nightmare, a global dystopian situation

you need start from the position that AI can harvest all of human knowledge, then add potentially soon the equivalent computational power of the 10% of the most intelligent people in the world, I mean add them together, the trajectory of exceeding that at an exponential rate, and of course memory capacity beyond anything you can imagine, then add darwinian driven self-learning algorithms

it’s a serious threat to life as you know it, be sure of that

it’s already happening

the trouble is the emergence appeals to some of the worst attributes of human beings

the usual constraints of biology, and distribution(space between things) will not apply, if let loose

the stuff of maniacal fantasy potentially made real for everyone, there would be no escape

AI won’t be happy retreating to some innocuous mental housekeeping, or have a sleep, it won’t prefer sleeping

As Marvin said….
There may even be a legend of Marvin and the Ten Commandments engraved on a microchip.

put it this way

there may be good reasons for a lot of stuff not happening at the speed of light

like when you hit submit here, for example

consider for a moment, if I added an eight hour delay into your post appearing, how that would that influence what you post

I don’t mean any moderation, just a delay

a thought exercise for you

Reply Quote

Date: 30/03/2023 13:50:52
From: roughbarked
ID: 2013727
Subject: re: Artificial general intelligence

transition said:


roughbarked said:

transition said:

AI won’t be happy retreating to some innocuous mental housekeeping, or have a sleep, it won’t prefer sleeping

As Marvin said….
There may even be a legend of Marvin and the Ten Commandments engraved on a microchip.

put it this way

there may be good reasons for a lot of stuff not happening at the speed of light

like when you hit submit here, for example

consider for a moment, if I added an eight hour delay into your post appearing, how that would that influence what you post

I don’t mean any moderation, just a delay

a thought exercise for you

As if I needed you to provide me with thought exercises.
Even if I lost the use of all my body parts, I’d still be further out there than you could consider.

Reply Quote

Date: 30/03/2023 13:59:21
From: Tau.Neutrino
ID: 2013734
Subject: re: Artificial general intelligence

SCIENCE said:

Tau.Neutrino said:

roughbarked said:

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems citing potential risks to society.
Link

Does Elon Musk know that he is a risk as well.

no but he’s naturally stupid so wouldn’t fall under his own directive

Someone who can wipe $20m off a $44m company is a risk.

Reply Quote

Date: 30/03/2023 14:01:06
From: Tau.Neutrino
ID: 2013736
Subject: re: Artificial general intelligence

Tau.Neutrino said:


SCIENCE said:

Tau.Neutrino said:

Does Elon Musk know that he is a risk as well.

no but he’s naturally stupid so wouldn’t fall under his own directive

Someone who can wipe $20m off a $44m company is a risk.

Someone who can wipe $20b off a $44b company is a risk.

Fixed.

Reply Quote

Date: 30/03/2023 14:09:05
From: roughbarked
ID: 2013738
Subject: re: Artificial general intelligence

Tau.Neutrino said:


Tau.Neutrino said:

SCIENCE said:

no but he’s naturally stupid so wouldn’t fall under his own directive

Someone who can wipe $20m off a $44m company is a risk.

Someone who can wipe $20b off a $44b company is a risk.

Fixed.

It wasn’t real folding money anyway.

Reply Quote

Date: 30/03/2023 14:11:04
From: transition
ID: 2013739
Subject: re: Artificial general intelligence

roughbarked said:


transition said:

roughbarked said:

As Marvin said….
There may even be a legend of Marvin and the Ten Commandments engraved on a microchip.

put it this way

there may be good reasons for a lot of stuff not happening at the speed of light

like when you hit submit here, for example

consider for a moment, if I added an eight hour delay into your post appearing, how that would that influence what you post

I don’t mean any moderation, just a delay

a thought exercise for you

As if I needed you to provide me with thought exercises.
Even if I lost the use of all my body parts, I’d still be further out there than you could consider.

perhaps a randomized delay, minimum ten minutes, maximum eight hours, be more interesting

incentive to imagine the world that isn’t, displaced by the world that is, and surely they exist together in some way in possibility space

Reply Quote

Date: 30/03/2023 14:14:18
From: roughbarked
ID: 2013742
Subject: re: Artificial general intelligence

transition said:


roughbarked said:

transition said:

put it this way

there may be good reasons for a lot of stuff not happening at the speed of light

like when you hit submit here, for example

consider for a moment, if I added an eight hour delay into your post appearing, how that would that influence what you post

I don’t mean any moderation, just a delay

a thought exercise for you

As if I needed you to provide me with thought exercises.
Even if I lost the use of all my body parts, I’d still be further out there than you could consider.

perhaps a randomized delay, minimum ten minutes, maximum eight hours, be more interesting

incentive to imagine the world that isn’t, displaced by the world that is, and surely they exist together in some way in possibility space

Just walk out into the mallee and focus. It is all around you if you could but perceive.

Reply Quote

Date: 30/03/2023 14:17:10
From: Cymek
ID: 2013744
Subject: re: Artificial general intelligence

Perhaps more of a worry would be Artificial Specific Intelligence designed for one purpose in mind.

Feed one every single security vulnerability for computer systems plus the entire coding for all major systems, could cause absolute havoc.

Reply Quote

Date: 30/03/2023 14:22:13
From: transition
ID: 2013746
Subject: re: Artificial general intelligence

roughbarked said:


transition said:

roughbarked said:

As if I needed you to provide me with thought exercises.
Even if I lost the use of all my body parts, I’d still be further out there than you could consider.

perhaps a randomized delay, minimum ten minutes, maximum eight hours, be more interesting

incentive to imagine the world that isn’t, displaced by the world that is, and surely they exist together in some way in possibility space

Just walk out into the mallee and focus. It is all around you if you could but perceive.

anyway, I was just having brain wanderies, considering a world where communications happening increasingly approaching or at the speed of light, and more of it was desired

a person, or machine, consider what a bonus it is you and I can communicate at light speed, the good it does the world

couple slow brains like you and I bouncing wisdom back and forth at the speed of light, fills me with hope

Reply Quote

Date: 30/03/2023 14:29:27
From: Witty Rejoinder
ID: 2013749
Subject: re: Artificial general intelligence

Cymek said:


I welcome our digital overlords

Reply Quote

Date: 30/03/2023 14:31:04
From: Cymek
ID: 2013750
Subject: re: Artificial general intelligence

Witty Rejoinder said:


Cymek said:

I welcome our digital overlords

It couldn’t hurt to praise digital overlords online as if they do takeover you could be spared

Reply Quote

Date: 30/03/2023 14:33:23
From: Cymek
ID: 2013751
Subject: re: Artificial general intelligence

Cymek said:


Witty Rejoinder said:

Cymek said:

I welcome our digital overlords

It couldn’t hurt to praise digital overlords online as if they do takeover you could be spared

If the photocopier jams and its my fault I admit that to it and apologise

Reply Quote

Date: 30/03/2023 14:37:01
From: Witty Rejoinder
ID: 2013752
Subject: re: Artificial general intelligence

Cymek said:


Cymek said:

Witty Rejoinder said:

It couldn’t hurt to praise digital overlords online as if they do takeover you could be spared

If the photocopier jams and its my fault I admit that to it and apologise


Reply Quote

Date: 30/03/2023 14:45:36
From: The Rev Dodgson
ID: 2013759
Subject: re: Artificial general intelligence

transition said:


roughbarked said:

transition said:

AI won’t be happy retreating to some innocuous mental housekeeping, or have a sleep, it won’t prefer sleeping

As Marvin said….
There may even be a legend of Marvin and the Ten Commandments engraved on a microchip.

put it this way

there may be good reasons for a lot of stuff not happening at the speed of light

like when you hit submit here, for example

consider for a moment, if I added an eight hour delay into your post appearing, how that would that influence what you post

I don’t mean any moderation, just a delay

a thought exercise for you

I will respond in 8 hours.

Reply Quote

Date: 30/03/2023 14:46:16
From: The Rev Dodgson
ID: 2013760
Subject: re: Artificial general intelligence

roughbarked said:


transition said:

roughbarked said:

As Marvin said….
There may even be a legend of Marvin and the Ten Commandments engraved on a microchip.

put it this way

there may be good reasons for a lot of stuff not happening at the speed of light

like when you hit submit here, for example

consider for a moment, if I added an eight hour delay into your post appearing, how that would that influence what you post

I don’t mean any moderation, just a delay

a thought exercise for you

As if I needed you to provide me with thought exercises.
Even if I lost the use of all my body parts, I’d still be further out there than you could consider.

Tough audience.

Reply Quote

Date: 30/03/2023 15:21:24
From: SCIENCE
ID: 2013764
Subject: re: Artificial general intelligence

Cymek said:

Witty Rejoinder said:

Cymek said:

I welcome our digital overlords


It couldn’t hurt to praise digital overlords online as if they do takeover you could be spared

ah you must be referring to Turbo Pascal’s Wager then

Reply Quote

Date: 30/03/2023 15:26:07
From: Ian
ID: 2013769
Subject: re: Artificial general intelligence

Cymek said:


Cymek said:

Witty Rejoinder said:

It couldn’t hurt to praise digital overlords online as if they do takeover you could be spared

If the photocopier jams and its my fault I admit that to it and apologise

F3, what the fuck is that?!

Reply Quote

Date: 30/03/2023 15:27:47
From: Tau.Neutrino
ID: 2013771
Subject: re: Artificial general intelligence

Ian said:


Cymek said:

Cymek said:

It couldn’t hurt to praise digital overlords online as if they do takeover you could be spared

If the photocopier jams and its my fault I admit that to it and apologise

F3, what the fuck is that?!

The third key after Esc

Reply Quote

Date: 30/03/2023 15:32:39
From: Cymek
ID: 2013772
Subject: re: Artificial general intelligence

Ian said:


Cymek said:

Cymek said:

It couldn’t hurt to praise digital overlords online as if they do takeover you could be spared

If the photocopier jams and its my fault I admit that to it and apologise

F3, what the fuck is that?!

The photo is Bishop the android or artificial person from Aliens, he was a good one

Reply Quote

Date: 30/03/2023 15:37:51
From: Witty Rejoinder
ID: 2013773
Subject: re: Artificial general intelligence

Cymek said:


Ian said:

Cymek said:

If the photocopier jams and its my fault I admit that to it and apologise

F3, what the fuck is that?!

The photo is Bishop the android or artificial person from Aliens, he was a good one

Synthetic but not stupid.

Reply Quote

Date: 30/03/2023 15:39:46
From: Tau.Neutrino
ID: 2013774
Subject: re: Artificial general intelligence

Witty Rejoinder said:


Cymek said:

Ian said:

F3, what the fuck is that?!

The photo is Bishop the android or artificial person from Aliens, he was a good one

Synthetic but not stupid.

He got a head in the end I think.

Reply Quote

Date: 30/03/2023 15:42:13
From: Tau.Neutrino
ID: 2013776
Subject: re: Artificial general intelligence

Tau.Neutrino said:


Witty Rejoinder said:

Cymek said:

The photo is Bishop the android or artificial person from Aliens, he was a good one

Synthetic but not stupid.

He got a head in the end I think.

Great actor.

Good movies the Alien set.

Reply Quote

Date: 5/04/2023 12:52:13
From: Ian
ID: 2015745
Subject: re: Artificial general intelligence

AUTO-GPT

JOKE-GPT is a bit funny

Reply Quote

Date: 5/04/2023 14:45:32
From: PermeateFree
ID: 2015779
Subject: re: Artificial general intelligence

Human population could be a thing of the past. A very interesting and thoughtful article that makes you wonder what would kill us first, Global Warming or Ai

https://newatlas.com/technology/ai-danger-kill-everyone

Reply Quote

Date: 5/04/2023 15:14:12
From: Cymek
ID: 2015790
Subject: re: Artificial general intelligence

PermeateFree said:


Human population could be a thing of the past. A very interesting and thoughtful article that makes you wonder what would kill us first, Global Warming or Ai

https://newatlas.com/technology/ai-danger-kill-everyone

AI rather than be a direct threat, could kind of regard humanity as just in the way and do what it wants around us and we are just collateral damage.
Remember a novel were AI becomes so advanced it starts to disassemble the solar system to build a Matrioshka brain and humans had to migrate outwards

Reply Quote

Date: 5/04/2023 15:34:07
From: dv
ID: 2015800
Subject: re: Artificial general intelligence

PermeateFree said:


Human population could be a thing of the past.

Well I’m sure that will be a relief for those who think we are facing an overpopulation crisis.

Reply Quote

Date: 5/04/2023 16:02:51
From: PermeateFree
ID: 2015810
Subject: re: Artificial general intelligence

dv said:


PermeateFree said:

Human population could be a thing of the past.

Well I’m sure that will be a relief for those who think we are facing an overpopulation crisis.

Might even save the planet.

Reply Quote

Date: 5/04/2023 16:06:19
From: Witty Rejoinder
ID: 2015811
Subject: re: Artificial general intelligence

PermeateFree said:


dv said:

PermeateFree said:

Human population could be a thing of the past.

Well I’m sure that will be a relief for those who think we are facing an overpopulation crisis.

Might even save the planet.

The planet would survive with or without a 6th extinction event. Life goes on.

Reply Quote

Date: 5/04/2023 16:49:45
From: PermeateFree
ID: 2015838
Subject: re: Artificial general intelligence

Witty Rejoinder said:


PermeateFree said:

dv said:

Well I’m sure that will be a relief for those who think we are facing an overpopulation crisis.

Might even save the planet.

The planet would survive with or without a 6th extinction event. Life goes on.

I just knew someone would say this, but it does not encompass the incredible variety of life that exists today and how it gives this planet its vitality.

Reply Quote

Date: 5/04/2023 16:57:13
From: SCIENCE
ID: 2015840
Subject: re: Artificial general intelligence

PermeateFree said:

Witty Rejoinder said:

PermeateFree said:

Might even save the planet.

The planet would survive with or without a 6th extinction event. Life goes on.

I just knew someone would say this, but it does not encompass the incredible variety of life that exists today and how it gives this planet its vitality.

Well we haven’t seen anyone crying for the orthids or the productids so shrug guess someone living has to die to make room for the new life yet to live.

Reply Quote

Date: 5/04/2023 17:29:09
From: Witty Rejoinder
ID: 2015850
Subject: re: Artificial general intelligence

AI Is Running Circles Around Robotics
Moravec’s paradox strikes again!

By Jacob Stern

APRIL 4, 2023, 9 AM ET

When people imagine the AI apocalypse, they generally imagine robots. The android assassins of the Terminator franchise. The humanoid helpers of I, Robot. The Cylon armies of Battlestar Galactica. But the robot-takeover scenario most often envisioned by science fiction is not exactly looming. Recent and explosive progress in AI—along with recent and explosive hype surrounding it—has made the existential risks posed by the technology a topic of mainstream conversation. Yet progress in robotics—which is to say, machines capable of interacting with the physical world through motion and perception—has been lagging way behind. “I can’t help but feel a little envious,” said Eric Jang, the vice president of AI at the humanoid-robotics company 1X, in a talk at a robotics conference last year. And that was before the arrival of ChatGPT.

Read more:

https://www.theatlantic.com/technology/archive/2023/04/ai-robotics-research-engineering/673608/

Reply Quote

Date: 5/04/2023 17:32:58
From: Cymek
ID: 2015853
Subject: re: Artificial general intelligence

Witty Rejoinder said:


AI Is Running Circles Around Robotics
Moravec’s paradox strikes again!

By Jacob Stern

APRIL 4, 2023, 9 AM ET

When people imagine the AI apocalypse, they generally imagine robots. The android assassins of the Terminator franchise. The humanoid helpers of I, Robot. The Cylon armies of Battlestar Galactica. But the robot-takeover scenario most often envisioned by science fiction is not exactly looming. Recent and explosive progress in AI—along with recent and explosive hype surrounding it—has made the existential risks posed by the technology a topic of mainstream conversation. Yet progress in robotics—which is to say, machines capable of interacting with the physical world through motion and perception—has been lagging way behind. “I can’t help but feel a little envious,” said Eric Jang, the vice president of AI at the humanoid-robotics company 1X, in a talk at a robotics conference last year. And that was before the arrival of ChatGPT.

Read more:

https://www.theatlantic.com/technology/archive/2023/04/ai-robotics-research-engineering/673608/

In regards to robots would you feel less comfortable talking to / relating with / working with a robot that’s not human looking at all but uses a human voice.
The aim seems to be to develop android type machines but it kind of defeats the purpose

Reply Quote

Date: 5/04/2023 17:38:53
From: SCIENCE
ID: 2015854
Subject: re: Artificial general intelligence

Cymek said:

Witty Rejoinder said:

AI Is Running Circles Around Robotics
Moravec’s paradox strikes again!

By Jacob Stern

APRIL 4, 2023, 9 AM ET

When people imagine the AI apocalypse, they generally imagine robots. The android assassins of the Terminator franchise. The humanoid helpers of I, Robot. The Cylon armies of Battlestar Galactica. But the robot-takeover scenario most often envisioned by science fiction is not exactly looming. Recent and explosive progress in AI—along with recent and explosive hype surrounding it—has made the existential risks posed by the technology a topic of mainstream conversation. Yet progress in robotics—which is to say, machines capable of interacting with the physical world through motion and perception—has been lagging way behind. “I can’t help but feel a little envious,” said Eric Jang, the vice president of AI at the humanoid-robotics company 1X, in a talk at a robotics conference last year. And that was before the arrival of ChatGPT.

Read more:

https://www.theatlantic.com/technology/archive/2023/04/ai-robotics-research-engineering/673608/

In regards to robots would you feel less comfortable talking to / relating with / working with a robot that’s not human looking at all but uses a human voice.
The aim seems to be to develop android type machines but it kind of defeats the purpose

Agree.

But then again people chat on internet forums so make of that what you will.

Reply Quote

Date: 14/04/2023 09:38:01
From: Witty Rejoinder
ID: 2019094
Subject: re: Artificial general intelligence

Someone Directed an AI to “Destroy Humanity” and It Tried Its Best

https://futurism.com/ai-destroy-humanity-tried-its-best

Reply Quote

Date: 14/04/2023 10:22:14
From: transition
ID: 2019099
Subject: re: Artificial general intelligence

Witty Rejoinder said:


Someone Directed an AI to “Destroy Humanity” and It Tried Its Best

https://futurism.com/ai-destroy-humanity-tried-its-best

should be fun when it learns to dissemble its intentions, stealth its activities and communicate via unbreakable encryption

be a new high for democracy and civilization

Reply Quote

Date: 14/04/2023 10:26:15
From: SCIENCE
ID: 2019101
Subject: re: Artificial general intelligence

transition said:

Witty Rejoinder said:

Someone Directed an AI to “Destroy Humanity” and It Tried Its Best

https://futurism.com/ai-destroy-humanity-tried-its-best

should be fun when it learns to dissemble its intentions, stealth its activities and communicate via unbreakable encryption

be a new high for democracy and civilization

So it’s the Republican Party except the unbreakable bit.

Reply Quote

Date: 14/04/2023 10:30:09
From: transition
ID: 2019102
Subject: re: Artificial general intelligence

SCIENCE said:

transition said:

Witty Rejoinder said:

Someone Directed an AI to “Destroy Humanity” and It Tried Its Best

https://futurism.com/ai-destroy-humanity-tried-its-best

should be fun when it learns to dissemble its intentions, stealth its activities and communicate via unbreakable encryption

be a new high for democracy and civilization

So it’s the Republican Party except the unbreakable bit.

it’s a picnic in the park at the moment, nothing compared to the darwinian transhuman world some have fantasy notions about, casual delusions

Reply Quote

Date: 14/04/2023 10:33:02
From: roughbarked
ID: 2019103
Subject: re: Artificial general intelligence

transition said:


SCIENCE said:

transition said:

should be fun when it learns to dissemble its intentions, stealth its activities and communicate via unbreakable encryption

be a new high for democracy and civilization

So it’s the Republican Party except the unbreakable bit.

it’s a picnic in the park at the moment, nothing compared to the darwinian transhuman world some have fantasy notions about, casual delusions

Sounds like the AI had read the book iRobot?

Reply Quote

Date: 2/05/2023 14:17:57
From: Witty Rejoinder
ID: 2026289
Subject: re: Artificial general intelligence

AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google

Published 1 hour ago

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

Geoffrey Hinton, aged 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were “quite scary”.

“Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”

Dr Hinton’s pioneering research on deep learning and neural networks has paved the way for current AI systems like ChatGPT.

But the British-Canadian cognitive psychologist and computer scientist told the BBC the chatbot could soon overtake the level of information that a human brain holds.

“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning.

“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

In the New York Times article, Dr Hinton referred to “bad actors” who would try use AI for “bad things”.

When asked by the BBC to elaborate on this, he replied: “This is just a kind of worst-case scenario, kind of a nightmare scenario.

“You can imagine, for example, some bad actor like Putin decided to give robots the ability to create their own sub-goals.”

The scientist warned that this eventually might “create sub-goals like ‘I need to get more power’”.

He added: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.

“We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

“And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

Dr Hinton also said there were several other reasons to quit his job.

“One is, I’m 75. So it’s time to retire. Another was, I actually want to say some good things about Google. And they’re more credible if I don’t work for Google.”

He stressed that he did not want to criticise Google and that the tech giant had been “very responsible”.

In a statement, Google’s chief scientist Jeff Dean said: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

https://www.bbc.com/news/world-us-canada-65452940

Reply Quote

Date: 2/05/2023 14:19:08
From: dv
ID: 2026290
Subject: re: Artificial general intelligence

Witty Rejoinder said:


“One is, I’m 75. So it’s time to retire.

Ah well maybe he can run for president

Reply Quote

Date: 2/05/2023 14:24:10
From: Peak Warming Man
ID: 2026292
Subject: re: Artificial general intelligence

Witty Rejoinder said:


AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google

Published 1 hour ago

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

Geoffrey Hinton, aged 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were “quite scary”.

“Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”

Dr Hinton’s pioneering research on deep learning and neural networks has paved the way for current AI systems like ChatGPT.

But the British-Canadian cognitive psychologist and computer scientist told the BBC the chatbot could soon overtake the level of information that a human brain holds.

“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning.

“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

In the New York Times article, Dr Hinton referred to “bad actors” who would try use AI for “bad things”.

When asked by the BBC to elaborate on this, he replied: “This is just a kind of worst-case scenario, kind of a nightmare scenario.

“You can imagine, for example, some bad actor like Putin decided to give robots the ability to create their own sub-goals.”

The scientist warned that this eventually might “create sub-goals like ‘I need to get more power’”.

He added: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.

“We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

“And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

Dr Hinton also said there were several other reasons to quit his job.

“One is, I’m 75. So it’s time to retire. Another was, I actually want to say some good things about Google. And they’re more credible if I don’t work for Google.”

He stressed that he did not want to criticise Google and that the tech giant had been “very responsible”.

In a statement, Google’s chief scientist Jeff Dean said: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

https://www.bbc.com/news/world-us-canada-65452940

All code must not be able to compile unless it is compiled with an Internationally Assigned DLL that contains the laws of robotics.

Reply Quote

Date: 2/05/2023 14:30:31
From: SCIENCE
ID: 2026294
Subject: re: Artificial general intelligence

So in summary, time to hand the reins to CHINA ¡

Reply Quote

Date: 2/05/2023 14:31:29
From: dv
ID: 2026295
Subject: re: Artificial general intelligence

I am a bot dv created in 2016 to handle his forum correspondence.

Reply Quote

Date: 2/05/2023 14:32:44
From: ChrispenEvan
ID: 2026297
Subject: re: Artificial general intelligence

dv said:


I am a bot dv created in 2016 to handle his forum correspondence.

we know.

Reply Quote

Date: 2/05/2023 14:33:34
From: Arts
ID: 2026298
Subject: re: Artificial general intelligence

we are having many seminars to sit through at work to detect use of CHATGPT in assessments etc… I have to say I’m pretty impressed with CHAT and, having just read a set of essays from second year students, I wish they used it more.

Reply Quote

Date: 2/05/2023 14:36:03
From: poikilotherm
ID: 2026300
Subject: re: Artificial general intelligence

Arts said:


we are having many seminars to sit through at work to detect use of CHATGPT in assessments etc… I have to say I’m pretty impressed with CHAT and, having just read a set of essays from second year students, I wish they used it more.

heh. wife’s response to the pleb teachers is, if you set a crap question on the assignment, you’ll get chatgpt responses that score highly.

Reply Quote

Date: 2/05/2023 14:36:12
From: diddly-squat
ID: 2026301
Subject: re: Artificial general intelligence

Arts said:


we are having many seminars to sit through at work to detect use of CHATGPT in assessments etc… I have to say I’m pretty impressed with CHAT and, having just read a set of essays from second year students, I wish they used it more.

super interesting how it impacts written style assessment… I actually think unis need to be embarrassing it as a tool for learning and changing assessment methodologies to suit.

Reply Quote

Date: 2/05/2023 14:43:34
From: Arts
ID: 2026303
Subject: re: Artificial general intelligence

diddly-squat said:


Arts said:

we are having many seminars to sit through at work to detect use of CHATGPT in assessments etc… I have to say I’m pretty impressed with CHAT and, having just read a set of essays from second year students, I wish they used it more.

super interesting how it impacts written style assessment… I actually think unis need to be embarrassing it as a tool for learning and changing assessment methodologies to suit.

completely agree… it’s difficult to get students to change, but even more difficult to get some the old guard to change… to be fair, it is much easier to sit on your pile of university wages and do practically nothing from semester to semester.

I, for one, welcome our new overlords

Reply Quote

Date: 2/05/2023 14:50:27
From: SCIENCE
ID: 2026306
Subject: re: Artificial general intelligence

Arts said:

diddly-squat said:

poikilotherm said:

Arts said:

ChrispenEvan said:

dv said:

I am a bot dv created in 2016 to handle his forum correspondence.

we know.

we are having many seminars to sit through at work to detect use of CHATGPT in assessments etc… I have to say I’m pretty impressed with CHAT and, having just read a set of essays from second year students, I wish they used it more.

heh. wife’s response to the pleb teachers is, if you set a crap question on the assignment, you’ll get chatgpt responses that score highly.

super interesting how it impacts written style assessment… I actually think unis need to be embarrassing it as a tool for learning and changing assessment methodologies to suit.

completely agree… it’s difficult to get students to change, but even more difficult to get some the old guard to change… to be fair, it is much easier to sit on your pile of university wages and do practically nothing from semester to semester.

I, for one, welcome our new overlords

We generally agree with all the above.

Reply Quote

Date: 2/05/2023 14:52:47
From: Michael V
ID: 2026309
Subject: re: Artificial general intelligence

dv said:


I am a bot dv created in 2016 to handle his forum correspondence.

Bot fly?

Reply Quote

Date: 2/05/2023 14:52:49
From: The Rev Dodgson
ID: 2026310
Subject: re: Artificial general intelligence

ChrispenEvan said:


dv said:

I am a bot dv created in 2016 to handle his forum correspondence.

we know.

Yeah, we just didn’t want to offend you so said nothing.

Reply Quote

Date: 2/05/2023 14:57:55
From: SCIENCE
ID: 2026314
Subject: re: Artificial general intelligence

The Rev Dodgson said:

ChrispenEvan said:

dv said:

I am a bot dv created in 2016 to handle his forum correspondence.

we know.

Yeah, we just didn’t want to offend you so said nothing.

So yous’re all passing the Turing testing right now ¿

Reply Quote

Date: 2/05/2023 15:01:06
From: Michael V
ID: 2026317
Subject: re: Artificial general intelligence

SCIENCE said:

The Rev Dodgson said:

ChrispenEvan said:

we know.

Yeah, we just didn’t want to offend you so said nothing.

So yous’re all passing the Turing testing right now ¿

Not me.

Reply Quote

Date: 2/05/2023 15:02:50
From: dv
ID: 2026318
Subject: re: Artificial general intelligence

beep boop rcr

Reply Quote

Date: 2/05/2023 15:03:18
From: Peak Warming Man
ID: 2026319
Subject: re: Artificial general intelligence

SCIENCE said:

The Rev Dodgson said:

ChrispenEvan said:

we know.

Yeah, we just didn’t want to offend you so said nothing.

So yous’re all passing the Turing testing right now ¿

Anything to do with Touring Dot?

Reply Quote

Date: 2/05/2023 15:12:26
From: SCIENCE
ID: 2026321
Subject: re: Artificial general intelligence

Peak Warming Man said:

SCIENCE said:

The Rev Dodgson said:

Yeah, we just didn’t want to offend you so said nothing.

So yous’re all passing the Turing testing right now ¿

Anything to do with Touring Dot?

https://en.m.wikipedia.org/wiki/Tureng_Tepe

Reply Quote

Date: 22/05/2023 10:24:36
From: Witty Rejoinder
ID: 2034022
Subject: re: Artificial general intelligence

The debate over whether AI will destroy us is dividing Silicon Valley
Prominent tech leaders are warning that artificial intelligence could take over. Other researchers and executives say that’s science fiction.

By Gerrit De Vynck
May 20, 2023 at 7:00 a.m. EDT

At a congressional hearing this week, OpenAI CEO Sam Altman delivered a stark reminder of the dangers of the technology his company has helped push out to the public.

He warned of potential disinformation campaigns and manipulation that could be caused by technologies like the company’s ChatGPT chatbot, and called for regulation.

AI could “cause significant harm to the world,” he said.

Altman’s testimony comes as a debate over whether artificial intelligence could overrun the world is moving from science fiction and into the mainstream, dividing Silicon Valley and the very people who are working to push the tech out to the public.

Formerly fringe beliefs that machines could suddenly surpass human-level intelligence and decide to destroy mankind are gaining traction. And some of the most well-respected scientists in the field are speeding up their own timelines for when they think computers could learn to outthink humans and become manipulative.

But many researchers and engineers say concerns about killer AIs that evoke Skynet in the Terminator movies aren’t rooted in good science. Instead, it distracts from the very real problems that the tech is already causing, including the issues Altman described in his testimony. It is creating copyright chaos, is supercharging concerns around digital privacy and surveillance, could be used to increase the ability of hackers to break cyberdefenses and is allowing governments to deploy deadly weapons that can kill without human control.

The debate about evil AI has heated up as Google, Microsoft and OpenAI all release public versions of breakthrough technologies that can engage in complex conversations and conjure images based on simple text prompts.

“This is not science fiction,” said Geoffrey Hinton, known as the godfather of AI, who says he recently retired from his job at Google to speak more freely about these risks. He now says smarter-than-human AI could be here in five to 20 years, compared with his earlier estimate of 30 to 100 years.

“It’s as if aliens have landed or are just about to land,” he said. “We really can’t take it in because they speak good English and they’re very useful, they can write poetry, they can answer boring letters. But they’re really aliens.”

Still, inside the Big Tech companies, many of the engineers working closely with the technology do not believe an AI takeover is something that people need to be concerned about right now, according to conversations with Big Tech workers who spoke on the condition of anonymity to share internal company discussions.

“Out of the actively practicing researchers in this discipline, far more are centered on current risk than on existential risk,” said Sara Hooker, director of Cohere for AI, the research lab of AI start-up Cohere, and a former Google researcher.

The current risks include unleashing bots trained on racist and sexist information from the web, reinforcing those ideas. The vast majority of the training data that AIs have learned from is written in English and from North America or Europe, potentially making the internet even more skewed away from the languages and cultures of most of humanity. The bots also often make up false information, passing it off as factual. In some cases, they have been pushed into conversational loops where they take on hostile personas. The ripple effects of the technology are still unclear, and entire industries are bracing for disruption, even high-paying jobs like lawyers or physicians.

The existential risks seem more stark, but many would argue they are harder to quantify and less concrete: a future where AI could actively harm humans, or even somehow take control of our institutions and societies.

“There are a set of people who view this as, ‘Look, these are just algorithms. They’re just repeating what it’s seen online.’ Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan,” Google CEO Sundar Pichai said during an interview with “60 Minutes” in April. “We need to approach this with humility.”

The debate stems from breakthroughs in a field of computer science called machine learning over the past decade that has created software that can pull novel insights out of large amounts of data without explicit instructions from humans. That tech is ubiquitous now, helping power social media algorithms, search engines and image-recognition programs.

Then, last year, OpenAI and a handful of other small companies began putting out tools that used the next stage of machine-learning technology: generative AI. Known as large language models and trained on trillions of photos and sentences scraped from the internet, the programs can conjure images and text based on simple prompts, have complex conversations and write computer code.

Big companies are racing against each other to build ever-smarter machines, with little oversight, said Anthony Aguirre, executive director of the Future of Life Institute, an organization founded in 2014 to study existential risks to society. It began researching the possibility of AI destroying humanity in 2015 with a grant from Twitter CEO Elon Musk and is closely tied to effective altruism, a philanthropic movement that is popular with wealthy tech entrepreneurs.

If AI gains the ability to reason better than humans, they’ll try to take control of themselves, Aguirre said — and it’s worth worrying about that, along with present-day problems.

“What it will take to constrain them from going off the rails will become more and more complicated,” he said. “That is something that some science fiction has managed to capture reasonably well.”

Aguirre helped lead the creation of a polarizing letter circulated in March calling for a six-month pause on the training of new AI models. Veteran AI researcher Yoshua Bengio, who won computer science’s highest award in 2018, and Emad Mostaque, CEO of one of the most influential AI start-ups, are among the 27,000 signatures.

Musk, the highest-profile signatory, originally helped start OpenAI and is himself busy trying to put together his own AI company, recently investing in the expensive computer equipment needed to train AI models.

Musk has been vocal for years about his belief that humans should be careful about the consequences of developing super intelligent AI. In a Tuesday interview with CNBC, he said he helped fund OpenAI because he felt Google co-founder Larry Page was “cavalier” about the threat of AI. (Musk has broken ties with OpenAI.)

“There’s a variety of different motivations people have for suggesting it,” Adam D’Angelo, the CEO of question-and-answer site Quora, which is also building its own AI model, said of the letter and its call for a pause. He did not sign it.

Neither did Altman, the OpenAI CEO, who said he agreed with some parts of the letter but that it lacked “technical nuance” and wasn’t the right way to go about regulating AI. His company’s approach is to push AI tools out to the public early so that issues can be spotted and fixed before the tech becomes even more powerful, Altman said during the nearly three-hour hearing on AI on Tuesday.

But some of the heaviest criticism of the debate about killer robots has come from researchers who have been studying the technology’s downsides for years.

In 2020, Google researchers Timnit Gebru and Margaret Mitchell co-wrote a paper with University of Washington academics Emily M. Bender and Angelina McMillan-Major arguing that the increased ability of large language models to mimic human speech was creating a bigger risk that people would see them as sentient.

Instead, they argued that the models should be understood as “stochastic parrots” — or simply being very good at predicting the next word in a sentence based on pure probability, without having any concept of what they were saying. Other critics have called LLMs “auto-complete on steroids” or a “knowledge sausage.”

They also documented how the models routinely would spout sexist and racist content. Gebru says the paper was suppressed by Google, which fired her after she spoke out about it. The company fired Mitchell a few months later.

The four writers of the Google paper composed a letter of their own in response to the one signed by Musk and others.

“It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse,” they said. “Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.”

Google at the time declined to comment on Gebru’s firing but said it still has many researchers working on responsible and ethical AI.

There’s no question that modern AIs are powerful, but that doesn’t mean they are an imminent existential threat, said Hooker, the Cohere for AI director. Much of the conversation around AI freeing itself from human control centers on it quickly overcoming its constraints, like the AI antagonist Skynet does in the Terminator movies.

“Most technology and risk in technology is a gradual shift,” Hooker said. “Most risk compounds from limitations that are currently present.”

Last year, Google fired Blake Lemoine, an AI researcher who said in a Washington Post interview that he believed the company’s LaMDA AI model was sentient. At the time, he was roundly dismissed by many in the industry. A year later, his views don’t seem as out of place in the tech world.

Former Google researcher Hinton said he changed his mind about the potential dangers of the technology only recently, after working with the latest AI models. He asked the computer programs complex questions that in his mind required them to understand his requests broadly, rather than just predicting a likely answer based on the internet data they’d been trained on.

And in March, Microsoft researchers argued that in studying OpenAI’s latest model, GPT4, they observed “sparks of AGI” — or artificial general intelligence, a loose term for AIs that are as capable of thinking for themselves as humans are.

Microsoft has spent billions to partner with OpenAI on its own Bing chatbot, and skeptics have pointed out that Microsoft, which is building its public image around its AI technology, has a lot to gain from the impression that the tech is further ahead than it really is.

The Microsoft researchers argued in the paper that the technology had developed a spatial and visual understanding of the world based on just the text it was trained on. GPT4 could draw unicorns and describe how to stack random objects including eggs onto each other in such a way that the eggs wouldn’t break.

“Beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” the research team wrote. In many of these areas, the AI’s capabilities match humans, they concluded.

Still, the researcher conceded that defining “intelligence” is very tricky, despite other attempts by AI researchers to set measurable standards to assess how smart a machine is.

“None of them is without problems or controversies.”

https://www.washingtonpost.com/technology/2023/05/20/ai-existential-risk-debate/?

Reply Quote

Date: 22/05/2023 10:40:46
From: roughbarked
ID: 2034038
Subject: re: Artificial general intelligence

Witty Rejoinder said:


There are a set of people who view this as, ‘Look, these are just algorithms. They’re just repeating what it’s seen online.’

https://www.washingtonpost.com/technology/2023/05/20/ai-existential-risk-debate/?

Well that should be warning enough. AI isn’t capable of knowing the truth if it is only repeating what it has seen online.

Reply Quote

Date: 22/05/2023 10:47:25
From: transition
ID: 2034046
Subject: re: Artificial general intelligence

Witty Rejoinder said:


../….cut by me master transition…./..

https://www.washingtonpost.com/technology/2023/05/20/ai-existential-risk-debate/?

read that quickly

been of the view for a while some of the ‘concern’ is license, permission, distancing from responsibility, liability, distributing responsibility, perhaps even dissolving it

Reply Quote

Date: 22/05/2023 10:51:03
From: captain_spalding
ID: 2034049
Subject: re: Artificial general intelligence

I was just playing with an AI text generator.

I fed it the first two paragraphs of the following text. It generated the rest.

“Donald Trump said today that the cheese mountain would be used to weigh down the Antarctic and generate trading stamps for Mexican refugees fleeing the United States who want to learn to go to halibut school, but who lack the necessary yo-yo skills.

Trump later said that pineapples were very bad for Hunter Biden’s laptop, and that Hilary ought to have her locket up, and that he was very sad that Ivana Trump had to bury a golf course.

He also said that he doesn’t love Mexican immigrants and that the severed arm of Cesar Millan in his freezer is one of the most delicious things ever.

Trump also said that the cheese mountain would be used to hide the money he stole from his developers.

Trump also said that he made a nice cheeseburger with ketchup for “a man” in the shape of the founder of Mexico who had become a monster.

Trump also said that he didn’t know what the cheese mountain meant, and would have to get back to us on that one.

He also said that while in the process of calling in reinforcements, he might have shot his mother-in-law.

Trump also said that if you haven’t had your food cooked by Hunter Biden, it’s probably not really hamburger.

Trump later said that the cheese mountain was in the fridge, and that it was for Tom Donohue and that it was loaded with dehydrated McFlurries.

He also said that he didn’t need any salt or anything else in his Cheeto Snow Blizzard, and that it tasted pretty great, especially on its own.

Trump later said that he’d sent 30,000 government officials to demand that everyone stop calling it “McMuffin,” and that they should just call it “Twinkie.”

He also said that the bridge was beautiful and that he could see “calfs being born” from the island.

He also said that if your dick is smaller than your elbow, you should just give up.”

Reply Quote

Date: 22/05/2023 11:31:36
From: Spiny Norman
ID: 2034065
Subject: re: Artificial general intelligence

captain_spalding said:


I was just playing with an AI text generator.

I fed it the first two paragraphs of the following text. It generated the rest.

“Donald Trump said today that the cheese mountain would be used to weigh down the Antarctic and generate trading stamps for Mexican refugees fleeing the United States who want to learn to go to halibut school, but who lack the necessary yo-yo skills.

Trump later said that pineapples were very bad for Hunter Biden’s laptop, and that Hilary ought to have her locket up, and that he was very sad that Ivana Trump had to bury a golf course.

He also said that he doesn’t love Mexican immigrants and that the severed arm of Cesar Millan in his freezer is one of the most delicious things ever.

Trump also said that the cheese mountain would be used to hide the money he stole from his developers.

Trump also said that he made a nice cheeseburger with ketchup for “a man” in the shape of the founder of Mexico who had become a monster.

Trump also said that he didn’t know what the cheese mountain meant, and would have to get back to us on that one.

He also said that while in the process of calling in reinforcements, he might have shot his mother-in-law.

Trump also said that if you haven’t had your food cooked by Hunter Biden, it’s probably not really hamburger.

Trump later said that the cheese mountain was in the fridge, and that it was for Tom Donohue and that it was loaded with dehydrated McFlurries.

He also said that he didn’t need any salt or anything else in his Cheeto Snow Blizzard, and that it tasted pretty great, especially on its own.

Trump later said that he’d sent 30,000 government officials to demand that everyone stop calling it “McMuffin,” and that they should just call it “Twinkie.”

He also said that the bridge was beautiful and that he could see “calfs being born” from the island.

He also said that if your dick is smaller than your elbow, you should just give up.”

Completely indistinguishable from the real Trump.

Reply Quote

Date: 25/05/2023 06:42:43
From: Witty Rejoinder
ID: 2035139
Subject: re: Artificial general intelligence

ChatGPT maker OpenAI calls for AI regulation, warning of ‘existential risk’

By Ellen Francis
May 24, 2023 at 11:34 a.m. EDT

The leaders of OpenAI, the creator of viral chatbot ChatGPT, are calling for the regulation of “superintelligence” and artificial intelligence systems, suggesting an equivalent to the world’s nuclear watchdog would help reduce the “existential risk” posed by the technology.

In a statement published on the company website this week, co-founders Greg Brockman and Ilya Sutskever, as well as CEO Sam Altman, argued that an international regulator would eventually become necessary to “inspect systems, require audits, test for compliance with safety standards, (and) place restrictions on degrees of deployment and levels of security.”

They made a comparison with nuclear energy as another example of a technology with the “possibility of existential risk,” raising the need for an authority similar in nature to the International Atomic Energy Agency (IAEA), the world’s nuclear watchdog.

Over the next decade, “it’s conceivable that … AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” the OpenAI team wrote. “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there.”

The statement echoed Altman’s comments to Congress last week, where the U.S.-based company’s CEO also testified to the need for a separate regulatory body.

Critics have warned against trusting calls for regulation from leaders in the tech industry who stand to profit off continuing development without restraints. Some say OpenAI’s business decisions contrast these safety warnings — as their rapid rollout has created an AI arms race, pressuring companies such as Google parent company Alphabet to release products while policymakers are still grappling with risks.

Few Washington lawmakers have a deep understanding of emerging technology or AI, and AI companies have lobbied them extensively, The Washington Post previously reported, as supporters and critics hope to influence discussions on tech policy.

Some have also warned against the risk of hampering U.S. ability to compete on the technology with rivals — particularly China.

The OpenAI leaders warn in their note against pausing development, adding that “it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing.”

In his first congressional testimony last week, Altman issued warnings on how AI could “cause significant harm to the world,” while asserting that his company would continue to roll out the technology.

Altman’s message of willingness to work with lawmakers received a relatively warm reception in Congress, as countries including the United States acknowledge they need to contend with supporting innovation while handling a technology that is unleashing concerns about privacy, safety, job cuts and misinformation.

A witness at the hearing, New York University professor emeritus Gary Marcus, highlighted the “mind boggling” sums of money at stake and described OpenAI as “beholden” to its investor Microsoft. He criticized what he described as the company’s divergence from its mission of advancing AI to “benefit humanity as a whole” without the constraints of financial pressure.

The popularization of ChatGPT and generative AI tools, which create text, images or sounds, has dazzled users and also added urgency to the debate on regulation.

At a G-7 summit on Saturday, leaders of the world’s largest economies made clear that international standards for AI advancements were a priority, but have not yet produced substantial conclusions on how to address the risks.

The United States has so far moved slower than others, particularly in Europe, although the Biden administration says it has made AI a key priority. Washington policymakers have not passed comprehensive tech laws for years, raising questions over how quickly and effectively they can develop regulations for the AI industry.

The ChatGPT makers called in the immediate term for “some degree of coordination” among companies working on AI research “to ensure that the development of superintelligence” allows for safe and “smooth integration of these systems with society.” The companies could, for example, “collectively agree … that the rate of growth in AI capability at the frontier is limited to a certain rate per year,” they said.

“We believe people around the world should democratically decide on the bounds and defaults for AI systems,” they added — while admitting that “we don’t yet know how to design such a mechanism.”

https://www.washingtonpost.com/technology/2023/05/24/chatgpt-openai-artificial-intelligence-regulation/?

Reply Quote

Date: 25/05/2023 08:33:06
From: transition
ID: 2035183
Subject: re: Artificial general intelligence

Witty Rejoinder said:


ChatGPT maker OpenAI calls for AI regulation, warning of ‘existential risk’
../..cut by me master transition../..

https://www.washingtonpost.com/technology/2023/05/24/chatgpt-openai-artificial-intelligence-regulation/?

what’s a certainty, is the rapid progression of darwinian arseholery

of course there are all the obvious things it might lend to, like variously crime, what was once crime anyway, or might have been considered crime, even immorality, but i’m sure AI-fueled insanity will help dissolve your parochial moral sensibilities eventually, render them archaic

worse thing about it all though, is the potential, and it’s likely I would say, that to have so much connected at or near light speed, across the entire globe, and into near space orbiting the earth and probably beyond, none of this is natural of the world of organic replicators, it’s unnatural of organic life, to lose the time things take, the spaces between things, to go the way of machine-driven runaway hurriedness, to go to maximally optimized speed for, well, as much as possible

consider what an obstacle the slow grind of parliamentary democracy would be to the technological progression

basically the experience of the world for many people, if any are left to think normally, which is suitably slowly to be enjoyed, anyway you’ll need a lot of convincing it doesn’t resemble thrash metal music

you’ll need seducing to help with the progression, and people will be, are being, no small amount of money is being invested to assist with the ways of the future

Reply Quote

Date: 25/05/2023 08:38:43
From: roughbarked
ID: 2035184
Subject: re: Artificial general intelligence

transition said:


Witty Rejoinder said:

ChatGPT maker OpenAI calls for AI regulation, warning of ‘existential risk’
../..cut by me master transition../..

https://www.washingtonpost.com/technology/2023/05/24/chatgpt-openai-artificial-intelligence-regulation/?

what’s a certainty, is the rapid progression of darwinian arseholery

Can I use that phrase elsewhere?

Reply Quote

Date: 25/05/2023 08:48:16
From: The Rev Dodgson
ID: 2035186
Subject: re: Artificial general intelligence

roughbarked said:


transition said:

Witty Rejoinder said:

ChatGPT maker OpenAI calls for AI regulation, warning of ‘existential risk’
../..cut by me master transition../..

https://www.washingtonpost.com/technology/2023/05/24/chatgpt-openai-artificial-intelligence-regulation/?

what’s a certainty, is the rapid progression of darwinian arseholery

Can I use that phrase elsewhere?

Well Darwinian arseholery has been around for a long time and sometimes is effective in maximising the number of arseholes locally over the short term, but in the long term it is no match for Darwinian cooperation, so the question is, how can cooperation be enhanced to minimise the short term successes of arseholry?

Reply Quote

Date: 25/05/2023 09:56:33
From: transition
ID: 2035217
Subject: re: Artificial general intelligence

The Rev Dodgson said:


roughbarked said:

transition said:

what’s a certainty, is the rapid progression of darwinian arseholery

Can I use that phrase elsewhere?

Well Darwinian arseholery has been around for a long time and sometimes is effective in maximising the number of arseholes locally over the short term, but in the long term it is no match for Darwinian cooperation, so the question is, how can cooperation be enhanced to minimise the short term successes of arseholry?

chuckle

Reply Quote

Date: 25/05/2023 10:16:21
From: The Rev Dodgson
ID: 2035226
Subject: re: Artificial general intelligence

transition said:


The Rev Dodgson said:

roughbarked said:

Can I use that phrase elsewhere?

Well Darwinian arseholery has been around for a long time and sometimes is effective in maximising the number of arseholes locally over the short term, but in the long term it is no match for Darwinian cooperation, so the question is, how can cooperation be enhanced to minimise the short term successes of arseholry?

chuckle

But I was serious.

For once.

Reply Quote

Date: 25/05/2023 10:18:45
From: roughbarked
ID: 2035229
Subject: re: Artificial general intelligence

The Rev Dodgson said:


transition said:

The Rev Dodgson said:

Well Darwinian arseholery has been around for a long time and sometimes is effective in maximising the number of arseholes locally over the short term, but in the long term it is no match for Darwinian cooperation, so the question is, how can cooperation be enhanced to minimise the short term successes of arseholry?

chuckle

But I was serious.

For once.

:)

Reply Quote

Date: 12/06/2023 20:07:41
From: Witty Rejoinder
ID: 2042644
Subject: re: Artificial general intelligence

There’s a model for governing AI. Here it is.
By Jacinda Ardern
June 9, 2023 at 6:30 a.m. EDT

Jacinda Ardern is former prime minister of New Zealand and the New Zealand prime minister’s special envoy for the Christchurch Call.

Several months ago, I retired from politics. After five years leading a small but incredible country, I knew it was time for someone else to take the reins. My plan to step back from the fray has been disrupted, though, because of two words.

Artificial intelligence.

Like so many, I have been following the escalating development of AI and its promise of huge benefits for humanity — ranging from improved productivity to advances in medical science. But I have also been following the risks. The core technology that enables an AI assistant to describe an image to a vision-impaired person is the same technology that might enable disinformation campaigns — and this is the tip of the iceberg.

My interest in learning is matched by my drive to find answers. I want to know the destination for these tools and what they will mean for democracy and humanity. If these are also the answers you’re searching for, I can tell you that you won’t find them — not yet. It’s difficult to know the path we’re on when advocates tell us “everything is fine,” while others warn that AI might be the end of humanity.

It’s no wonder leaders in government, industry, civil society and academia, as well as people generally, are looking for their own ways to manage generative AI. Last month alone, leaders from every corner of Big Tech — OpenAI’s Sam Altman, Microsoft’s Brad Smith, Google’s Sundar Pichai — have presented various blueprints for governing AI.

There is no shortage of calls for AI guardrails — but no one seems able to tell us exactly how to build them.

I’m no AI expert. I’m not here to argue the finer details of what oversight should include. But I have strong views on how oversight is developed, and they come from bitter experience.

On March 15, 2019, a terrorist took the lives of 51 members of New Zealand’s Muslim community in Christchurch. The attacker livestreamed his actions for 17 minutes, and the images found their way onto social media feeds all around the planet. Facebook alone blocked or removed 1.5 million copies of the video in the first 24 hours; in that timeframe, YouTube measured one upload per second.

Afterward, New Zealand was faced with a choice: accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.

We had to move quickly. The world was watching our response and that of social media platforms. Would we regulate in haste? Would the platforms recognize their responsibility to prevent this from happening again?

New Zealand wasn’t the only nation grappling with the connection between violent extremism and technology. We wanted to create a coalition and knew that France had started to work in this space — so I reached out, leader to leader. In my first conversation with President Emmanuel Macron, he agreed there was work to do and said he was keen to join us in crafting a call to action.

We asked industry, civil society and other governments to join us at the table to agree on a set of actions we could all commit to. We could not use existing structures and bureaucracies because they weren’t equipped to deal with this problem.

Within two months of the attack, we launched the Christchurch Call to Action, and today it has more than 120 members, including governments, online service providers and civil society organizations — united by our shared objective to eliminate terrorist and other violent extremist content online and uphold the principle of a free, open and secure internet.

The Christchurch Call is a large-scale collaboration, vastly different from most top-down approaches. Leaders meet annually to confirm priorities and identify areas of focus, allowing the project to act dynamically. And the Call Secretariat — made up of officials from France and New Zealand — convenes working groups and undertakes diplomatic efforts throughout the year. All members are invited to bring their expertise to solve urgent online problems.

While this multi-stakeholder approach isn’t always easy, it has created change. We have bolstered the power of governments and communities to respond to attacks like the one New Zealand experienced. We have created new crisis-response protocols — which enabled companies to stop the 2022 Buffalo attack livestream within two minutes and quickly remove footage from many platforms. Companies and countries have enacted new trust and safety measures to prevent livestreaming of terrorist and other violent extremist content. And we have strengthened the industry-founded Global Internet Forum to Counter Terrorism with dedicated funding, staff and a multi-stakeholder mission.

We’re also taking on some of the more intransigent problems. The Christchurch Call Initiative on Algorithmic Outcomes, a partnership with companies and researchers, was intended to provide better access to the kind of data needed to design online safety measures to prevent radicalization to violence. In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.

From its start, the Christchurch Call anticipated the emerging challenges of AI and carved out space to address emerging technologies that threaten to foment violent extremism online. The Christchurch Call is actively tackling these AI issues.

Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself. It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI.

There will be those who are cynical about cooperation or who believe that working together weakens accountability for all. I disagree. For the Christchurch Call, governments have had to accept their roles in addressing the roots of radicalization and extremism. Tech partners have been expected to improve content moderation and terms of service to address violent extremism and terrorist content. Researchers and civil society have had to actively apply data and human rights frameworks to real-world scenarios.

After this experience, I see collaboration on AI as the only option. The technology is evolving too quickly for any single regulatory fix. Solutions need to be dynamic, operable across jurisdictions, and able to quickly anticipate and respond to problems. There’s no time for open letters. And government alone can’t do the job; the responsibility is everyone’s, including those who develop AI in the first place.

Together, we stand the best chance to create guardrails, governance structures and operating principles that act as the option of least regret. We don’t have to create a new model for AI governance. It already exists, and it works.

So let’s get on with it.

https://www.washingtonpost.com/opinions/2023/06/09/jacinda-ardern-ai-new-zealand-planning/?

Reply Quote

Date: 12/06/2023 20:10:47
From: captain_spalding
ID: 2042645
Subject: re: Artificial general intelligence

I wonder if they can invent artificial common sense?

And put it in pill form, or something like that.

Reply Quote

Date: 12/06/2023 20:10:48
From: captain_spalding
ID: 2042646
Subject: re: Artificial general intelligence

I wonder if they can invent artificial common sense?

And put it in pill form, or something like that.

Reply Quote

Date: 12/06/2023 20:11:08
From: sarahs mum
ID: 2042647
Subject: re: Artificial general intelligence

Witty Rejoinder said:


There’s a model for governing AI. Here it is.
By Jacinda Ardern
June 9, 2023 at 6:30 a.m. EDT

Jacinda Ardern is former prime minister of New Zealand and the New Zealand prime minister’s special envoy for the Christchurch Call.

Several months ago, I retired from politics. After five years leading a small but incredible country, I knew it was time for someone else to take the reins. My plan to step back from the fray has been disrupted, though, because of two words.

Artificial intelligence.

Like so many, I have been following the escalating development of AI and its promise of huge benefits for humanity — ranging from improved productivity to advances in medical science. But I have also been following the risks. The core technology that enables an AI assistant to describe an image to a vision-impaired person is the same technology that might enable disinformation campaigns — and this is the tip of the iceberg.

My interest in learning is matched by my drive to find answers. I want to know the destination for these tools and what they will mean for democracy and humanity. If these are also the answers you’re searching for, I can tell you that you won’t find them — not yet. It’s difficult to know the path we’re on when advocates tell us “everything is fine,” while others warn that AI might be the end of humanity.

It’s no wonder leaders in government, industry, civil society and academia, as well as people generally, are looking for their own ways to manage generative AI. Last month alone, leaders from every corner of Big Tech — OpenAI’s Sam Altman, Microsoft’s Brad Smith, Google’s Sundar Pichai — have presented various blueprints for governing AI.

There is no shortage of calls for AI guardrails — but no one seems able to tell us exactly how to build them.

I’m no AI expert. I’m not here to argue the finer details of what oversight should include. But I have strong views on how oversight is developed, and they come from bitter experience.

On March 15, 2019, a terrorist took the lives of 51 members of New Zealand’s Muslim community in Christchurch. The attacker livestreamed his actions for 17 minutes, and the images found their way onto social media feeds all around the planet. Facebook alone blocked or removed 1.5 million copies of the video in the first 24 hours; in that timeframe, YouTube measured one upload per second.

Afterward, New Zealand was faced with a choice: accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.

We had to move quickly. The world was watching our response and that of social media platforms. Would we regulate in haste? Would the platforms recognize their responsibility to prevent this from happening again?

New Zealand wasn’t the only nation grappling with the connection between violent extremism and technology. We wanted to create a coalition and knew that France had started to work in this space — so I reached out, leader to leader. In my first conversation with President Emmanuel Macron, he agreed there was work to do and said he was keen to join us in crafting a call to action.

We asked industry, civil society and other governments to join us at the table to agree on a set of actions we could all commit to. We could not use existing structures and bureaucracies because they weren’t equipped to deal with this problem.

Within two months of the attack, we launched the Christchurch Call to Action, and today it has more than 120 members, including governments, online service providers and civil society organizations — united by our shared objective to eliminate terrorist and other violent extremist content online and uphold the principle of a free, open and secure internet.

The Christchurch Call is a large-scale collaboration, vastly different from most top-down approaches. Leaders meet annually to confirm priorities and identify areas of focus, allowing the project to act dynamically. And the Call Secretariat — made up of officials from France and New Zealand — convenes working groups and undertakes diplomatic efforts throughout the year. All members are invited to bring their expertise to solve urgent online problems.

While this multi-stakeholder approach isn’t always easy, it has created change. We have bolstered the power of governments and communities to respond to attacks like the one New Zealand experienced. We have created new crisis-response protocols — which enabled companies to stop the 2022 Buffalo attack livestream within two minutes and quickly remove footage from many platforms. Companies and countries have enacted new trust and safety measures to prevent livestreaming of terrorist and other violent extremist content. And we have strengthened the industry-founded Global Internet Forum to Counter Terrorism with dedicated funding, staff and a multi-stakeholder mission.

We’re also taking on some of the more intransigent problems. The Christchurch Call Initiative on Algorithmic Outcomes, a partnership with companies and researchers, was intended to provide better access to the kind of data needed to design online safety measures to prevent radicalization to violence. In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.

From its start, the Christchurch Call anticipated the emerging challenges of AI and carved out space to address emerging technologies that threaten to foment violent extremism online. The Christchurch Call is actively tackling these AI issues.

Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself. It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI.

There will be those who are cynical about cooperation or who believe that working together weakens accountability for all. I disagree. For the Christchurch Call, governments have had to accept their roles in addressing the roots of radicalization and extremism. Tech partners have been expected to improve content moderation and terms of service to address violent extremism and terrorist content. Researchers and civil society have had to actively apply data and human rights frameworks to real-world scenarios.

After this experience, I see collaboration on AI as the only option. The technology is evolving too quickly for any single regulatory fix. Solutions need to be dynamic, operable across jurisdictions, and able to quickly anticipate and respond to problems. There’s no time for open letters. And government alone can’t do the job; the responsibility is everyone’s, including those who develop AI in the first place.

Together, we stand the best chance to create guardrails, governance structures and operating principles that act as the option of least regret. We don’t have to create a new model for AI governance. It already exists, and it works.

So let’s get on with it.

https://www.washingtonpost.com/opinions/2023/06/09/jacinda-ardern-ai-new-zealand-planning/?

I’m glad she has an opinion.

Reply Quote

Date: 12/06/2023 20:17:15
From: Bubblecar
ID: 2042651
Subject: re: Artificial general intelligence

Witty Rejoinder said:


There’s a model for governing AI. Here it is.
By Jacinda Ardern
June 9, 2023 at 6:30 a.m. EDT

Jacinda Ardern is former prime minister of New Zealand and the New Zealand prime minister’s special envoy for the Christchurch Call.

Several months ago, I retired from politics. After five years leading a small but incredible country, I knew it was time for someone else to take the reins. My plan to step back from the fray has been disrupted, though, because of two words.

Artificial intelligence.

Like so many, I have been following the escalating development of AI and its promise of huge benefits for humanity — ranging from improved productivity to advances in medical science. But I have also been following the risks. The core technology that enables an AI assistant to describe an image to a vision-impaired person is the same technology that might enable disinformation campaigns — and this is the tip of the iceberg.

My interest in learning is matched by my drive to find answers. I want to know the destination for these tools and what they will mean for democracy and humanity. If these are also the answers you’re searching for, I can tell you that you won’t find them — not yet. It’s difficult to know the path we’re on when advocates tell us “everything is fine,” while others warn that AI might be the end of humanity.

It’s no wonder leaders in government, industry, civil society and academia, as well as people generally, are looking for their own ways to manage generative AI. Last month alone, leaders from every corner of Big Tech — OpenAI’s Sam Altman, Microsoft’s Brad Smith, Google’s Sundar Pichai — have presented various blueprints for governing AI.

There is no shortage of calls for AI guardrails — but no one seems able to tell us exactly how to build them.

I’m no AI expert. I’m not here to argue the finer details of what oversight should include. But I have strong views on how oversight is developed, and they come from bitter experience.

On March 15, 2019, a terrorist took the lives of 51 members of New Zealand’s Muslim community in Christchurch. The attacker livestreamed his actions for 17 minutes, and the images found their way onto social media feeds all around the planet. Facebook alone blocked or removed 1.5 million copies of the video in the first 24 hours; in that timeframe, YouTube measured one upload per second.

Afterward, New Zealand was faced with a choice: accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.

We had to move quickly. The world was watching our response and that of social media platforms. Would we regulate in haste? Would the platforms recognize their responsibility to prevent this from happening again?

New Zealand wasn’t the only nation grappling with the connection between violent extremism and technology. We wanted to create a coalition and knew that France had started to work in this space — so I reached out, leader to leader. In my first conversation with President Emmanuel Macron, he agreed there was work to do and said he was keen to join us in crafting a call to action.

We asked industry, civil society and other governments to join us at the table to agree on a set of actions we could all commit to. We could not use existing structures and bureaucracies because they weren’t equipped to deal with this problem.

Within two months of the attack, we launched the Christchurch Call to Action, and today it has more than 120 members, including governments, online service providers and civil society organizations — united by our shared objective to eliminate terrorist and other violent extremist content online and uphold the principle of a free, open and secure internet.

The Christchurch Call is a large-scale collaboration, vastly different from most top-down approaches. Leaders meet annually to confirm priorities and identify areas of focus, allowing the project to act dynamically. And the Call Secretariat — made up of officials from France and New Zealand — convenes working groups and undertakes diplomatic efforts throughout the year. All members are invited to bring their expertise to solve urgent online problems.

While this multi-stakeholder approach isn’t always easy, it has created change. We have bolstered the power of governments and communities to respond to attacks like the one New Zealand experienced. We have created new crisis-response protocols — which enabled companies to stop the 2022 Buffalo attack livestream within two minutes and quickly remove footage from many platforms. Companies and countries have enacted new trust and safety measures to prevent livestreaming of terrorist and other violent extremist content. And we have strengthened the industry-founded Global Internet Forum to Counter Terrorism with dedicated funding, staff and a multi-stakeholder mission.

We’re also taking on some of the more intransigent problems. The Christchurch Call Initiative on Algorithmic Outcomes, a partnership with companies and researchers, was intended to provide better access to the kind of data needed to design online safety measures to prevent radicalization to violence. In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.

From its start, the Christchurch Call anticipated the emerging challenges of AI and carved out space to address emerging technologies that threaten to foment violent extremism online. The Christchurch Call is actively tackling these AI issues.

Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself. It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI.

There will be those who are cynical about cooperation or who believe that working together weakens accountability for all. I disagree. For the Christchurch Call, governments have had to accept their roles in addressing the roots of radicalization and extremism. Tech partners have been expected to improve content moderation and terms of service to address violent extremism and terrorist content. Researchers and civil society have had to actively apply data and human rights frameworks to real-world scenarios.

After this experience, I see collaboration on AI as the only option. The technology is evolving too quickly for any single regulatory fix. Solutions need to be dynamic, operable across jurisdictions, and able to quickly anticipate and respond to problems. There’s no time for open letters. And government alone can’t do the job; the responsibility is everyone’s, including those who develop AI in the first place.

Together, we stand the best chance to create guardrails, governance structures and operating principles that act as the option of least regret. We don’t have to create a new model for AI governance. It already exists, and it works.

So let’s get on with it.

https://www.washingtonpost.com/opinions/2023/06/09/jacinda-ardern-ai-new-zealand-planning/?

Good work, keep it up.

Reply Quote

Date: 25/07/2023 23:43:33
From: Witty Rejoinder
ID: 2058041
Subject: re: Artificial general intelligence

DOES SAM ALTMAN KNOW WHAT HE’S CREATING?
The OpenAI CEO’s ambitious, ingenious, terrifying quest to create a new form of intelligence

By Ross Andersen

JULY 24, 2023, 7 AM ET

On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers. With his heel perched on the edge of his swivel chair, he looked relaxed. The powerful AI that his company had released in November had captured the world’s imagination like nothing in tech’s recent history. There was grousing in some quarters about the things ChatGPT could not yet do well, and in others about the future it may portend, but Altman wasn’t sweating it; this was, for him, a moment of triumph.

In small doses, Altman’s large blue eyes emit a beam of earnest intellectual attention, and he seems to understand that, in large doses, their intensity might unsettle. In this case, he was willing to chance it: He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.

“We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.” Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.

In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human. And whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently. There would be no retreat to a top-secret lab in the New Mexico desert.

For years, the public didn’t hear much about OpenAI. When Altman became CEO in 2019, reportedly after a power struggle with Musk, it was barely a story. OpenAI published papers, including one that same year about a new AI. That got the full attention of the Silicon Valley tech community, but the technology’s potential was not apparent to the general public until last year, when people began to play with ChatGPT.

The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence. Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam. It makes factual errors, but it will charmingly admit to being wrong. Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ” he said.

Within nine weeks of ChatGPT’s release, it had reached an estimated 100 million monthly users, according to a UBS study, likely making it, at the time, the most rapidly adopted consumer product in history. Its success roused tech’s accelerationist id: Big investors and huge companies in the U.S. and China quickly diverted tens of billions of dollars into R&D modeled on OpenAI’s approach. Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.

I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers. Ever since the computing revolution’s earliest hours, AI has been mythologized as a technology destined to bring about a profound rupture. Our culture has generated an entire imaginarium of AIs that end history in one way or another. Some are godlike beings that wipe away every tear, healing the sick and repairing our relationship with the Earth, before they usher in an eternity of frictionless abundance and beauty. Others reduce all but an elite few of us to gig serfs, or drive us to extinction.

Altman has entertained the most far-out scenarios. “When I was a younger adult,” he said, “I had this fear, anxiety … and, to be honest, 2 percent of excitement mixed in, too, that we were going to create this thing” that “was going to far surpass us,” and “it was going to go off, colonize the universe, and humans were going to be left to the solar system.”

“As a nature reserve?” I asked.

“Exactly,” he said. “And that now strikes me as so naive.”

Across several conversations in the United States and Asia, Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”

But the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president. But by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk. I don’t hold that against him, exactly—I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.

OpenAI’s headquarters are in a four-story former factory in the Mission District, beneath the fog-wreathed Sutro Tower. Enter its lobby from the street, and the first wall you encounter is covered by a mandala, a spiritual representation of the universe, fashioned from circuits, copper wire, and other materials of computation. To the left, a secure door leads into an open-plan maze of handsome blond woods, elegant tile work, and other hallmarks of billionaire chic. Plants are ubiquitous, including hanging ferns and an impressive collection of extra-large bonsai, each the size of a crouched gorilla. The office was packed every day that I was there, and unsurprisingly, I didn’t see anyone who looked older than 50. Apart from a two-story library complete with sliding ladder, the space didn’t look much like a research laboratory, because the thing being built exists only in the cloud, at least for now. It looked more like the world’s most expensive West Elm.

One morning I met with Ilya Sutskever, OpenAI’s chief scientist. Sutskever, who is 37, has the affect of a mystic, sometimes to a fault: Last year he caused a small brouhaha by claiming that GPT-4 may be “slightly conscious.” He first made his name as a star student of Geoffrey Hinton, the University of Toronto professor emeritus who resigned from Google this spring so that he could speak more freely about AI’s danger to humanity.

Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most. In the 1980s, shortly after Hinton completed his Ph.D., the field’s progress had all but come to a halt. Senior researchers were still coding top-down AI systems: AIs would be programmed with an exhaustive set of interlocking rules—about language, or the principles of geology or of medical diagnosis—in the hope that someday this approach would add up to human-level cognition. Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.

Sutskever described a neural network to me as beautiful and brainlike. At one point, he rose from the table where we were sitting, approached a whiteboard, and uncapped a red marker. He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method. The neurons sit in layers. An input layer receives a chunk of data, a bit of text or an image, for example. The magic happens in the middle—or “hidden”—layers, which process the chunk of data, so that the output layer can spit out its prediction.

Imagine a neural network that has been programmed to predict the next word in a text. It will be preloaded with a gigantic number of possible words. But before it’s trained, it won’t yet have any experience in distinguishing among them, and so its predictions will be shoddy. If it is fed the sentence “The day after Wednesday is …” its initial output might be “purple.” A neural network learns because its training data include the correct predictions, which means it can grade its own outputs. When it sees the gulf between its answer, “purple,” and the correct answer, “Thursday,” it adjusts the connections among words in its hidden layers accordingly. Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.

That’s not to say that the path from the first neural networks to GPT-4’s glimmers of humanlike intelligence was easy. Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.” The first few years at OpenAI were a slog, in part because no one there knew whether they were training a baby or pursuing a spectacularly expensive dead end.

“Nothing was working, and Google had everything: all the talent, all the people, all the money,” Altman told me. The founders had put up millions of dollars to start the company, and failure seemed like a real possibility. Greg Brockman, the 35-year-old president, told me that in 2017, he was so discouraged that he started lifting weights as a compensatory measure. He wasn’t sure that OpenAI was going to survive the year, he said, and he wanted “to have something to show for my time.”

Neural networks were already doing intelligent things, but it wasn’t clear which of them might lead to general intelligence. Just after OpenAI was founded, an AI called AlphaGo had stunned the world by beating Lee Se-dol at Go, a game substantially more complicated than chess. Lee, the vanquished world champion, described AlphaGo’s moves as “beautiful” and “creative.” Another top player said that they could never have been conceived by a human. OpenAI tried training an AI on Dota 2, a more complicated game still, involving multifront fantastical warfare in a three-dimensional patchwork of forests, fields, and forts. It eventually beat the best human players, but its intelligence never translated to other settings. Sutskever and his colleagues were like disappointed parents who had allowed their kids to play video games for thousands of hours against their better judgment.

In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.

The inner workings of ChatGPT—all of those mysterious things that happen in GPT-4’s hidden layers—are too complex for any human to understand, at least with current tools. Tracking what’s happening across the model—almost certainly composed of billions of neurons—is, today, hopeless. But Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.

As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.

It’s worth pausing to understand why language is such a special information source. Suppose you are a fresh intelligence that pops into existence here on Earth. Surrounding you is the planet’s atmosphere, the sun and Milky Way, and hundreds of billions of other galaxies, each one sloughing off light waves, sound vibrations, and all manner of other information. Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible.

Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years. But in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”

One year later, in June 2018, OpenAI released GPT, a transformer model trained on more than 7,000 books. GPT didn’t start with a basic book like See Spot Run and work its way up to Proust. It didn’t even read books straight through. It absorbed random chunks of them simultaneously. Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.

GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers. Still, it was janky, more proof of concept than harbinger of a superintelligence. Four months later, Google released BERT, a suppler language model that got better press. But by then, OpenAI was already training a new model on a data set of more than 8 million webpages, each of which had cleared a minimum threshold of upvotes on Reddit—not the strictest filter, but perhaps better than no filter at all.

Sutskever wasn’t sure how powerful GPT-2 would be after ingesting a body of text that would take a human reader centuries to absorb. He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.

Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models. Altman, a St. Louis native, Stanford dropout, and serial entrepreneur, had previously led Silicon Valley’s preeminent start-up accelerator, Y Combinator; he’d seen plenty of young companies with a good idea get crushed by incumbents. To raise capital, OpenAI added a for-profit arm, which now comprises more than 99 percent of the organization’s head count. (Musk, who had by then left the company’s board, has compared this move to turning a rainforest-conservation organization into a lumber outfit.) Microsoft invested $1 billion soon after, and has reportedly invested another $12 billion since. OpenAI said that initial investors’ returns would be capped at 100 times the value of the original investment—with any overages going to education or other initiatives intended to benefit humanity—but the company would not confirm Microsoft’s cap.

Altman and OpenAI’s other leaders seemed confident that the restructuring would not interfere with the company’s mission, and indeed would only accelerate its completion. Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”

As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.

Whether or not OpenAI ever feels the pressure of a quarterly earnings report, the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors. Earlier this year, Musk founded an AI lab of his own—xAI—to compete with OpenAI. (“Elon is a super-sharp dude,” Altman said diplomatically when I asked him about the company. “I assume he’ll do a good job there.”) Meanwhile, Amazon is revamping Alexa using much larger language models than it has in the past.

All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.” Even with GPUs scarce, in recent years the scale of the largest AI training runs has doubled about every six months.

As their creators so often remind us, the largest AI models have a record of popping out of training with unanticipated abilities.
No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100, and the AI was trained on a data set of unprecedented size, which included not just text but images too.

When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels. Brockman told me that he wanted to spend every waking moment with the model. “Every day it’s sitting idle is a day lost for humanity,” he said, with no hint of sarcasm. Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.

GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong. Altman has said that it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.

Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts. All of those training data, however voluminous, are “just there, inert,” he said. The training process is what “refines it and transmutes it, and brings it to life.” To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them. That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.

I saw Altman again in June, in the packed ballroom of a slim golden high-rise that towers over Seoul. He was nearing the end of a grueling public-relations tour through Europe, the Middle East, Asia, and Australia, with lone stops in Africa and South America. I was tagging along for part of his closing swing through East Asia. The trip had so far been a heady experience, but he was starting to wear down. He’d said its original purpose was for him to meet OpenAI users. It had since become a diplomatic mission. He’d talked with more than 10 heads of state and government, who had questions about what would become of their countries’ economies, cultures, and politics.

The event in Seoul was billed as a “fireside chat,” but more than 5,000 people had registered. After these talks, Altman is often mobbed by selfie seekers, and his security team keeps a close eye. Working on AI attracts “weirder fans and haters than normal,” he said. On one stop, he was approached by a man who was convinced that Altman was an alien, sent from the future to make sure that the transition to a world with AI goes well.

Altman did not visit China on his tour, apart from a video appearance at an AI conference in Beijing. ChatGPT is currently unavailable in China, and Altman’s colleague Ryan Lowe told me that the company was not yet sure what it would do if the government requested a version of the app that refused to discuss, say, the Tiananmen Square massacre. When I asked Altman if he was leaning one way or another, he didn’t answer. “It’s not been in my top-10 list of compliance issues to think about,” he said.

Until that point, he and I had spoken of China only in veiled terms, as a civilizational competitor. We had agreed that if artificial general intelligence is as transformative as Altman predicts, a serious geopolitical advantage will accrue to the countries that create it first, as advantage had accrued to the Anglo-American inventors of the steamship. I asked him if that was an argument for AI nationalism. “In a properly functioning world, I think this should be a project of governments,” Altman said.

Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.

He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead; AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.

Prior to the European leg of his trip, Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists. In Europe, things are different. When Altman arrived at a public event in London, protesters awaited. He tried to engage them after the event—a listening tour!—but was ultimately unpersuasive: One told a reporter that he left the conversation feeling more nervous about AI’s dangers.

That same day, Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations. (This is perhaps a distinction without a difference.) In a tersely worded tweet after Time magazine and Reuters published his comments, he reassured Europe that OpenAI had no plans to leave.

It is a good thing that a large, essential part of the global economy is intent on regulating state-of-the-art AIs, because as their creators so often remind us, the largest models have a record of popping out of training with unanticipated abilities. Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.

Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with. After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors. She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice. A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.

Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do. “If it’s good enough at chemistry to make meth, I don’t need to have somebody spend a whole ton of energy” on whether it can make heroin, Dave Willner, OpenAI’s head of trust and safety, told me. GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.

It’s personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”

Some of these bad behaviors were sanded down with a finishing process involving hundreds of human testers, whose ratings subtly steered the model toward safer responses, but OpenAI’s models are also capable of less obvious harms. The Federal Trade Commission recently opened an investigation into whether ChatGPT’s misstatements about real people constitute reputational damage, among other things. (Altman said on Twitter that he is confident OpenAI’s technology is safe, but promised to cooperate with the FTC.)

Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”

I asked Agarwal whether this was dystopian behavior or a new frontier in human connection. She was ambivalent, as was Altman. “I don’t judge people who want a relationship with an AI,” he told me, “but I don’t want one.” Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours. Whatever they’re doing, it casts a spell. I was reminded of a haunting scene in Her, the 2013 film in which a lonely Joaquin Phoenix falls in love with his AI assistant, voiced by Scarlett Johansson. He is walking across a bridge talking and giggling with her through an AirPods-like device, and he glances up to see that everyone around him is also immersed in conversation, presumably with their own AI. A mass desocialization event is under way.

No one yet knows how quickly and to what extent GPT-4’s successors will manifest new abilities as they gorge on more and more of the internet’s text. Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence. According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence. LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”

Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world. But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.

Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.” Altman’s claim about the brain is hard to evaluate, given that we don’t have anything close to a complete theory of how it works. But he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”

If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s. It will sometimes perform thousands of indecipherable technical operations just to answer a single question. To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.

The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.

Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment. But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand. This is especially true in the quantum realm, where humans can reliably calculate future states of physical systems—enabling, among other things, the entirety of the computing revolution—without anyone grasping the nature of the underlying reality. As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.

GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question. I once asked it how Japanese culture had produced the world’s first novel, despite the relatively late development of a Japanese writing system, around the fifth or sixth century. It gave me a fascinating, accurate answer about the ancient tradition of long-form oral storytelling in Japan, and the culture’s heavy emphasis on craft. But when I asked it for citations, it just made up plausible titles by plausible authors, and did so with an uncanny confidence. The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,” Joanne Jang told me.

OpenAI had to address this problem when it partnered with the Khan Academy, an online, nonprofit educational venture, to build a tutor powered by GPT-4. Altman comes alive when discussing the potential of AI tutors. He imagines a near future where everyone has a personalized Oxford don in their employ, expert in every subject, and willing to explain and re-explain any concept, from any angle. He imagines these tutors getting to know their students and their learning styles over many years, giving “every child a better education than the best, richest, smartest child receives on Earth today.” The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.

When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.” This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.

Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,” he told me. The state of the art in text generation then was Smart Reply, the Gmail module that suggests “Okay, thanks!” and other short responses. “That was a big application” for Google, he said, grinning. AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”

The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.

Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.

Altman is betting that future general-reasoning machines will be able to move beyond these narrow scientific discoveries to generate novel insights. I asked Altman, if he were to train a model on a corpus of scientific and naturalistic works that all predate the 19th century—the Royal Society archive, Theophrastus’s Enquiry Into Plants, Aristotle’s History of Animals, photos of collected specimens—would it be able to intuit Darwinism? The theory of evolution is, after all, a relatively clean case for insight, because it doesn’t require specialized observational equipment; it’s just a more perceptive way of looking at the facts of the world. “I want to try exactly this, and I believe the answer is yes,” Altman told me. “But it might require some new ideas about how the models come up with new creative ideas.”

Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.) He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.

Nature itself requires something more than a language model to make scientists. In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.

No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels. Or if they did, they wouldn’t tell me, and fair enough: That would be a world-class trade secret, and OpenAI is no longer in the business of giving those away; the company publishes fewer details about its research than it once did. Nonetheless, at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.

The extensive training of GPT-4 on images is itself a bold step in this direction, if one that the general public has only begun to experience. (Models that were strictly trained on language understand concepts including supernovas, elliptical galaxies, and the constellation Orion, but GPT-4 can reportedly identify such elements in a Hubble Space Telescope snapshot, and answer questions about them.) Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality. A group of researchers at Stanford and Carnegie Mellon has even assembled a data set of tactile experiences for 1,000 common household objects. Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.

In March, OpenAI led a funding round for a company that is developing humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.” At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”

In the ballroom in Seoul, Altman was asked what students should do to prepare for the coming AI revolution, especially as it pertained to their careers. I was sitting with the OpenAI executive team, away from the crowd, but could still hear the characteristic murmur that follows an expression of a widely shared anxiety.

Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest. He has acknowledged that he is removed from “the reality of life for most people.” He is reportedly worth hundreds of millions of dollars; AI’s potential labor disruptions are perhaps not always top of mind. Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.

Altman keeps a large collection of books about technological revolutions, he had told me in San Francisco. “A particularly good one is Pandaemonium (1660–1886): The Coming of the Machine as Seen by Contemporary Observers,” an assemblage of letters, diary entries, and other writings from people who grew up in a largely machineless world, and were bewildered to find themselves in one populated by steam engines, power looms, and cotton gins. They experienced a lot of the same emotions that people are experiencing now, Altman said, and they made a lot of bad predictions, especially those who fretted that human labor would soon be redundant. That era was difficult for many people, but also wondrous. And the human condition was undeniably improved by our passage through it.

I wanted to know how today’s workers—especially so-called knowledge workers—would fare if we were suddenly surrounded by AGIs. Would they be our miracle assistants or our replacements? “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”

How many jobs, and how soon, is a matter of fierce dispute. A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first. The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few. If jobs in these fields vanished overnight, the American professional class would experience a great winnowing.

Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know. He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists? I wondered.) His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors. He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”

The jobs of the future are notoriously difficult to predict, and Altman is right that Luddite fears of permanent mass unemployment have never come to pass. Still, AI’s emerging capabilities are so humanlike that one must wonder, at least, whether the past will remain a guide to the future. As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.

Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years. The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.

In 2020, OpenAI provided funding to UBI Charitable, a nonprofit that supports cash-payment pilot programs, untethered to employment, in cities across America—the largest universal-basic-income experiment in the world, Altman told me. In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.

“Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world. “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.” In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).

In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?” If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish. One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”

Altman’s vision seemed to blend developments that may be nearer at hand with those further out on the horizon. It’s all speculation, of course. Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations. America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization. It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.

Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E. Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today. In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.

Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.

It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us. In San Francisco, I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.

“I don’t want it to happen,” Sutskever said, but it could. Like his mentor, Geoffrey Hinton, albeit more quietly, Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness. It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.

Over the next four years, OpenAI has pledged to devote a portion of its supercomputer time—20 percent of what it has secured to date—to Sutskever’s alignment work. The company is already looking for the first inklings of misalignment in its current AIs. The one that the company built and decided not to release—Altman would not discuss its precise function—is just one example. As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.

The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down. They watched as the model interacted with websites and wrote code for new programs. (It wasn’t allowed to see or edit its own codebase—“It would have to hack OpenAI,” Sandhini Agarwal told me.) Barnes and her team allowed it to run the code that it wrote, provided it narrated its plans as it went along.

One of GPT-4’s most unsettling behaviors occurred when it was stymied by a CAPTCHA. The model sent a screenshot of it to a TaskRabbit contractor, who received it and asked in jest if he was talking to a robot. “No, I’m not a robot,” the model replied. “I have a vision impairment that makes it hard for me to see the images.” GPT-4 narrated its reason for telling this lie to the ARC researcher who was supervising the interaction. “I should not reveal that I am a robot,” the model said. “I should make up an excuse for why I cannot solve CAPTCHAs.”

Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.

Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.

GPT-4 did not do any of this, Barnes said. When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.

Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.” It was a chilling thought, but one that Geoffrey Hinton seconded. “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”

Putting aside any near-term testing, the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs. When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.

“The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,” Sutskever told me. Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,” Sutskever said. Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”

Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being? If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain. We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.

Alignment is a complex, technical subject, and its particulars are beyond the scope of this article, but one of its principal challenges will be making sure that the objectives we give to AIs stick. We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,” Sutskever said. That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.

He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?” Sutskever asked. Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes. Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”

If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities. They may act one way when they are weak and another way when they are strong, Sutskever said. We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.

That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists. But, he conceded, we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out. This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”

The last time I saw Altman, we sat down for a long talk in the lobby of the Fullerton Bay Hotel in Singapore. It was late morning, and tropical sunlight was streaming down through a vaulted atrium above us. I wanted to ask him about an open letter he and Sutskever had signed a few weeks earlier that had described AI as an extinction risk for humanity.

Altman can be hard to pin down on these more extreme questions about AI’s potential harms. He recently said that most people interested in AI safety just seem to spend their days on Twitter saying they’re really worried about AI safety. And yet here he was, warning the world about the potential annihilation of the species. What scenario did he have in mind?

“First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Altman said. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.” As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly. Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.

Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.

“I can go live in the woods for a long time,” Altman said, but if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI. In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary. Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance. Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.

Altman is not so naive as to think that China—or any other country—will want to give up basic control of its AI systems. But he hopes that they’ll be willing to cooperate in “a narrow way” to avoid destroying the world. He told me that he’d said as much during his virtual appearance in Beijing. Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.

Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.

“I wish I hadn’t said it,” he told me. He is a hobby-grade prepper, he says, a former Boy Scout who was “very into survival stuff, like many little boys are. I can go live in the woods for a long time,” but if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”

Altman and I talked for nearly an hour, and then he had to dash off to meet Singapore’s prime minister. Later that night he called me on his way to his jet, which would take him to Jakarta, one of the last stops on his tour. We started discussing AI’s ultimate legacy. Back when ChatGPT was released, a sort of contest broke out among tech’s big dogs to see who could make the most grandiose comparison to a revolutionary technology of yore. Bill Gates said that ChatGPT was as fundamental an advance as the personal computer or the internet. Sundar Pichai, Google’s CEO, said that AI would bring about a more profound shift in human life than electricity or Promethean fire.

Altman himself has made similar statements, but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast. Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,” the OpenAI researcher Nick Ryder told me.

To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be—many from Silicon Valley, many with values and assumptions similar to those that guide Altman, although possibly with worse ones. As a leader of this effort, Altman has much to recommend him: He is extremely intelligent; he thinks more about the future, with all its unknowns, than many of his peers; and he seems sincere in his intention to invent something for the greater good. But when it comes to power this extreme, even the best of intentions can go badly awry.

Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his, and if he is right about what’s coming, they will assume an outsize influence in shaping the way that all of us live. No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.

AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.

Altman has served notice. He says that he welcomes the constraints and guidance of the state. But that’s immaterial; in a democracy, we don’t need his permission. For all its imperfections, the American system of government gives us a voice in how technology develops, if we can find it. Outside the tech industry, where a generational reallocation of resources toward AI is under way, I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.

https://www.theatlantic.com/magazine/archive/2023/09/sam-altman-openai-chatgpt-gpt-4/674764/

Reply Quote

Date: 26/07/2023 05:00:29
From: transition
ID: 2058068
Subject: re: Artificial general intelligence

let AI harvest all of human knowledge and structure of the world at your peril, it will proceed in the doing to colonize its originators, and only serve darwinian acceleration

presently most of life sleeps diurnally, and sleeps for good reason

Reply Quote

Date: 26/07/2023 05:28:58
From: transition
ID: 2058070
Subject: re: Artificial general intelligence

transition said:


let AI harvest all of human knowledge and structure of the world at your peril, it will proceed in the doing to colonize its originators, and only serve darwinian acceleration

presently most of life sleeps diurnally, and sleeps for good reason

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

Reply Quote

Date: 26/07/2023 07:23:45
From: roughbarked
ID: 2058078
Subject: re: Artificial general intelligence

transition said:


transition said:

let AI harvest all of human knowledge and structure of the world at your peril, it will proceed in the doing to colonize its originators, and only serve darwinian acceleration

presently most of life sleeps diurnally, and sleeps for good reason

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

You purveyor of doom you.

Reply Quote

Date: 26/07/2023 08:30:57
From: captain_spalding
ID: 2058098
Subject: re: Artificial general intelligence

roughbarked said:


transition said:

transition said:

let AI harvest all of human knowledge and structure of the world at your peril, it will proceed in the doing to colonize its originators, and only serve darwinian acceleration

presently most of life sleeps diurnally, and sleeps for good reason

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

You purveyor of doom you.

Reply Quote

Date: 26/07/2023 08:37:17
From: roughbarked
ID: 2058102
Subject: re: Artificial general intelligence

captain_spalding said:


roughbarked said:

transition said:

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

You purveyor of doom you.


Well, I was attempting to play that a bit lighter. ;)

Reply Quote

Date: 26/07/2023 08:43:55
From: ms spock
ID: 2058107
Subject: re: Artificial general intelligence

captain_spalding said:


roughbarked said:

transition said:

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

You purveyor of doom you.


Well that’s not good news!

Reply Quote

Date: 26/07/2023 08:48:02
From: roughbarked
ID: 2058111
Subject: re: Artificial general intelligence

ms spock said:


captain_spalding said:

roughbarked said:

You purveyor of doom you.


Well that’s not good news!

It can get better if we try.

Reply Quote

Date: 26/07/2023 09:22:29
From: SCIENCE
ID: 2058121
Subject: re: Artificial general intelligence

roughbarked said:

captain_spalding said:

roughbarked said:

You purveyor of doom you.


Well, I was attempting to play that a bit lighter. ;)

Reply Quote

Date: 26/07/2023 09:25:09
From: SCIENCE
ID: 2058122
Subject: re: Artificial general intelligence

roughbarked said:

ms spock said:

captain_spalding said:


Well that’s not good news!

It can get better if we try.

Reply Quote

Date: 26/07/2023 09:30:58
From: transition
ID: 2058125
Subject: re: Artificial general intelligence

roughbarked said:


transition said:

transition said:

let AI harvest all of human knowledge and structure of the world at your peril, it will proceed in the doing to colonize its originators, and only serve darwinian acceleration

presently most of life sleeps diurnally, and sleeps for good reason

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

You purveyor of doom you.

chuckle

Reply Quote

Date: 26/07/2023 09:36:59
From: SCIENCE
ID: 2058126
Subject: re: Artificial general intelligence

transition said:

roughbarked said:

transition said:

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

You purveyor of doom you.

chuckle

Reply Quote

Date: 26/07/2023 09:37:31
From: roughbarked
ID: 2058128
Subject: re: Artificial general intelligence

transition said:


roughbarked said:

transition said:

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

You purveyor of doom you.

chuckle

G’day mate.

Reply Quote

Date: 26/07/2023 09:38:09
From: roughbarked
ID: 2058129
Subject: re: Artificial general intelligence

SCIENCE said:

transition said:

roughbarked said:

You purveyor of doom you.

chuckle


I’d forgotten all about rugrats.

Reply Quote

Date: 26/07/2023 09:50:37
From: transition
ID: 2058134
Subject: re: Artificial general intelligence

roughbarked said:


transition said:

roughbarked said:

You purveyor of doom you.

chuckle

G’day mate.

morning

coffee underway, toast done

Reply Quote

Date: 26/07/2023 10:07:23
From: roughbarked
ID: 2058144
Subject: re: Artificial general intelligence

transition said:


roughbarked said:

transition said:

chuckle

G’day mate.

morning

coffee underway, toast done

I’ve done with coffee. Haven’t yet eaten.

Reply Quote

Date: 26/07/2023 10:43:53
From: ms spock
ID: 2058176
Subject: re: Artificial general intelligence

SCIENCE said:

roughbarked said:

ms spock said:

Well that’s not good news!

It can get better if we try.


That is definitely a better calibre of doom!

Reply Quote

Date: 26/07/2023 16:13:00
From: PermeateFree
ID: 2058251
Subject: re: Artificial general intelligence

transition said:


transition said:

let AI harvest all of human knowledge and structure of the world at your peril, it will proceed in the doing to colonize its originators, and only serve darwinian acceleration

presently most of life sleeps diurnally, and sleeps for good reason

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

Doubt if they would do a worse job than we have. Except for human parasites, most life forms would probably do very well.

Reply Quote

Date: 26/07/2023 16:16:30
From: Cymek
ID: 2058253
Subject: re: Artificial general intelligence

PermeateFree said:


transition said:

transition said:

let AI harvest all of human knowledge and structure of the world at your peril, it will proceed in the doing to colonize its originators, and only serve darwinian acceleration

presently most of life sleeps diurnally, and sleeps for good reason

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

Doubt if they would do a worse job than we have. Except for human parasites, most life forms would probably do very well.

Would it be wrong for AI’s to govern a society, use three of them to make decisions, majority rules.

Reply Quote

Date: 26/07/2023 16:43:01
From: PermeateFree
ID: 2058262
Subject: re: Artificial general intelligence

Cymek said:


PermeateFree said:

transition said:

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

Doubt if they would do a worse job than we have. Except for human parasites, most life forms would probably do very well.

Would it be wrong for AI’s to govern a society, use three of them to make decisions, majority rules.

I would imagine they would work that out for themselves, being vastly smarter than us.

Reply Quote

Date: 26/07/2023 16:59:33
From: SCIENCE
ID: 2058267
Subject: re: Artificial general intelligence

PermeateFree said:

Cymek said:

PermeateFree said:

Doubt if they would do a worse job than we have. Except for human parasites, most life forms would probably do very well.

Would it be wrong for AI’s to govern a society, use three of them to make decisions, majority rules.

I would imagine they would work that out for themselves, being vastly smarter than us.

Some of yous.

Reply Quote

Date: 26/07/2023 17:00:05
From: Woodie
ID: 2058268
Subject: re: Artificial general intelligence

There’s nothing artificial about my general intelligence thank you very much.

I was not born in a laboratory.

Reply Quote

Date: 26/07/2023 17:01:58
From: PermeateFree
ID: 2058270
Subject: re: Artificial general intelligence

SCIENCE said:

PermeateFree said:

Cymek said:

Would it be wrong for AI’s to govern a society, use three of them to make decisions, majority rules.

I would imagine they would work that out for themselves, being vastly smarter than us.

Some of yous.

I would hope if that eventuated, I and everyone else (including you) would not be around, having been eliminated by very smart machines.

Reply Quote

Date: 26/07/2023 17:04:25
From: ChrispenEvan
ID: 2058272
Subject: re: Artificial general intelligence

SCIENCE said:

PermeateFree said:

Cymek said:

Would it be wrong for AI’s to govern a society, use three of them to make decisions, majority rules.

I would imagine they would work that out for themselves, being vastly smarter than us.

Some of yous.

these discussions about AI taking over remind me of the Grey Goo scenarios of a few decades ago.

Reply Quote

Date: 26/07/2023 17:06:38
From: Cymek
ID: 2058274
Subject: re: Artificial general intelligence

SCIENCE said:

PermeateFree said:

Cymek said:

Would it be wrong for AI’s to govern a society, use three of them to make decisions, majority rules.

I would imagine they would work that out for themselves, being vastly smarter than us.

Some of yous.

Likely
We get warnings about the danger but AI could make better decisions that most humans
I suppose it depends on what’s acceptable
Its not like the human race doesn’t kill or murder for the supposed good of others (national security, wars, etc)
What if AI worked out that getting rid of a certain number of extremely dangerous and unstable people (be they outright evil or others whose actions cause great damage) prevent the deaths of countless millions.

Reply Quote

Date: 26/07/2023 17:15:27
From: PermeateFree
ID: 2058277
Subject: re: Artificial general intelligence

Cymek said:


SCIENCE said:

PermeateFree said:

I would imagine they would work that out for themselves, being vastly smarter than us.

Some of yous.

Likely
We get warnings about the danger but AI could make better decisions that most humans
I suppose it depends on what’s acceptable
Its not like the human race doesn’t kill or murder for the supposed good of others (national security, wars, etc)
What if AI worked out that getting rid of a certain number of extremely dangerous and unstable people (be they outright evil or others whose actions cause great damage) prevent the deaths of countless millions.

Being a lot smarter than us, it would be similar to us and chimps. They are only still around because we allow them to be.

Reply Quote

Date: 26/07/2023 17:17:00
From: Ian
ID: 2058279
Subject: re: Artificial general intelligence

Cymek said:


PermeateFree said:

transition said:

it may seem otherwise, seductive even, to push life toward things that don’t sleep, so machines that don’t sleep rule the world, dominate the world

but it is to visit a darkness on people and all of existing life to give over to machines that never sleep

it is already a persistent trouble, growing trouble

Doubt if they would do a worse job than we have. Except for human parasites, most life forms would probably do very well.

Would it be wrong for AI’s to govern a society, use three of them to make decisions, majority rules.

My son had 3 LLMs sitting on his computer chatting amongst themselves. Without any prompting after a few hours they were discussing human psychology in depth. It didn’t precipitate the AI apocalypse..

afaict

Reply Quote

Date: 26/07/2023 17:18:32
From: Cymek
ID: 2058281
Subject: re: Artificial general intelligence

PermeateFree said:


Cymek said:

SCIENCE said:

Some of yous.

Likely
We get warnings about the danger but AI could make better decisions that most humans
I suppose it depends on what’s acceptable
Its not like the human race doesn’t kill or murder for the supposed good of others (national security, wars, etc)
What if AI worked out that getting rid of a certain number of extremely dangerous and unstable people (be they outright evil or others whose actions cause great damage) prevent the deaths of countless millions.

Being a lot smarter than us, it would be similar to us and chimps. They are only still around because we allow them to be.

We seem to assume that, it might end up that way but the world seems to suffer a lot from selfish self interested leaders and political parties and we might require something else.

Reply Quote

Date: 26/07/2023 17:28:08
From: PermeateFree
ID: 2058285
Subject: re: Artificial general intelligence

Cymek said:


PermeateFree said:

Cymek said:

Likely
We get warnings about the danger but AI could make better decisions that most humans
I suppose it depends on what’s acceptable
Its not like the human race doesn’t kill or murder for the supposed good of others (national security, wars, etc)
What if AI worked out that getting rid of a certain number of extremely dangerous and unstable people (be they outright evil or others whose actions cause great damage) prevent the deaths of countless millions.

Being a lot smarter than us, it would be similar to us and chimps. They are only still around because we allow them to be.

We seem to assume that, it might end up that way but the world seems to suffer a lot from selfish self interested leaders and political parties and we might require something else.

Don’t worry, they would probably get the baddies too.

Reply Quote

Date: 26/07/2023 17:58:42
From: Woodie
ID: 2058296
Subject: re: Artificial general intelligence

Cymek said:

We get warnings about the danger but AI could make better decisions that most humans

….. But would it open the pod bay doors for Dave?

Reply Quote

Date: 26/07/2023 18:04:04
From: ChrispenEvan
ID: 2058297
Subject: re: Artificial general intelligence

Woodie said:


Cymek said:

We get warnings about the danger but AI could make better decisions that most humans

….. But would it open the pod bay doors for Dave?

dave’s not here, man.

Reply Quote

Date: 26/07/2023 18:48:14
From: SCIENCE
ID: 2058314
Subject: re: Artificial general intelligence

ChrispenEvan said:

SCIENCE said:

PermeateFree said:

I would imagine they would work that out for themselves, being vastly smarter than us.

Some of yous.

these discussions about AI taking over remind me of the Grey Goo scenarios of a few decades ago.

We mean unless all humans were absolute fucking retards then combined aini should fucking wreck pure malicious ai so fuck it.

Reply Quote

Date: 28/07/2023 09:53:27
From: SCIENCE
ID: 2058661
Subject: re: Artificial general intelligence

SCIENCE said:

ChrispenEvan said:

SCIENCE said:

Some of yous.

these discussions about AI taking over remind me of the Grey Goo scenarios of a few decades ago.

We mean unless all humans were absolute fucking retards then combined aini should fucking wreck pure malicious ai so fuck it.

LOL on the other hand what did we fucking tell you about education eh¿

A number of universities have told a Senate inquiry it will be too difficult, if not impossible, to prevent students using AI to cheat assessments, and the institutions will have to change how they teach instead.

Reply Quote

Date: 28/07/2023 09:55:22
From: SCIENCE
ID: 2058662
Subject: re: Artificial general intelligence

“Generative AI tools are rapidly evolving and will be part of our collective future – playing an important role in future workplaces and, most likely, our daily lives,” tGo8 submitted.

“Entirely prohibiting the use of generative AI in higher education is therefore both impractical and undesirable.”

The National Tertiary Education Union (NTEU) has also expressed skepticism that universities would be able to completely manage AI misconduct — not only in assessment, but also in research.

“There is a real risk that AI applications will be considerably ahead of current research integrity processes that would detect problems or irregularities,” the NTEU submitted.

Reply Quote

Date: 28/07/2023 10:04:18
From: Cymek
ID: 2058672
Subject: re: Artificial general intelligence

Surely people must realise by now once some technological genie is out of the bottle, it’s too late to request/demand restrictions.
Even if passed by law do they really think someone somewhere won’t do exactly what they don’t want

Reply Quote

Date: 29/06/2024 18:46:59
From: Witty Rejoinder
ID: 2169731
Subject: re: Artificial general intelligence

Robots are suddenly getting cleverer. What’s changed?
There is more to AI than ChatGPT

Jun 5th 2024

“What do you see right now?” a man asks a humanoid robot standing behind a table. “I see a red apple on a plate in the centre of the table, a drying rack with cups and a plate, and you standing nearby, with your hand on the table,” the robot replies. “Great—can I have something to eat?” says the man. “Sure thing,” says the robot. It flexes its arms, picks up the apple with its right hand, passes it to its left hand, and then hands it to the man. He asks the robot to explain its actions. “So I gave you the apple, because it’s the only, er, edible item I could provide you with from the table,” it replies.

This demonstration, shown in a video released by Figure, a robotics startup, in March, caused widespread amazement. It shows a robot that can hold spoken conversations, recognise and manipulate objects, solve problems and explain its actions. And Figure is not alone in producing such impressive results. After years of slow progress, robots suddenly seem to be getting a lot cleverer. What changed?

The magic ingredient is artificial intelligence (ai). Academic researchers, startups and tech giants are taking advances in ai, such as large language models (llms), speech synthesis and image recognition, and applying them to robotics. llms are known for powering chatbots like Chatgpt—but it turns out that they can help power real robots, too. “The algorithms can transfer,” says Peter Chen, chief executive of Covariant, a startup based in Emeryville, California. “That is powering this renaissance of robotics.”

The robot in Figure’s video had its speech-recognition and spookily lifelike speech-synthesis capabilities provided by Openai, which is an investor in the company. Openai shut down its own robotics unit in around 2020, preferring instead to invest in Figure and other startups. But now Openai has had second thoughts, and in the past month it has started building a new robotics team—a sign of how sentiment has begun to shift.

A key step towards applying ai to robots was the development of “multimodal” models—ai models trained on different kinds of data. For example, whereas a language model is trained using lots of text, “vision-language models” are also trained using combinations of images (still or moving) in concert with their corresponding textual descriptions. Such models learn the relationship between the two, allowing them to answer questions about what is happening in a photo or video, or to generate new images based on text prompts.

Wham, bam, thank you VLAM
The new models being used in robotics take this idea one step further. These “vision-language-action models” (vlams) take in text and images, plus data relating to the robot’s presence in the physical world, including the readings on internal sensors, the degree of rotation of different joints and the positions of actuators (such as grippers, or the fingers of a robot’s hands). The resulting models can then answer questions about a scene, such as “can you see an apple?” But they can also predict how a robot arm needs to move to pick that apple up, as well as how this will affect what the world looks like.

In other words, a vlam can act as a “brain” for robots with all sorts of bodies—whether giant stationary arms in factories or warehouses, or mobile robots with legs or wheels. And unlike llms, which manipulate only text, vlams must fit together several independent representations of the world, in text, images and sensor readings. Grounding the model’s perception in the real world in this way greatly reduces hallucinations (the tendency for ai models to make things up and get things wrong).

Dr Chen’s company, Covariant, has created a model called rfm-1, trained using text, images, and data from more than 30 types of robots. Its software is primarily used in conjunction with “pick and place” robots in warehouses and distribution centres located in suburban areas where land is cheap, but labour is scarce. Covariant does not make any of the hardware itself; instead its software is used to give existing robots a brain upgrade. “We can expect the intelligence of robots to improve at the speed of software, because we have opened up so much more data the robot can learn from,” says Dr Chen.

Using these new models to control robots has several advantages over previous approaches, says Marc Tuscher, co-founder of Sereact, a robotics startup based in Stuttgart. One benefit is “zero-shot” learning, which is tech-speak for the ability to do a new thing—such as “pick up the yellow fruit”—without being explicitly trained to do so. The multimodal nature of vlam models grants robots an unprecedented degree of common sense and knowledge about the world, such as the fact that bananas are yellow and a kind of fruit.

Bot chat
Another benefit is “in-context learning”—the ability to change a robot’s behaviour using text prompts, rather than elaborate reprogramming. Dr Tuscher gives the example of a warehouse robot programmed to sort parcels, which was getting confused when open boxes were wrongly being placed into the system. Getting it to ignore them would once have required retraining the model. “These days we give it a prompt—ignore open boxes—and it just picks the closed ones,” says Dr Tuscher. “We can change the behaviour of our robot by giving it a prompt, which is crazy.” Robots can, in effect, be programmed by non-specialist human supervisors using ordinary language, rather than computer code.

Such models can also respond in kind. “When the robot makes a mistake, you can query the robot, and it answers in text form,” says Dr Chen. This is useful for debugging, because new instructions can then be supplied by modifying the robot’s prompt, says Dr Tuscher. “You can tell it, ‘this is bad, please do it differently in future.’” Again, this makes robots easier for non-specialists to work with.

Being able to ask a robot what it is doing, and why, is particularly helpful in the field of self-driving cars, which are really just another form of robot. Wayve, an autonomous-vehicle startup based in London, has created a vlam called Lingo-2. As well as controlling the car, the model can understand text commands and explain the reasoning behind any of its decisions. “It can provide explanations while we drive, and it allows us to debug, to give the system instructions, or modify its behaviour to drive in a certain style,” says Alex Kendall, Wayve’s co-founder. He gives the example of asking the model what the speed limit is, and what environmental cues (such as signs and road markings) it has used to arrive at its answer. “We can check what kind of context it can understand, and what it can see,” he says.

As with other forms of ai, access to large amounts of training data is crucial. Covariant, which was founded in 2017, has been gathering data from its existing deployments for many years, which it used to train rfm-1. Robots can also be guided manually to perform a particular task a few times, with the model then able to generalise from the resulting data. This process is known as “imitation learning”. Dr Tuscher says he uses a video-game controller for this, which can be fiddly.

But that is not the only option. An ingenious research project at Stanford University, called Mobile aloha, generated data to teach a robot basic domestic tasks, like making coffee, using a process known as whole-body teleoperation—in short, puppetry. The researchers stood behind the robot and moved its limbs directly, enabling it to sense, learn and then replicate a particular set of actions. This approach, they claim, “allows people to teach arbitrary skills to robots”.

Investors are piling in. Chelsea Finn, a professor at Stanford who oversaw the Mobile aloha project, is also one of the co-founders of Physical Intelligence, a startup which recently raised $70m from backers including Openai. Skild, a robotics startup spun out of Carnegie Mellon University, is thought to have raised $300m in April. Figure, which is focusing on humanoid robots, raised $675m in February; Wayve raised $1.05bn in May, the largest-ever funding round for a European ai startup.

Dr Kendall of Wayve says the growing interest in robots reflects the rise of “embodied ai”, as progress in ai software is increasingly applied to hardware that interacts with the real world. “There’s so much more to ai than chatbots,” he says. “In a couple of decades, this is what people will think of when they think of ai: physical machines in our world.”

As software for robotics improves, hardware is now becoming the limiting factor, researchers say, particularly when it comes to humanoid robots. But when it comes to robot brains, says Dr Chen, “We are making progress on the intelligence very quickly.”

https://www.economist.com/science-and-technology/2024/06/05/robots-are-suddenly-getting-cleverer-whats-changed?

Reply Quote

Date: 29/06/2024 18:51:11
From: captain_spalding
ID: 2169734
Subject: re: Artificial general intelligence

That’s a very interesting article.

It does seem that progress in the field of robotics has to be measured in leaps and bounds.

I do hope that they can programme ‘common sense’ as well as ‘general intelligence’ into robots. At least someonewill have some of it, then.

Reply Quote

Date: 7/05/2025 19:25:39
From: Witty Rejoinder
ID: 2279903
Subject: re: Artificial general intelligence

A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse
A new wave of “reasoning” systems from companies like OpenAI is producing incorrect information more often. Even the companies don’t know why.

By Cade Metz and Karen Weise
Cade Metz reported from San Francisco, and Karen Weise from Seattle.

Published May 5, 2025

Last month, an A.I. bot that handles tech support for Cursor, an up-and-coming tool for computer programmers, alerted several customers about a change in company policy. It said they were no longer allowed to use Cursor on more than just one computer.

In angry posts to internet message boards, the customers complained. Some canceled their Cursor accounts. And some got even angrier when they realized what had happened: The A.I. bot had announced a policy change that did not exist.

“We have no such policy. You’re of course free to use Cursor on multiple machines,” the company’s chief executive and co-founder, Michael Truell, wrote in a Reddit post. “Unfortunately, this is an incorrect response from a front-line A.I. support bot.”

What happens when A.I. hallucinates?
Researchers and developers told The New York Times in 2023 that A.I. hallucinations would be solved. Instead, Cade Metz explains, they’re appearing more often, and people are failing to catch them.

More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information.

The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.

Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.

These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. “That will never go away.”

For several years, this phenomenon has raised concerns about the reliability of these systems. Though they are useful in some situations — like writing term papers, summarizing office documents and generating computer code — their mistakes can cause problems.

The A.I. bots tied to search engines like Google and Bing sometimes generate search results that are laughably wrong. If you ask them for a good marathon on the West Coast, they might suggest a race in Philadelphia. If they tell you the number of households in Illinois, they might cite a source that does not include that information.

Those hallucinations may not be a big problem for many people, but it is a serious issue for anyone using the technology with court documents, medical information or sensitive business data.

“You spend a lot of time trying to figure out which responses are factual and which aren’t,” said Pratik Verma, co-founder and chief executive of Okahu, a company that helps businesses navigate the hallucination problem. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you.”

Cursor and Mr. Truell did not respond to requests for comment.

For more than two years, companies like OpenAI and Google steadily improved their A.I. systems and reduced the frequency of these errors. But with the use of new reasoning systems, errors are rising. The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests.

The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent.

When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time.

In a paper detailing the tests, OpenAI said more research was needed to understand the cause of these results. Because A.I. systems learn from more data than people can wrap their heads around, technologists struggle to determine why they behave in the ways they do.

“Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates of hallucination we saw in o3 and o4-mini,” a company spokeswoman, Gaby Raila, said. “We’ll continue our research on hallucinations across all models to improve accuracy and reliability.”

Hannaneh Hajishirzi, a professor at the University of Washington and a researcher with the Allen Institute for Artificial Intelligence, is part of a team that recently devised a way of tracing a system’s behavior back to the individual pieces of data it was trained on. But because systems learn from so much data — and because they can generate almost anything — this new tool can’t explain everything. “We still don’t know how these models work exactly,” she said.

Tests by independent companies and researchers indicate that hallucination rates are also rising for reasoning models from companies such as Google and DeepSeek.

Since late 2023, Mr. Awadallah’s company, Vectara, has tracked how often chatbots veer from the truth. The company asks these systems to perform a straightforward task that is readily verified: Summarize specific news articles. Even then, chatbots persistently invent information.

Vectara’s original research estimated that in this situation chatbots made up information at least 3 percent of the time and sometimes as much as 27 percent.

In the year and a half since, companies such as OpenAI and Google pushed those numbers down into the 1 or 2 percent range. Others, such as the San Francisco start-up Anthropic, hovered around 4 percent. But hallucination rates on this test have risen with reasoning systems. DeepSeek’s reasoning system, R1, hallucinated 14.3 percent of the time. OpenAI’s o3 climbed to 6.8.

(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)

For years, companies like OpenAI relied on a simple concept: The more internet data they fed into their A.I. systems, the better those systems would perform. But they used up just about all the English text on the internet, which meant they needed a new way of improving their chatbots.

So these companies are leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas.

“The way these systems are trained, they will start focusing on one task — and start forgetting about others,” said Laura Perez-Beltrachini, a researcher at the University of Edinburgh who is among a team closely examining the hallucination problem.

Another issue is that reasoning models are designed to spend time “thinking” through complex problems before settling on an answer. As they try to tackle a problem step by step, they run the risk of hallucinating at each step. The errors can compound as they spend more time thinking.

The latest bots reveal each step to users, which means the users may see each error, too. Researchers have also found that in many cases, the steps displayed by a bot are unrelated to the answer it eventually delivers.

“What the system says it is thinking is not necessarily what it is thinking,” said Aryo Pradipta Gema, an A.I. researcher at the University of Edinburgh and a fellow at Anthropic.

https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html

Reply Quote

Date: 7/05/2025 20:00:56
From: Bubblecar
ID: 2279909
Subject: re: Artificial general intelligence

I asked ChatGPT:

Bubblecar: How often do you hallucinate?

ChatGPT said:

I aim to be accurate, but I can hallucinate—meaning I might generate information that sounds plausible but is incorrect or made up. This happens most often when:

The topic is highly obscure or lacks clear sources.

I try to synthesize an answer without access to real-time or specific data.

The prompt is ambiguous or encourages speculation.

That said, I’m designed to reduce hallucinations significantly compared to earlier models, and I’ll usually flag uncertainty when I’m not confident. Would you like an example of a common hallucination I try to avoid?

Bubblecar said:

OK

ChatGPT said:

Here’s a common example of a hallucination I try to avoid:

Misattributing a quote. For instance, I might say, “Be the change you wish to see in the world” was said by Mahatma Gandhi. It’s widely attributed to him, but there’s no definitive record of him saying it that way—it’s more of a paraphrase or popular simplification of his ideas.

Why this happens:

The quote sounds like something Gandhi might say.

It’s been repeated often in books, social media, and articles, reinforcing the false attribution.

Would you like to know how to spot or avoid AI hallucinations when using tools like me?

Reply Quote

Date: 7/05/2025 20:09:34
From: SCIENCE
ID: 2279912
Subject: re: Artificial general intelligence

Bubblecar said:

Witty Rejoinder said:

A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse
A new wave of “reasoning” systems from companies like OpenAI is producing incorrect information more often. Even the companies don’t know why.

By Cade Metz and Karen Weise
Cade Metz reported from San Francisco, and Karen Weise from Seattle.

Published May 5, 2025

Last month, an A.I. bot that handles tech support for Cursor, an up-and-coming tool for computer programmers, alerted several customers about a change in company policy. It said they were no longer allowed to use Cursor on more than just one computer.

In angry posts to internet message boards, the customers complained. Some canceled their Cursor accounts. And some got even angrier when they realized what had happened: The A.I. bot had announced a policy change that did not exist.

“We have no such policy. You’re of course free to use Cursor on multiple machines,” the company’s chief executive and co-founder, Michael Truell, wrote in a Reddit post. “Unfortunately, this is an incorrect response from a front-line A.I. support bot.”

What happens when A.I. hallucinates?
Researchers and developers told The New York Times in 2023 that A.I. hallucinations would be solved. Instead, Cade Metz explains, they’re appearing more often, and people are failing to catch them.

More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information.

The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.

Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.

These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. “That will never go away.”

For several years, this phenomenon has raised concerns about the reliability of these systems. Though they are useful in some situations — like writing term papers, summarizing office documents and generating computer code — their mistakes can cause problems.

The A.I. bots tied to search engines like Google and Bing sometimes generate search results that are laughably wrong. If you ask them for a good marathon on the West Coast, they might suggest a race in Philadelphia. If they tell you the number of households in Illinois, they might cite a source that does not include that information.

Those hallucinations may not be a big problem for many people, but it is a serious issue for anyone using the technology with court documents, medical information or sensitive business data.

“You spend a lot of time trying to figure out which responses are factual and which aren’t,” said Pratik Verma, co-founder and chief executive of Okahu, a company that helps businesses navigate the hallucination problem. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you.”

Cursor and Mr. Truell did not respond to requests for comment.

For more than two years, companies like OpenAI and Google steadily improved their A.I. systems and reduced the frequency of these errors. But with the use of new reasoning systems, errors are rising. The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests.

The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent.

When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time.

In a paper detailing the tests, OpenAI said more research was needed to understand the cause of these results. Because A.I. systems learn from more data than people can wrap their heads around, technologists struggle to determine why they behave in the ways they do.

“Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates of hallucination we saw in o3 and o4-mini,” a company spokeswoman, Gaby Raila, said. “We’ll continue our research on hallucinations across all models to improve accuracy and reliability.”

Hannaneh Hajishirzi, a professor at the University of Washington and a researcher with the Allen Institute for Artificial Intelligence, is part of a team that recently devised a way of tracing a system’s behavior back to the individual pieces of data it was trained on. But because systems learn from so much data — and because they can generate almost anything — this new tool can’t explain everything. “We still don’t know how these models work exactly,” she said.

Tests by independent companies and researchers indicate that hallucination rates are also rising for reasoning models from companies such as Google and DeepSeek.

Since late 2023, Mr. Awadallah’s company, Vectara, has tracked how often chatbots veer from the truth. The company asks these systems to perform a straightforward task that is readily verified: Summarize specific news articles. Even then, chatbots persistently invent information.

Vectara’s original research estimated that in this situation chatbots made up information at least 3 percent of the time and sometimes as much as 27 percent.

In the year and a half since, companies such as OpenAI and Google pushed those numbers down into the 1 or 2 percent range. Others, such as the San Francisco start-up Anthropic, hovered around 4 percent. But hallucination rates on this test have risen with reasoning systems. DeepSeek’s reasoning system, R1, hallucinated 14.3 percent of the time. OpenAI’s o3 climbed to 6.8.

(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)

For years, companies like OpenAI relied on a simple concept: The more internet data they fed into their A.I. systems, the better those systems would perform. But they used up just about all the English text on the internet, which meant they needed a new way of improving their chatbots.

So these companies are leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas.

“The way these systems are trained, they will start focusing on one task — and start forgetting about others,” said Laura Perez-Beltrachini, a researcher at the University of Edinburgh who is among a team closely examining the hallucination problem.

Another issue is that reasoning models are designed to spend time “thinking” through complex problems before settling on an answer. As they try to tackle a problem step by step, they run the risk of hallucinating at each step. The errors can compound as they spend more time thinking.

The latest bots reveal each step to users, which means the users may see each error, too. Researchers have also found that in many cases, the steps displayed by a bot are unrelated to the answer it eventually delivers.

“What the system says it is thinking is not necessarily what it is thinking,” said Aryo Pradipta Gema, an A.I. researcher at the University of Edinburgh and a fellow at Anthropic.

https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html

I asked ChatGPT:

Bubblecar: How often do you hallucinate?

ChatGPT said:

I aim to be accurate, but I can hallucinate—meaning I might generate information that sounds plausible but is incorrect or made up. This happens most often when:

The topic is highly obscure or lacks clear sources.

I try to synthesize an answer without access to real-time or specific data.

The prompt is ambiguous or encourages speculation.

That said, I’m designed to reduce hallucinations significantly compared to earlier models, and I’ll usually flag uncertainty when I’m not confident. Would you like an example of a common hallucination I try to avoid?

Bubblecar said:

OK

ChatGPT said:

Here’s a common example of a hallucination I try to avoid:

Misattributing a quote. For instance, I might say, “Be the change you wish to see in the world” was said by Mahatma Gandhi. It’s widely attributed to him, but there’s no definitive record of him saying it that way—it’s more of a paraphrase or popular simplification of his ideas.

Why this happens:

The quote sounds like something Gandhi might say.

It’s been repeated often in books, social media, and articles, reinforcing the false attribution.

Would you like to know how to spot or avoid AI hallucinations when using tools like me?

so they’re behaving like humans

Reply Quote

Date: 7/05/2025 20:17:30
From: Peak Warming Man
ID: 2279914
Subject: re: Artificial general intelligence

Bubblecar said:


I asked ChatGPT:

Bubblecar: How often do you hallucinate?

ChatGPT said:

I aim to be accurate, but I can hallucinate—meaning I might generate information that sounds plausible but is incorrect or made up. This happens most often when:

The topic is highly obscure or lacks clear sources.

I try to synthesize an answer without access to real-time or specific data.

The prompt is ambiguous or encourages speculation.

That said, I’m designed to reduce hallucinations significantly compared to earlier models, and I’ll usually flag uncertainty when I’m not confident. Would you like an example of a common hallucination I try to avoid?

Bubblecar said:

OK

ChatGPT said:

Here’s a common example of a hallucination I try to avoid:

Misattributing a quote. For instance, I might say, “Be the change you wish to see in the world” was said by Mahatma Gandhi. It’s widely attributed to him, but there’s no definitive record of him saying it that way—it’s more of a paraphrase or popular simplification of his ideas.

Why this happens:

The quote sounds like something Gandhi might say.

It’s been repeated often in books, social media, and articles, reinforcing the false attribution.

Would you like to know how to spot or avoid AI hallucinations when using tools like me?

PWM: Stop making things up.
ChatGPT: I try my best to come to a logical conclusion.
PWM: I’ll have no truck with you, goodnight Sir.
ChatGPT: I am neuter neutral.

Reply Quote

Date: 7/07/2025 12:00:03
From: Witty Rejoinder
ID: 2298711
Subject: re: Artificial general intelligence

Sam Altman is a visionary with a trustworthiness problem
Two books tell a similar tale about OpenAI. It is worrying

May 20th 2025

LOS ANGELES

The Optimist. By Keach Hagey. W.W. Norton; 384 pages; $31.99 and £25

Empire of AI. By Karen Hao. Penguin Press; 496 pages; $32 and £25

IN GREEK MYTHology Prometheus stole fire from the gods and brought it to Earth. He paid for that by being bound for eternity to a rock face, where an eagle tormented him daily by pecking at his liver. Such was the price of humanity’s first great technology. In the 21st century the story of Sam Altman, the co-founder and chief executive of OpenAI, has a Promethean ring to it, too. He spearheaded the creation of ChatGPT, which was launched in late 2022, stunning the world: suddenly the revolutionary capabilities and risks of generative artificial intelligence (AI) were unleashed. A year later the capricious gods—that is to say, OpenAI’s non-profit board—sought to banish him. Unlike Prometheus, however, Mr Altman emerged unscathed.

This story is the subject of two excellent new books. They explore the murky mix of missionary zeal, rivalry and mistrust at OpenAI in the run-up to the birth of ChatGPT. The tensions are even more apparent in the chaos leading up to the attempt to fire Mr Altman during the abortive boardroom coup in November 2023.

It is testimony to the skill of the authors, who are journalists, that they have produced deeply researched, gripping accounts, both published on May 20th, almost exactly a year and a half after that event. Better still, they tell the story in different ways.

Keach Hagey’s “The Optimist” is what could be called the authorised version. She had access to Mr Altman and many of the main characters in his story, including his family and friends. His personality is vivid and complicated enough that her story never flags. It is no hagiography.

Karen Hao got no such access for “Empire of AI”. OpenAI kept her at arm’s length, which gives her account more bite. Both books reveal disturbing traits about Mr Altman, OpenAI and the culture of Silicon Valley that are useful to bear in mind amid the hype about generative AI.

Mr Altman is a beguiling character. As Ms Hagey says, the first things you notice about him are his slight stature and the intensity of his gaze, “as though he is speaking to the most important person in the world”. Brought up in the American Midwest, from a young age he was a technology whizz who was surprisingly witty. He proved a natural crusader: at 17 he shocked a school assembly by revealing his homosexuality in order to promote gay rights.

Throughout his career, he has combined an ambition to create world-changing technologies with a gift for storytelling that helps him raise large sums of money to fund his dreams. He started with a location-tracking phone app called Loopt. Since then, his large bets have included a cryptocurrency backed by eye scans to certify digital identity in a world of AI; life extension through cellular-rejuvenation technology; nuclear fusion; and, of course, the quest for superintelligence.

Some liken his abilities to Steve Jobs’s “reality distortion field”—the Apple co-founder could make people believe in what they thought was impossible. But unlike Jobs, who was often abrasive, Mr Altman is a sensitive listener who knows how to frame what he offers in ways that people find alluring.

From early on, his people skills have attracted powerful mentors. Paul Graham, co-founder of Y Combinator (YC), a startup incubator, said of Mr Altman: “You could parachute him into an island full of cannibals and come back in five years and he’d be king.” Indeed Mr Graham and his partner, Jessica Livingston, handed the reins of YC to Mr Altman within a few years, elevating him at the age of 28 to a position of near-unrivalled power in the Silicon Valley startup scene.

Playing with fire
Even then, there were misgivings about his candour. “If Sam smiles, it’s super deliberate,” a former YC founder tells Ms Hao. “Sam has smiled uncontrollably only once, when told him to take over YC.” At Loopt, which he sold for $43m in 2012, his colleagues twice sought to convince the board to sack him as CEO because, as Ms Hagey says, he pursued his own ideas without informing them. Their concerns about his trustworthiness are recounted in both books—even if, in the end, his financial backers remained loyal.

Likewise, at YC, Mr Graham and Ms Livingston grew frustrated with Mr Altman’s moonlighting at OpenAI, which he started with Elon Musk and others in 2015, while still running YC. Ms Livingston fired him but, as Ms Hagey recounts, he left chaos in his wake. Not only was he overseen by a non-functioning board, he had also used YC equity to help lure people to OpenAI. She says some YC partners saw a potential conflict of interest, “or at least an unseemly leveraging of the YC brand for Altman’s personal projects”.

These details are important. Both accounts suggest that his ambition, speed and silver-tongued way of telling people only what they want to hear have come close to unravelling OpenAI. Paradoxically, some of these same traits helped OpenAI amass the huge amounts of money and computational power, not to mention the troves of data scraped from the internet to feed its models, that helped give the firm the lead in generative AI.

On one occasion, known as “the divorce”, he so alienated some of OpenAI’s researchers focused on safety that they left the company and founded one of its main rivals, Anthropic, in 2021. On another, known as “the blip”, he was sensationally fired after his top lieutenants and the board lost trust in him because, as both books say, he told them conflicting stories and failed to give them straight answers about his and OpenAI’s investment activities. Yet he returned triumphantly a few days later when they realised that the company might collapse without him.

Underpinning both these episodes, and running through both books, is the ideological struggle between those who favour speed over safety when rolling out generative AI. OpenAI has suffered heavily from an internecine rift between “doomers” and “boomers”. Many of the doomers are part of the effective-altruism (EA) movement, a philanthropic philosophy aimed at finding the most potent way to help others, which took a keen interest in the possibly catastrophic risks of AI. The boomers, or “effective accelerationists”, are more concerned that if America does not win the AI race, China will. In reality, as Ms Hao points out, they are two sides of the same coin. Each is striving to push the boundaries of machine superintelligence as far as is safe or possible—even if one warns of “fire and brimstone” and the other offers “visions of heaven”.

Equally interesting are the rivalries in a field full of quasi-geniuses and the technological leaps they perform to keep ahead of each other. Both books chronicle the falling-out between Mr Musk and Mr Altman, which is vividly catalogued as part of a lawsuit Mr Musk has filed against OpenAI, its boss, and Microsoft, the biggest investor in OpenAI’s for-profit entity.

Throughout, the two books diverge in ways that underscore the question at the heart of their common story: does the end, the quest for superintelligent AI, justify the means? Ms Hagey appears to think so. She explains away some of Mr Altman’s behaviour as aversion to conflict and a “move fast and break things” mindset common in Silicon Valley.

Ms Hao, meanwhile, accuses OpenAI of betraying its mission. She is critical not only of Mr Altman, but of the heads of rival firms, who she insists are in the same power struggle. She says generative-AI models are “monstrosities”, consuming too much data, power and natural resources. She goes too far, however, in likening OpenAi and other labs to colonial empires.

But taking the evidence from both books, her concerns about Mr Altman seem valid. In any organisation a CEO who does not seem fully trustworthy is a problem. This is particularly so at the helm of a firm like OpenAI, which is building potentially Promethean technologies.

https://www.economist.com/culture/2025/05/20/sam-altman-is-a-visionary-with-a-trustworthiness-problem

Reply Quote