'Superhuman AI will kill us all — we must stop developing it'

DAILY STAR

An artificial intelligence expert has issued a stark warning that all of humanity will die at the hands of a “hostile superhuman” AI.

Eliezer Yudkowsky, an American decision theory and artificial intelligence theorist and writer, made the comments in a piece for Time, responding to an open letter that called for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”.

GPT-4, or Generative Pre-trained Transformer 4, is a multimodal large language model created by the artificial intelligence research laboratory OpenAI – and the fourth in its GPT series.

“I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it,” Yudkowsky wrote, arguing that pausing AI development is not enough and that only halting it entirely will do.

It comes after the Future of Life Institute published an open letter about the technology, signed by Elon Musk and hundreds of other prominent AI researchers and commentators.
