AI FEARS

I’m obviously just one of many people who are fascinated by, in love with, and scared of AI, on so many levels. The more you learn about it, the more you realize that nobody can possibly foresee the impact it will have on the world, how it will change our lives, or even how fast it will do so.

A group of pretty smart people recently co-signed an open letter that called for pausing the development of AI. Among the people who signed the letter were:

  • Yoshua Bengio, Founder and Scientific Director at Mila, Turing Prize winner and professor at University of Montreal
  • Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”
  • Elon Musk, CEO of SpaceX, Tesla & Twitter
  • Steve Wozniak, Co-founder, Apple
  • Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem
  • Emad Mostaque, CEO, Stability AI
  • Andrew Yang, Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship
  • John J Hopfield, Princeton University, Professor Emeritus, inventor of associative neural networks

They called for this moratorium because “AI systems with human-competitive intelligence can pose profound risks to society and humanity”.

The letter asks:

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

They suggest that we step back from “the dangerous race to ever-larger unpredictable black-box models with emergent capabilities”.

They also call for provenance and watermarking systems to help distinguish real content from synthetic.

Given the signatories, this alone should make all of us pause and reflect on how important this issue is to humanity.

But Eliezer Yudkowsky, one of the founders of the field of aligning Artificial General Intelligence (he’s been working on it since 2001), says even that isn’t enough.

His outlook is dark, and the fact that he’s one of the foremost experts in the field makes it even darker:

the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

[…]

Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

[…]

Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.

[…]

If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

[…]

None of this danger depends on whether or not AIs are or can be conscious

[…]

we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.

[…]

trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead

[…]

We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.

[…]

Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going.

[…]

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs.

[…]

No exceptions for governments and militaries

[…]

nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

He’s not proposing nuclear first strikes; rather, he’s saying that running some risk of a nuclear exchange would be preferable to everyone being killed by AI.

I’m now curious whether he can explain, in a way that a normal person can understand, how he arrived at the conclusions he calls obvious.

