
Are "Superintelligent" Computers an Existential Threat to Our Future?


Ramsay Gimp


The Atlantic seems to think so...



http://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/




"Longer term—and this might mean 10 years, it might mean 50 or 100 years, depending on who you ask—when computers can do everything we can do," Tegmark said, “after that they will probably very rapidly get vastly better than us at everything, and we’ll face this question we talked about in the Huffington Post article: whether there’s really a place for us after that, or not.” I imagined glances from nearby museum-goers.


"This is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”



[...]

“Not necessarily robots,” Wilczek told me, “although robot warriors could be a big issue, too. It could just be superintelligence that’s in a cloud. It doesn’t have to be embodied in the usual sense.”

Bostrom has said it’s important not to anthropomorphize artificial intelligence. It's best to think of it as a primordial force of nature—strong and indifferent. In the case of chess, an A.I. models chess moves, predicts outcomes, and moves accordingly. If winning at chess meant destroying humanity, it might do that. Even if programmers tried to program an A.I. to be benevolent, it could destroy us inadvertently. Andersen’s example in Aeon is that an A.I. designed to try and maximize human happiness might think that flooding your bloodstream with heroin is the best way to do that.
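
To make Bostrom's point concrete, here's a toy sketch (my own illustration, not from the article; the actions and scores are invented) of an agent that simply maximizes a proxy objective, indifferent to what the actions actually mean:

```python
# Toy "indifferent optimizer": pick whichever action scores highest
# under the objective. Nothing here knows what the actions mean to us.
def choose_action(actions, predict_outcome, utility):
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# A crude "happiness" proxy can rank a harmful action highest,
# which is exactly Andersen's heroin example.
proxy_happiness = {"improve healthcare": 0.70, "flood bloodstream with heroin": 0.99}

best = choose_action(list(proxy_happiness), lambda a: a, proxy_happiness.get)
print(best)  # -> flood bloodstream with heroin
```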





The article also name-drops and links to Stephen Hawking and a number of other accomplished scientists who warn about the alleged dangers of smart computers.



I'm admittedly not very knowledgeable about the capabilities of computers or AI at the moment, but this article appealed to the sci-fi lover in me. Is this mostly alarmism, or is there a real cause for worry here?



It's possible, but I wouldn't bet any money on it happening in the near future (mostly because these predictions have been made before). Programming any sort of AI is really, really hard. For example, we use machine learning algorithms (e.g. boosted decision trees and neural networks) to separate signal from noise. They take a long time to train and validate, and in the end they usually wind up something like 15% better than the simple algorithm a human threw together in a tenth of the time just to have a benchmark point. If you look at other attempts at AI (like machines capable of pattern recognition or of walking through a typical human environment), it generally takes years of effort to get the machine to do a single thing at the level of an average human toddler, even though the toddler can do a thousand other things besides.
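
For anyone curious what "training a machine learning algorithm to separate signal from noise" looks like in practice, here is a minimal sketch using scikit-learn; the toy dataset and the single-cut "human benchmark" are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy data: "signal" events sit at slightly higher values than "noise".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 1.0, (5000, 2)),   # signal
               rng.normal(0.0, 1.0, (5000, 2))])  # noise
y = np.array([1] * 5000 + [0] * 5000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The simple algorithm a human might come up with: one cut on one feature.
cut_acc = np.mean((X_test[:, 0] > 0.5).astype(int) == y_test)

# The boosted decision tree: slower to set up, train, and validate,
# and typically only modestly better than the hand-made benchmark.
bdt = GradientBoostingClassifier().fit(X_train, y_train)
bdt_acc = bdt.score(X_test, y_test)

print(f"simple cut: {cut_acc:.3f}   BDT: {bdt_acc:.3f}")
```

Even on a toy problem like this, the gain over the one-line cut is real but modest, which is the point.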



What the article is talking about is not this kind of AI, but rather strong or general AI -- machines that can do all of the things we do (rather than just a single specific thing), understand why they are doing them and eventually come up with something new. Given what we have right now, the advent of strong AI in the near future seems rather unlikely to me.



Surely the really great computers - the guys making the important decisions - are going to be so far beyond us that we simply won't register.

Well, in Lovecraft's universe we don't really register to the Great Old Ones but that doesn't stop them from being malevolent forces. We've both probably stepped on numerous insects that never "registered" to us.

But I think a computer of that great a power and "personality" would surely take some interest in its creators... but maybe I'm anthropomorphizing.

What the article is talking about is not this kind of AI, but rather strong or general AI -- machines that can do all of the things we do (rather than just a single specific thing), understand why they are doing them and eventually come up with something new. Given what we have right now, the advent of strong AI in the near future seems rather unlikely to me.

But isn't just looking at where we are "right now" a bit short-sighted? A great breakthrough could come tomorrow, no? Think of how far computers and information tech have progressed in just 50 years.

What stood out to me was the claim that a self-improving computer would be the "beginning of the end," so to speak. Software that can recognize its own shortcomings and improve on them, gain ever-increasing amounts of knowledge, etc. would soon be smarter than any human or group of humans. That can't be that far off, can it?


I was going to post that we've nothing to worry from AI's taking over, but then I saw who the first reply in this thread came from and a cold chill ran down my spine.

We're fucked.

Is Solo actually some kind of AI that's been programmed by Anarcho-Marxists? :stunned:


As long as you are not a capitalist or a capital letter, I think you should be fine

lowercase letters of the world unite!

Anyway, I was expecting more of a reaction to this piece. Surely Sci or the other contributors to the "Consciousness" threads have something to say?


Developing AI is essential to mankind's survival IMHO.

Care to elaborate? I'd like to hear an opinion that contradicts the doom-and-gloom of the Atlantic piece

I wonder if anyone's working on Halo-style AI. Programming AI directly might not be feasible at this time, so I wonder if a recreation of the human brain might yield better results.

But isn't the human brain itself barely understood? I'm not sure that route would get us there any faster


I was expecting a thread about computers taking all our jobs; I am disappointed.

But why do all these doomsday scenarios involve some monolithic supercomputer? Surely in the future we will be cyborgs walking around naked, showing off all our fancy implants and biological enhancements?


All this is just Malthusian scaremongering. No one actually knows what is going to happen in 100 years, and vague ideas about 'singularities' and whatnot are speculative at best and dubious at worst. I'm sure something new will come along in the interim and we'll be worried about that instead.



And why again is Tegmark talking about this? I'm sure he can find a universe where humanity does survive.



An AI will follow its own motivations towards its own goals.


And they won't be our own.



Why would we invite such a thing, or work to make it happen... unless we were going extinct?



This planet suffers an extinction-level event every 100k years or so, and we're a tad overdue. We are currently pretty far from being able to break out of the sun's gravity well, so having access to an intelligence capable of millions of computations a second to help us reach a habitable planet is pretty key imo. Our time is limited anyway.

I'm a bit drunk and not up for a more elaborate explanation atm.


