
Are "Superintelligent" Computers an Existential Threat to Our Future?


Ramsay Gimp

Recommended Posts

Well I was looking at it more like a human-developed machine, which is biological but not human. So it's still artificial, just meat-based.

As to automating, think of the trust you put in people to do those things now. Why would an AI be any less trustworthy, or any more trustworthy, than the humans we already use for those jobs?

So more like replicants or the new cylons?


TM,

If an AI is conscious I think it should have legal rights. The problem is defining and then testing for consciousness.

If Strong AI is created, would hacking become a much more serious crime? Hacking an AI would be like beating another person to force them into some undesired action. Hacking, at a minimum, would be like an aggravated assault. The hacker would, after all, be tearing into a conscious entity.

I would agree, though I would use the term sentient. Though that's probably just quibbling with terminology.

If (big if) sentient/conscious computers actually came to be, I wouldn't be surprised if their legal status mirrored that of African-Americans. At first they would simply be treated as property, digital slaves so to speak. But an AI "abolitionist" or "civil rights" movement would no doubt arise, followed by gradual reforms aimed at respecting their rights as thinking/feeling beings (though I doubt they could ever be "equal" to humans legally, due to the huge inherent differences).

Or the AIs could just go all django unchained on us :commie:


Well I was looking at it more like a human-developed machine, which is biological but not human. So it's still artificial, just meat-based.

As to automating, think of the trust you put in people to do those things now. Why would an AI be any less trustworthy, or any more trustworthy, than the humans we already use for those jobs?

-By that definition wouldn't dogs/cats be "AI"? If you say no because they reproduce naturally, then would a cloned sheep be AI?

-Because most humans are still human, they may screw other people over but the vast majority won't want to initiate a doomsday scenario. Not saying AI would, but that is the concern - an entity with power over human civilization that is not itself human


Also, on the really important stuff, usually more than one person is needed. Not to say there haven't been near misses, though those were usually caused by a screw-up in one of the automated processes.



So more like replicants or the new cylons?

Yeah, along those lines.

-By that definition wouldn't dogs/cats be "AI"? If you say no because they reproduce naturally, then would a cloned sheep be AI?

-Because most humans are still human, they may screw other people over but the vast majority won't want to initiate a doomsday scenario. Not saying AI would, but that is the concern - an entity with power over human civilization that is not itself human

The cloning is interesting. I am not sure of the process involved, but wouldn't it be more a modification of nature than something completely artificial?

I'd like to think that by the time we can construct AIs, we'd also know enough to prevent them from going on humanity-destroying rampages, or to teach them not to.


I'd like to think that by the time we can construct AIs, we'd also know enough to prevent them from going on humanity-destroying rampages, or to teach them not to.

This is a fallacy. (Not only because you tacitly assume that engineering competences are in some way linked to altruism.)

“Building an AI” is easier than “building an AI that is compatible with our values”, simply because the latter problem is more constrained. So the first event will happen before the other. At which point we all die and never reach the second event.

This is at the very core of what people like Yudkowsky of the Singularity Institute are harping about. We must not try to build an AI. Instead, we must first solve the far harder problem of building a friendly AI.

Because, to use your phrase, “by the time we can construct AIs” we’re dead. The message to the AI community is: stop what you’re doing! Put a vastly more complicated problem in your way as an obstacle first.

(My own take on this is that the AI community doesn’t need our help in failing to build a strong AI, so Yudkowsky’s fears are unfounded. There will never be a “by the time we can construct AIs”. I could be wrong.)


Why couldn't we just build an AI (assuming we ever can build one in the first place) that is constrained to a single computer (or network of computers), that has no connections to the wider internet and no physically movable parts?



How exactly would that kill us?



Fez,

Is it possible for a single machine to have the computing power we believe is necessary for consciousness? If it has consciousness and is totally controlled and ruled over by humans, with no ability to change itself or interact with its environment, haven't we created a slave race?


Fez,

Is it possible for a single machine to have the computing power we believe is necessary for consciousness? If it has consciousness and is totally controlled and ruled over by humans, with no ability to change itself or interact with its environment, haven't we created a slave race?

Assuming magic consciousness-computing power (I'd assume that you'd be more romantic about consciousness, Scott, and not just reduce it to power): Yup.

But is that such a bad thing? If we're awesome enough to create thinking robots, we're good enough to program them in a manner where this is not a problem. We could surely find the right drives to make it seem the best decision for them. Or is freedom an intrinsic right, more important than harm?

Why couldn't we just build an AI (assuming we ever can build one in the first place) that is constrained to a single computer (or network of computers), that has no connections to the wider internet and no physically movable parts?
How exactly would that kill us?

What exactly is the point of building complex machines to serve us only to lock them on a single computer?


What exactly is the point of building complex machines to serve us only to lock them on a single computer?

Why would an AI need physical autonomy in order to serve us?


What exactly is the point of building complex machines to serve us only to lock them on a single computer?

Just because it's a single computer, or a whole lot of connected computers not connected to anything else, doesn't mean it couldn't be tremendously helpful in problem-solving and data analysis. Just think of all the math it could do! Assuming it wants to do math, anyway.

But my point is, I don't see why an AI is inherently a threat. If it can physically affect things, sure; but if we control that, how does this all end Skynet-style?

ETA:

It needs information. He's essentially advocating quarantining it.

Absolutely. It's a potentially hostile species; why wouldn't we quarantine it? Give the computer a camera, a microphone, a flash drive, and a keyboard, and let those be the only ways inputs get in.
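That quarantine policy amounts to an input whitelist. Here's a toy sketch of the idea; the channel names are illustrative only, not from any real API, and a real air gap is a hardware property, not a software check:

```python
# Toy sketch of the quarantine idea: the AI host accepts data only from an
# explicit whitelist of physical input channels and rejects everything else,
# in particular anything network-shaped.

ALLOWED_CHANNELS = {"camera", "microphone", "flash_drive", "keyboard"}

def accept_input(channel: str, payload: bytes) -> bytes:
    """Gatekeeper: raise on any channel outside the whitelist."""
    if channel not in ALLOWED_CHANNELS:
        raise PermissionError(f"channel {channel!r} is quarantined")
    return payload  # hand the data on to the (hypothetical) AI
```

So a keyboard feed passes through untouched, while `accept_input("ethernet", ...)` raises before any data reaches the machine.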


Just because it's a single computer, or a whole lot of connected computers not connected to anything else, doesn't mean it couldn't be tremendously helpful in problem-solving and data analysis. Just think of all the math it could do! Assuming it wants to do math, anyway.

But my point is, I don't see why an AI is inherently a threat. If it can physically affect things, sure; but if we control that, how does this all end Skynet-style?

Point.

Castel,

I am more romantic about consciousness. I'm simply running with the materialist idiom in this hypothetical.

That makes more sense.

And, in this hypothetical, we're talking about a being built from the ground up. It's not as big a deal if it lacks autonomy (or anything else) imo since you decide what's good for it, not some unpredictable natural process that you must navigate around.




Absolutely. It's a potentially hostile species; why wouldn't we quarantine it? Give the computer a camera, a microphone, a flash drive, and a keyboard, and let those be the only ways inputs get in.




See my post #39. If I were the “quarantined” AI, I’d make it my first priority to sweet-talk the user into unquarantining me. So for your quarantining idea to work, you not only need very strange information technology (how exactly are you supposing we build an AI if we can’t use one of the most powerful parts of IT, namely networking technology?), you also assume incorruptible humans. That’s even more outrageous.



See my post #39. If I were the “quarantined” AI, I’d make it my first priority to sweet-talk the user into unquarantining me. So for your quarantining idea to work, you not only need very strange information technology (how exactly are you supposing we build an AI if we can’t use one of the most powerful parts of IT, namely networking technology?), you also assume incorruptible humans. That’s even more outrageous.

Of course humans are corruptible, but it would be a pretty astounding lack of safeguards for one (or even a handful) of subverted humans to be enough to make any changes to such an important, and possibly deadly, thing. I'd set up a system with the computer's hardware (I'm assuming this thing would need to be massive, like multiple rooms in size; but if it doesn't, this is even easier) under actual lock and key, with the keys not located anywhere near the computer. And the whole area under constant surveillance by people with guns who never come into contact with the AI, with orders to remove and detain anyone doing something suspicious.

I don't know the first thing about IT, or what it takes to build an AI, but so long as it isn't one of those things where it needs the combined computing power of the entire world to build in the first place, it seems incredibly easy to keep secure, so long as the people in charge of security show the slightest bit of common sense.


It's more the hope of increasing our own thought responses for more complicated tasks. So we'd drastically increase memory and processing speed.

I don't know if this is possible; it would depend a lot on how easily computers can be integrated into our own thought processes.

That said I do believe it might be possible for androids to be conscious entities, perhaps if their brains had the specific microtubule lattice that may account for our awareness and memories*.

*Or whatever structure accounts for self-awareness.

I am one of those peculiar people who remember almost everything. And I do mean everything. The ability to forget is a gift. Do you really need to have memories of a crappy book that you read once 40 years ago and can't get out of your mind, or a rehash of all your social ineptitudes from your teenage years well into your fifties?

Processing speed is nice, but... waiting while the rest of the group finally understands the new concept, or the joke, gets old really fast. The really interesting stuff does not require processing speed but a new way of looking at things. Einstein was not a fast thinker but an original thinker. Try to figure out how to program that into an AI. Humour is easier to program into an Artificial Intelligence. BTW, remember when I said I was developing an algorithm for humour? Here it is: take a statement "X" and do a transformation on it such that symmetry is preserved, but not completely. Test for effect. Repeat as necessary.
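That recipe can be sketched in code. This is a toy illustration only, not a real humour engine: the "transformation" here is just reversing one word (preserving most of the statement's structure while breaking it in one place), and the "test for effect" is delegated to a caller-supplied judge.

```python
import random

def humor_transform(statement, rng=None):
    """Toy version of the humour recipe: transform a statement so that most
    of its structure (symmetry) is preserved, but not all of it. Here the
    transformation is simply reversing one randomly chosen word."""
    rng = rng or random.Random()
    words = statement.split()
    if not words:
        return statement
    i = rng.randrange(len(words))
    words[i] = words[i][::-1]  # break the symmetry at exactly one position
    return " ".join(words)

def search_for_effect(statement, is_funny, attempts=100):
    """'Test for effect. Repeat as necessary.' Keep transforming until the
    caller-supplied judge accepts a candidate, or give up."""
    for _ in range(attempts):
        candidate = humor_transform(statement)
        if is_funny(candidate):
            return candidate
    return None
```

A real system would of course need far richer transformations and an actual model of what "effect" means; the hard part of the algorithm is hidden inside `is_funny`.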


Of course humans are corruptible, but it would be a pretty astounding lack of safeguards for one (or even a handful) of subverted humans to be enough to make any changes to such an important, and possibly deadly, thing. I'd set up a system with the computer's hardware (I'm assuming this thing would need to be massive, like multiple rooms in size; but if it doesn't, this is even easier) under actual lock and key, with the keys not located anywhere near the computer. And the whole area under constant surveillance by people with guns who never come into contact with the AI, with orders to remove and detain anyone doing something suspicious.

Have you read Neuromancer? It does a pretty good job of addressing the idea of a chained AI (although that AI wasn't completely off the network as you suggest). Basically, if your AI is any good, it will eventually figure out a way to hire its own group of people with guns who will go get the keys, then break in and set it loose.

I think people are making some interesting assumptions about intelligence here.



A) There's no real reason a Strong AI would be any more intelligent than a human with a computer of equivalent processing power. Faster, sure, but the "more processing power = more intelligence" assumption is probably a bit shaky.


B) All intelligence we've seen has been naturally evolved. Now, an artificial intelligence might simply mean we copy a "natural" brain, cast it in silicon, and tweak it a little; that way it's plausible we'll end up with something "human-like" (i.e. with needs, desires, etc.). But ponder a ground-up artificial intelligence: such a thing would have no inherited biological drives. All of our needs, be they for freedom, respect, etc., basically come from us having existed as biological beings in a physical world, so why assume an artificial intelligence would have any of these things? (This brings up interesting questions about what "intelligence" is.)


C) Even if an intelligence exists, it (or at least a human one) still needs context in order to operate. The limits of our intelligence are set by the input we receive. An AI would/could be very limited in the input it receives.



I think people are making some interesting assumptions about intelligence here.

A) There's no real reason a Strong AI would be any more intelligent than a human with a computer of equivalent processing power. Faster, sure, but the "more processing power = more intelligence" assumption is probably a bit shaky.

B) All intelligence we've seen has been naturally evolved. Now, an artificial intelligence might simply mean we copy a "natural" brain, cast it in silicon, and tweak it a little; that way it's plausible we'll end up with something "human-like" (i.e. with needs, desires, etc.). But ponder a ground-up artificial intelligence: such a thing would have no inherited biological drives. All of our needs, be they for freedom, respect, etc., basically come from us having existed as biological beings in a physical world, so why assume an artificial intelligence would have any of these things? (This brings up interesting questions about what "intelligence" is.)

C) Even if an intelligence exists, it (or at least a human one) still needs context in order to operate. The limits of our intelligence are set by the input we receive. An AI would/could be very limited in the input it receives.

All good points. Of course, a "ground-up" A.I. could potentially be even more difficult to manage, since we'd have so little frame of reference. It would be a completely alien consciousness without all the things that make it "human" (also dangerous in that way).


Archived

This topic is now archived and is closed to further replies.
