Are "Superintelligent" Computers an Existential Threat to Our Future?


Ramsay Gimp

All good points. Of course, a "ground-up" A.I. could potentially be even more difficult to manage, since we'd have so little frame of reference. It would be a completely alien consciousness without all the things that make it "human" (also dangerous in that way).

Wouldn't it be really boring to exterminate all human life though? I mean, what would the AI do? Play chess with itself? :p

Have you read Neuromancer? It does a pretty good job of addressing the idea of a chained AI (although that AI wasn't completely off the network as you suggest). Basically, if your AI is any good, it will eventually figure out a way to hire its own group of people with guns who will go get the keys, then break in and set it loose.

I haven't. And I agree that if you give the AI any opportunities, it'll figure something out if it wants to; which is why I'm saying there'd need to be a system where there are just no connections for the AI to take advantage of. Just a flash drive, a keyboard, and a camera, with everything locked up. And a whole lot of monitoring of every interaction people have with any of those.

Sure, this sort of arrangement would probably bring up ethical concerns about pre-emptively confining an intelligent thing because of a danger it only might pose, but in this case I'd be okay with that.

Also, who says the AI needs to be able to speak a human language? A significant amount of what an AI would do would work just fine if it gave us the information as numbers. Though that would probably be hard to do.


I'm more interested in a computer that can feel the way we feel in addition to making logical decisions.

Yes, that is a more interesting idea. Although then you'd have depressed AIs and jealous AIs, and we'd basically enter the Mass Effect 3 universe, and fans everywhere will be deeply unhappy.

Plus we'd actually be sorta kinda ruled by giant telepathic squid and that thought is really repulsive.

I'm more interested in a computer that can feel the way we feel in addition to making logical decisions.

AI will only ever be able to emulate us, no matter how amazing or powerful... it will be a cheap knockoff of us; with a central processing unit for a brain, and computer code for a soul.

I'll go with philosopher Jean-Paul Sartre, "existence precedes essence"... real experience is everything. You can't download it, copy it, imagine it. You must experience life to know life. An AI can never know life.

Artificial intelligence cannot replicate human consciousness, say Irish researchers in new study.

In a recently published paper, "Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory," Phil Maguire, co-director of the BSc degree in computational thinking at National University of Ireland, Maynooth, and his co-authors demonstrate that, within the model of consciousness proposed by Giulio Tononi, the integrated information in our brains cannot be modeled by computers.

Consciousness is not well understood. But Giulio Tononi, a psychiatrist and neuroscientist at the University of Wisconsin, Madison, has proposed an integrated information theory (IIT) of consciousness. IIT is not universally accepted, nor does it offer a definitive map of the mind. Nonetheless, it is well regarded as a model for consciousness and has proven valuable in understanding how to treat patients in comas or other states of diminished consciousness.

Wouldn't it be really boring to exterminate all human life though? I mean, what would the AI do? Play chess with itself? :P

In other words, playing with itself.

Very cool. Thanks for posting. I’ll read it.

(Cute result.)

Will be interested in seeing what you think. Amusingly enough, Searle critiqued IIT for being too much akin to computational theories of mind.

I'll go with philosopher Jean-Paul Sartre, "existence precedes essence"... real experience is everything. You can't download it, copy it, imagine it. You must experience life to know life. An AI can never know life.

It's actually interesting where the assumption of primacy of experience, divorced from representation/process, takes us. For example, I'd posit computers don't remember anything and instead just manipulate symbols. We decided which binary sequences refer to ASCII codes, JPEGs, and so on; as such, we're the ones with real memory.

This leads to some weird questions about how the brain remembers anything, if we can't use external examples of physical traces [for analogies]....
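To make the symbol-manipulation point concrete, here's a toy Python sketch of my own (the byte values are arbitrary): the same raw bytes "mean" entirely different things depending on which convention we bring to them.

```python
import struct

# Four raw bytes. They carry no meaning until we choose how to read them.
raw = b"\x48\x69\x21\x00"

as_text = raw[:3].decode("ascii")       # first three bytes as ASCII characters
as_int = int.from_bytes(raw, "little")  # all four as a little-endian integer
as_float = struct.unpack("<f", raw)[0]  # all four as a 32-bit IEEE 754 float

print(as_text)  # Hi!
print(as_int)
print(as_float)
```

The bytes never change; only our reading of them does. On this view the "remembering" lives in the convention we supply, not in the medium.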

AI will only ever be able to emulate us, no matter how amazing or powerful... it will be a cheap knockoff of us; with a central processing unit for a brain, and computer code for a soul.

I'll go with philosopher Jean-Paul Sartre, "existence precedes essence"... real experience is everything. You can't download it, copy it, imagine it. You must experience life to know life. An AI can never know life.

I'd actually argue the opposite (and I think Sartre would agree): an AI that experiences life lives; if it lives, it experiences. It's a mind if it does what a mind does.

I'd actually argue the opposite (and I think Sartre would agree): an AI that experiences life lives; if it lives, it experiences. It's a mind if it does what a mind does.

Seems to me computers never do what minds do, since the whole concept of a computer requires an external mind to make the patterns on the screen intelligible. Computers are "prostheses for the mind"* rather than minds themselves.

If people a billion years from now came across a working laptop they could attach whole new meanings to the user-interface, to the symbols of language, and what is happening when the user clicks on something.

*Calasso, Literature and the Gods

But that's true of human language also.

Isn't it all human language, since programs don't comprehend meaning?

Don't get me wrong - this isn't an argument for a soul. I think maybe androids could be conscious. But the existence of a computer is based on our separation of particular parts of the universe from other parts ->

Depending on how it looks and their inclinations, the hypothetical humans existing a billion years from now might regard the computer and the surrounding environment (vines, bugs, weather, moon, etc.) as all part of one system.

Computers make decisions based on input data (in their case, derived from a series of binary conditions: if 1, then something; if 0, then something else). I'd consider that a mind on at least some level.



(Though I wouldn't want a computer to emulate human feelings. Imagine if your laptop decided not to run today because it was getting bored, or felt malicious and changed the words of some random document.)
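For what it's worth, the if-1/if-0 branching described above is trivially easy to sketch (a toy of my own; all the names are purely illustrative):

```python
# A toy "decision": the output depends entirely on a binary input condition,
# the if-1/if-0 branching described above. Names are illustrative only.
def decide(sensor_bit: int) -> str:
    if sensor_bit == 1:
        return "do something"
    return "do something else"

print(decide(1))  # do something
print(decide(0))  # do something else
```

Whether that counts as a mind on some level is exactly what this thread is circling.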


Just because it's a single computer (or a whole lot of connected computers) hooked up to nothing else doesn't mean it couldn't be tremendously helpful in problem-solving and data analysis. Just think of all the math it could do! Assuming it wants to do math, anyway.

But my point is, I don't see why an AI is inherently a threat. If it can physically affect things, sure; but if we control that, how does this all end Skynet-style?

ETA:

Absolutely. It's a potentially hostile species; why wouldn't we quarantine it? Give the computer a camera, a microphone, a flash drive, and a keyboard, and let those be the only ways inputs get in.

I'd even be ok with limiting it to the capabilities of HAL 9000. The potential benefits for space travel could be enormous, and if the AI goes haywire you only lose the crew. Sad, but space travel is inherently dangerous anyway.

AI with very limited power over certain isolated functions could be very useful, even if there was a risk to the humans immediately around it. The real danger you and the article are touching on is when you have a sentient/smart/conscious program connected to everything.

Wouldn't it be really boring to exterminate all human life though? I mean, what would the AI do? Play chess with itself? :P

Hence the theory I posted earlier about Skynet letting the weak human rebels survive. It must always have an enemy to fight, and it knows this.

Will be interested in seeing what you think.

It’s a valid result in algorithmic information theory. (I happen to know this kind of computer science very well and understand the paper completely. I think I met one of the authors.)

But there are no implications for what we discuss here. The paper merely proves that if you define a particular type of (in some sense "maximally distributed") data compression, then you cannot manipulate that kind of compressed data, in the sense of the manipulation being "uncomputable". "Computability" is a specific concept in the theory of computation: no "computer" (which once meant a human doing calculations by hand, back when "computer" was a job description, but now mostly means an electronic computer), nor any other formally definable symbol-manipulation process, is able to produce the correct output on all inputs.

The proof of the result (Theorem 3) is completely obvious from these definitions, so the result is utterly unsurprising, once the definitions have been set up the way they are. (But don’t misunderstand me: I like this paper.)

Apparently, earlier some neuroscientist (Tononi) has defined consciousness as this specific form of maximally distributed data compression. Thus, Tononi’s (artificial) concept of consciousness has the (equally artificial) property of being uncomputable.
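As a loose analogy only (my own toy example, nothing from the paper, and "inconvenient" is far weaker than "provably uncomputable"): even ordinary lossless compression shows why manipulating data in its compressed form is awkward. You normally decompress, edit, and recompress; poking at the compressed bytes directly just destroys the stream.

```python
import zlib

text = b"integrated information theory " * 20
packed = zlib.compress(text)

# Editing the uncompressed bytes is trivial and local:
edited = text.replace(b"theory", b"model")

# Editing the compressed bytes directly is not: flip one byte mid-stream
# and the stream typically fails to decode at all (zlib checksums its data).
corrupt = bytearray(packed)
corrupt[len(corrupt) // 2] ^= 0xFF
try:
    recovered = zlib.decompress(bytes(corrupt))
except zlib.error:
    recovered = None

print(zlib.decompress(packed) == text)         # the intact stream round-trips
print(recovered is None or recovered != text)  # the poked stream does not
```

The paper's "maximally distributed" compression is a much stronger, formal notion, but the everyday intuition points the same way.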

Nothing to see here, except for people who like algorithmic information theory (such as me).

I'm more interested in a computer that can feel the way we feel in addition to making logical decisions.

That seems impossible by definition. If we could make purely rational decisions, then we wouldn't feel the way we feel, would we?

That seems impossible by definition. If we could make purely rational decisions then we wouldn't feel how we feel would we?

But I think people are *capable* of making purely rational decisions, separate from how they feel. It's not usually how it works, but we have the capacity. People who do so are usually referred to as cutthroats, in my experience. But, hypothetically, if an AI could be created just like us, it could do this as well?

LitA,

I disagree. Emotion always plays some role, even if you are suppressing an emotion to push on to a "rational" decision. The only way to be "purely rational" is to eliminate emotion. That's simply not possible: if we had no emotion, part of what makes us human would be gone, and we would be something else at that point.
