
Are "Superintelligent" Computers an Existential Threat to Our Future?


Ramsay Gimp


Jo,

But isn't the point of Computationalism that all consciousness is information, and if the information is transferred, consciousness should flow with it? Which leads us back down the rabbit hole of attempting to define sapient consciousness.

Isn't this then a philosophical question? If you invent teleportation and it works the way you think, how do you then decide on "sameness" or consciousness?

I mean, it's likely that you'll know some basic stuff, like there isn't a single soul that dies with the body, but survival or consciousness is still a matter of definition, right? How would some physical fact change that?


Castel,

Isn't everything, to some extent, a "philosophical question"? My point is this: if QT works for complex matter but not for conscious entities (the body moves but arrives dead), doesn't that, at least, suggest there is something to consciousness that is more than informational?


Castel,

Isn't everything, to some extent, a "philosophical question"? My point is this: if QT works for complex matter but not for conscious entities (the body moves but arrives dead), doesn't that, at least, suggest there is something to consciousness that is more than informational?

Ha, that implies that there is something more to life than it seems. Vitalism is due a comeback :)

But, to take your intended meaning: sure. I noted that. I suppose it's my bias: I skip straight to the generally accepted outcome in pop culture and don't think that it'll necessarily change that much.


Regardless of whether teleportation works in transferring consciousness (how could you tell?), computationalism is false because it assumes simulating a thing on a Turing Machine suffices to replicate the internal essence (in this case, consciousness) of that thing.



Posted this a while back in another thread, but I think Lanier makes a good argument for why this isn't plausible.



@Sci, @Ser Scot -



I'm afraid I'm going to repeat myself here. Essentially, if consciousness is not materially determined, it is in effect "possessing" our brain-body. The mechanistic acuity of the individual brain's structure would determine what is commonly referred to as intelligence or I.Q., but is not itself the source of consciousness.



Therefore the possibility of true A.I. would lie not in its ability to simulate intelligence through computation but rather its suitability as a host for consciousness. This, if possible, is the only breakthrough that would allow for true A.I. at any level. Only once this was achieved could we talk about augmenting the mechanistic computational power in a way that would put it at a level categorically above humans.



RE: teleportation-



If we accept that consciousness is a non-local phenomenon, then it shouldn't matter to consciousness where the body or substrate is. So consciousness should recognize its "host" whether it suddenly re-appears on the other side of the universe or not. Consciousness is already everywhere and the host is just a temporary mechanistic subdivision of it.



I accept the possibility of non-local consciousness, but it really leaves us in the same place as before AFAICTell.



Any of the beliefs of the computer program would be reducible to machine code, which could then be re-translated upward into a completely different set of beliefs. There's no intrinsic meaning in a program.



The only way I can see mind uploading being a real possibility is if consciousness is somehow dependent on certain quantum vibrations like Penrose & Hameroff have suggested. Hameroff noted that a physical lattice might be able to carry the vibrations from the brain to an artificial structure, which would then ensure continuity of consciousness.



While farfetched (IMO at least) this does ensure continuity of something physical rather than confusing simulation with actuality.



Any of the beliefs of the computer program would be reducible to machine code, which could then be re-translated upward into a completely different set of beliefs. There's no intrinsic meaning in a program.

I'm not sure what you mean here. Is there an intrinsic meaning to a network of neurons and synapses? Not when they are dead, at least (that is, not imbued with consciousness). The question is, could a non-local quantum field (if that's what consciousness is) interact with programs? Binary digits are mathematical entities, but aren't they also tangible ones, embodied and manipulated as voltages in electronic circuits?

ETA: Are you saying with the above that software couldn't interact with a quantum field but a brain could because it is essentially hardware?

The only way I can see mind uploading being a real possibility is if consciousness is somehow dependent on certain quantum vibrations like Penrose & Hameroff have suggested. Hameroff noted that a physical lattice might be able to carry the vibrations from the brain to an artificial structure, which would then ensure continuity of consciousness.

Maybe consciousness-compatible-computing would require a different, perhaps more cumbersome hardware design to concretize information (though how concrete are we talking about when we're talking about electrons?).

Perhaps in order for something to be conscious it has to be living; that is, before we could talk about creating Artificial Intelligence we would have to create Artificial Life, which as far as I know hasn't been done.

Is biologically or cybernetically augmented intelligence a more plausible route to super-intelligence than "artificial" or non-biological intelligence?


WS,

Can we define "life"? Hasn't there been debate about whether or not viruses (strings of RNA hijacking more complex cells to reproduce themselves) are actually "alive"?

A virus has certain features of life but not others...if you accept a virus as life you might start looking at some types of crystal formations...

Let's say we arbitrarily put the threshold at a living cell. No one has been able to change non-living materials into a living cell, correct?

Though if you accept the Idealist model of the universe, that consciousness is the ground of being, then you are accepting a conscious universe wherein "inanimate" objects have some level of consciousness as well (and yes, I realize this is too much for most people to swallow.) But if consciousness does not come from the brain, then it is just partially inhabiting the brain. So how the hell do we know what kinds of things it can and can't inhabit? If consciousness is an all-pervasive quantum field, why would it not interact with the electrons constituting binary computer language? Is the simulation created by a machine language fundamentally different than the simulation of experience created by the brain, or is this difference entirely due to the non-presence and presence, respectively, of externally arising consciousness?

Maybe "life" as we currently define it is only a metaphor for what a true A.I. would be. I would think any entity imbued with consciousness and intelligence would be alive regardless of how it were physically constituted.


@WS:



My understanding of Idealism was that not everything possesses awareness, but rather everything is in Mind, which is thus the ontological primitive.



The Idealists who I've talked to about it compare it to a lucid dream.



WS,

I think that's an excellent question and one that is next to impossible to answer. I'm thinking on it and all I can come up with to distinguish consciousness in organic organisms and computers is some degree of independence of action. But that's an external not an internal definition for what constitutes a living consciousness.

Does a human without the capacity to interact with their environment, but with a fully functioning brain, possess consciousness? I would say yes, but under my definition, the answer would be no. This is a really thorny issue.

An AI would need, in my opinion, to be able to initiate new actions beyond its original programming and hardware to be truly defined as "conscious". But if that is the case, is a human infant, under that definition, "conscious"? The infant may possess the possibility of consciousness, but is the child "conscious"?


@WS:

My understanding of Idealism was that not everything possesses awareness, but rather everything is in Mind, which is thus the ontological primitive.

The Idealists who I've talked to about it compare it to a lucid dream.

I'll need you to unpack these concepts a little before I can respond in full. I would agree that the universe is Mind (clearly not the same as the solipsism of the universe being "just in your mind").

I'd still invite you to replace the phrase "everything possesses awareness" with "everything is possessed by awareness" even if you are in disagreement with the idea. And what would awareness or consciousness mean w/r/to something we generally consider inanimate, or without intelligence? Is there such a thing as "inarticulate awareness"? Could a rock or planet be an exponent of a macrocosmic intelligent system in a way analogous to a quark or string or wavicle being part of this conscious quantum field?

Do you mean with the last sentence that the universe is a lucid dream of Mind or am I misinterpreting you?

RE: A.I. - I'll agree with you that mere growth of computational power will not inevitably lead to true A.I., i.e., an entity with intention. It's possible computer scientists are progressing into a blind alley in this respect.

WS,

I think that's an excellent question and one that is next to impossible to answer. I'm thinking on it and all I can come up with to distinguish consciousness in organic organisms and computers is some degree of independence of action. But that's an external not an internal definition for what constitutes a living consciousness.

Does a human without the capacity to interact with their environment, but with a fully functioning brain, possess consciousness? I would say yes, but under my definition, the answer would be no. This is a really thorny issue.

An AI would need, in my opinion, to be able to initiate new actions beyond its original programming and hardware to be truly defined as "conscious". But if that is the case, is a human infant, under that definition, "conscious"? The infant may possess the possibility of consciousness, but is the child "conscious"?

Again, "is possessed by" rather than "possesses" consciousness.

I think the essential word here is intention. Very shortly after birth, an infant has intentions or desires, just little capacity to realize those intentions itself. And I think we can agree that continued increases in computational power alone will still lack intention.


WS,

Indeed, "intention" does seem to be the key here. Can a computer possess "intention"? Can a program create "intention" in a computer? I don't know that it can.

But isn't this where the advocates of the Turing Test claim that simulating "intention" and "intention" have no discernible difference?


But isn't this where the advocates of the Turing Test claim that simulating "intention" and "intention" have no discernible difference?

Yes, it is:

The Intuitional Problem of Consciousness

Where I think this runs into a wall is intentionality rather than intention. Why should a series of 0s and 1s have some intrinsic meaning? Translated on one machine (real or virtual), the uploaded mind might think "I love butterflies", but translated in another way the "thought" would mean "When is Ren & Stimpy on?".
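To make the point concrete: the same raw bit pattern can be read under different decoding schemes and yield entirely different "contents". A toy sketch, with two made-up codebooks standing in for the two machines (the names and encodings here are invented purely for illustration):

```python
# The same raw bytes, decoded under two different (invented) schemes,
# yield different "thoughts" -- the bits themselves carry no intrinsic meaning.
raw = bytes([0, 1, 3])

# Scheme A: each byte indexes a word in one codebook.
codebook_a = ["I", "love", "hate", "butterflies"]
# Scheme B: the very same bytes index a different codebook.
codebook_b = ["when", "is", "on", "Ren & Stimpy"]

reading_a = " ".join(codebook_a[b] for b in raw)
reading_b = " ".join(codebook_b[b] for b in raw)

print(reading_a)  # -> I love butterflies
print(reading_b)  # -> when is Ren & Stimpy
```

Nothing in `raw` privileges one reading over the other; the "meaning" lives entirely in the interpreting scheme, which is the intentionality worry stated above.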

As noted earlier this is not an argument for a soul. A physical reproduction of our minds could potentially possess consciousness, while AFAICTell a computer program cannot.


WS,

Indeed, "intention" does seem to be the key here. Can a computer possess "intention"? Can a program create "intention" in a computer? I don't know that it can.

But isn't this where the advocates of the Turing Test claim that simulating "intention" and "intention" have no discernible difference?

Right. Though I would emphasize, no externally discernible difference. Leaving us in the unprovable zone of subjective experience. So while we could print a list of operations/commands describing the computer's actions that simulated intention, we cannot do the same for ourselves.

Still, the Turingist (?) would say that our inability to map a Rube Goldberg-esque chain of events leading to words/actions that seem to demonstrate intention does not prove that we are not, ourselves, computers of a sort. But refusing to take into account subjective experience just seems silly. Unless for some reason our brain is using a lot of extra processing power to delude us into thinking we have desires/intentions.

Still, I refuse to say that Artificial Intelligence is impossible. I would say, unlike Kurzweil, that it is not inevitable.


Yes, it is:

The Intuitional Problem of Consciousness

Where I think this runs into a wall is intentionality rather than intention. Why should a series of 0s and 1s have some intrinsic meaning? Translated on one machine (real or virtual), the uploaded mind might think "I love butterflies", but translated in another way the "thought" would mean "When is Ren & Stimpy on?".

As noted earlier this is not an argument for a soul. A physical reproduction of our minds could potentially possess consciousness, while AFAICTell a computer program cannot.

When I click on the link I find that I am eliminated by "Empirical assumption 1":

Empirical assumption 1: I assume naturalism. If your objection to computationalism comes from a belief that you have a supernatural soul anchored to your brain, this discussion is simply not for you.

Though I would object to the term "supernatural", my objection is based on the expansiveness of Mind outside of the biological brain. Close enough, glad to find the discussion is not for me. Thank you, person who is so much smarter than me!

RE: Ren & Stimpy vs. Butterflies - Would the most sophisticated MRI reveal a difference in the brain between these two thoughts? Even if we were to stipulate a future scan (nanobots with scanners permeating every cell of the brain) we would find that there is not any universal pattern of synapses firing that always equals "When is Ren & Stimpy on?"

If the map from pattern to meaning changes between individual brains, or over time, isn't that the same problem? In other words, neural connections also lack intrinsic meaning, do they not?


Empirical Assumption two:

Didn't Turing and Gödel prove that this assumption is false?

No idea, but Heisenberg did. And our understanding of chaotic systems disproves it even more.
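For context on the Turing reference: what Turing actually proved is the undecidability of the halting problem, via a diagonal argument. A minimal Python sketch of that argument, where `halts` and `make_paradox` are names I've invented for the hypothetical oracle and the self-defeating program built from it:

```python
# Sketch of Turing's diagonal argument. Suppose a hypothetical total
# function halts(f, x) that correctly predicts whether f(x) terminates.

def make_paradox(halts):
    def paradox(f):
        if halts(f, f):      # oracle predicts f(f) halts...
            while True:      # ...so do the opposite: loop forever
                pass
        return None          # oracle predicts f(f) loops, so halt
    return paradox

# No oracle survives being fed paradox itself. For example, an oracle
# that always answers "does not halt" is immediately refuted:
fake_halts = lambda f, x: False
paradox = make_paradox(fake_halts)
print(paradox(paradox) is None)  # -> True: it halted, contradicting the oracle
```

Either prediction the oracle makes about `paradox(paradox)` comes out wrong, so no such `halts` function can exist. Whether that bears on the linked article's "Empirical Assumption two" depends on what that assumption actually says, which isn't quoted here.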


Archived

This topic is now archived and is closed to further replies.
