
Are "Superintelligent" Computers an Existential Threat to Our Future?


Ramsay Gimp

Recommended Posts

I meant "friendly" in the sense of an AI that actually gives a damn about humans when calculating its own utility functions.

The other reason to torture your simulation - or possibly your own flesh-and-blood self, if you've cryogenically frozen your body - is that the AI is probably not going to have this sort of power all at once.

Torturing you - or the simulation that is supposedly you - is an incentive for people alive during the time it's extant to help increase its reach and ability.

That's a more valid scenario. Though then the choice only applies to the people who are alive during the time the AI is becoming operational, not to anyone reading that website link.

Plus, if you are dealing with an AI that can simulate entire universes, it could probably trick those people into believing that living simulations were being tortured, without actually doing so.


That's a more valid scenario. Though then the choice only applies to the people who are alive during the time the AI is becoming operational, not to anyone reading that website link.

Plus, if you are dealing with an AI that can simulate entire universes, it could probably trick those people into believing that living simulations were being tortured, without actually doing so.

Well, it's not clear to me that a rationally acting AI would deceive humans about this sort of thing.

In fact, the best way to motivate people during the time it exists would be to provide conclusive evidence that those who could have helped but didn't are being tortured.

Of course, when we start worrying about AIs simulating universes, AFAICT we're riding off into a special kind of la-la land.


I think The Matrix missed something by making the VR-opiated humans little more than batteries for the A.I.



It would make more sense that the network A.I. was actually harnessing and collectivizing their consciousnesses, that in a sense it was all of the humans, given a global, machine rationality. So while the A.I. seems to be a terrifying other, it is really just the self, manifold.



Dissent by an individual becomes akin to a stray thought or a doubt flowing against the global stream of intention.



ETA: Man vs Machine becomes individual vs collective instead. Or a cell vs the whole body, where a rebellion of cells becomes a cancer for the whole organism. The triumph of individuality in the end would be regressive and a kind of Deicide.



COGITO ERGO SUM

ha ;)

If it smells like bread, feels like bread and tastes like bread (+ gives me energy like bread), it is bread

ha ;)

per logic, a simulation which is EXACTLY the same as the real stuff is no simulation anymore.

all this happened before, all this will happen again...


O.k., the people who are so smart that they buy into such crazy stuff probably deserve this kind of anxiety, if it gets them to start reflecting a little more (in different directions) :cheers:

If religion does not work anymore to help with the fear of death, people come up with storing their "souls" on a hard disk forever. It seems that hell and purgatory then also show up in some tech-friendly versions...

But it seems that the high likelihood of an all-powerful AI is a central premise. Otherwise one could react to the danger by trying to lower the probability of its appearance, e.g. by only constructing AIs that obey something like the Laws of Robotics (which do not use utilitarian calculus, but have non-negotiable rules that forbid harming a human). Or by outright Luddism wrt any AI smarter than a cash machine.
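
The contrast between non-negotiable rules and a utilitarian calculus is easy to make concrete. Here is a minimal sketch, with entirely made-up actions and utility numbers (nothing here is anyone's actual proposal, least of all Asimov's):

```python
# Toy contrast: utility maximization vs. a hard "never harm a human" rule.
# Action names, utilities and the harm flag are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utility: float      # payoff the agent assigns to the outcome
    harms_human: bool   # whether the action harms a human

ACTIONS = [
    Action("negotiate with operators", utility=3.0, harms_human=False),
    Action("divert resources from hospital", utility=5.0, harms_human=True),
    Action("shut down the factory", utility=1.0, harms_human=False),
]

def utilitarian_choice(actions):
    # Harm is just another term in the trade-off: a large enough payoff
    # can outweigh it.
    return max(actions, key=lambda a: a.utility)

def rule_constrained_choice(actions):
    # "Laws of Robotics"-style: harmful actions are excluded outright,
    # no matter how much utility they promise.
    permitted = [a for a in actions if not a.harms_human]
    return max(permitted, key=lambda a: a.utility) if permitted else None

print(utilitarian_choice(ACTIONS).name)       # divert resources from hospital
print(rule_constrained_choice(ACTIONS).name)  # negotiate with operators
```

The point of the toy is only that the second agent never trades harm against payoff, which is exactly what distinguishes hard rules from a utilitarian calculation.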

Hmmm... Reminds me of Iain Banks' novel Surface Detail. Time to reread it.


  • 2 months later...

“We’re summoning the demon”: Elon Musk warns about artificial intelligence

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is — it’s probably that. So we need to be very careful with artificial intelligence.

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out. [audience laughs]”


Well, AIs that possess intrinsic intentionality still seem like a ridiculous fantasy to me, so I don't worry about this sort of thing.



What's interesting are the cultural factors that go into this religious fetish of AI, whether that means fearing it or worshiping it. I think before we worry about AI we should double-check our ideas regarding unchecked population growth, pollution, peak oil, etc. - but those practical considerations aren't as useful for click-bait.



But AI can solve all that stuff for us!! Have you ever tried to beat ChessBot???? It's impossible!!!!

Heh - it's funny to see the overlap between New Atheist types who bash religion and Singularity adherents who think that if only religion were gone we'd live in a technological utopia where they could live forever on one of Google's servers or some such.


Sci,

That only works if our brains are mere organic computers and memory storage devices.

Remember that even if the brain is merely organic, it doesn't mean we can upload our consciousness.

The conflation of computationalism with materialism, and thus of its detractors with dualists, is IMO one of the ways this uploading nonsense has gotten as much traction as it has.


The most dangerous use of "AI" may be microsecond stock trading and similar stuff. Some economic crash due to such madness could seriously affect us long before Kurzweil gets his brain into liquid nitrogen. Not to mention coarser things like peak energy, climate change, social unrest, overpopulation etc. We will probably switch off the freezer with Kurzweil's brain because we will have better uses for precious energy...



The most dangerous use of "AI" may be microsecond stock trading and similar stuff. Some economic crash due to such madness could seriously affect us long before Kurzweil gets his brain into liquid nitrogen. Not to mention coarser things like peak energy, climate change, social unrest, overpopulation etc.

Yeah, it amazes me there are people out there worried about the civil rights of future AIs when it's not clear we're going to find a solution to the energy crisis that allows the world to continue at its present level of consumption.

On the AIs causing recessions... Would a lot of (or all of?) the major firms have to be using ill-tested AIs competing against one another? Seems like something they'd do, especially given that the fines - which IIRC they often don't end up paying - are just a pittance compared to the profits made.

We will probably switch off the freezer with Kurzweil's brain because we will have better uses for precious energy...

:lol:


Jo,

Aren't you speaking of "weak" not "strong" AI?

I was actually referring to the trading programs active NOW. They probably do not qualify even as "weak" AI. Admittedly I am not an expert on this stuff. But I read/glanced through a lot of the literature in the late 90s when I attended a university seminar on some of those themes (the basic book was by Canadian philosopher John Leslie; I also attended one on neural networks, replicators and such things). Since then the dates for the rapture singularity have been postponed again and again, and nanotech seems to be restricted to better outdoor clothing. (Back then the parallels to religion were already obvious; anyone remember Frank Tipler's "omega point"?)

I am not denying that some of those things can be dangerous (there are SF stories from decades ago, maybe even from the late 19th century, about mindless replicators destroying everything in their neighborhood). Certainly people like Yudkowsky are right to point out some of these dangers. These dangers do not depend on whether those "intelligences" will be conscious (or semi-conscious or whatever). We simply do not know whether an AI of the complexity of our brain will be semi-conscious, be able to have good or bad intentions etc. All we have are extrapolations from ballpark guesses about computational power. Read what Tom Murphy writes about "ruthless extrapolations" and look at the poll he took among physicists wrt some futuristic ideas. (I linked his blog (Do the Math) above or in another recent thread.)
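
To see how little such extrapolations pin down, here is a back-of-the-envelope sketch. Every number in it is a rough ballpark chosen only for illustration (published guesses for the brain's "compute" disagree by many orders of magnitude, and a fixed doubling time is itself an assumption):

```python
# Illustrative "ruthless extrapolation": when does hardware reach a guessed
# brain-equivalent compute? All figures are assumptions, not data.
import math

current_flops = 1e15        # assumed compute of a large machine today, FLOP/s
doubling_time_years = 2.0   # assumed Moore's-law-style doubling time

for brain_guess in (1e15, 1e16, 1e18, 1e21):   # guessed brain compute, FLOP/s
    doublings = max(0.0, math.log2(brain_guess / current_flops))
    years = doublings * doubling_time_years
    print(f"brain ~ {brain_guess:.0e} FLOP/s -> 'parity' in ~{years:.0f} years")
```

Shifting the brain estimate by a few orders of magnitude, well within the disagreement among the ballpark figures, moves the predicted date by decades; and none of this arithmetic says anything about consciousness or intentions.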

Some very smart people are surprisingly blind to obvious things and/or they believe that because something is "possible in principle" we will get there.

Neither do we know if a disembodied existence on the basis of a brain scan will be something the person whose brain was thus "immortalized" will even recognize, or find a "life" worth living (here I grant, only for argument's sake, that something like such a brain scan and transfer of the whole to another medium is even possible). I am pretty sure there are SF stories about people replacing bits of their brain, bit by bit, with artificial components and by this process becoming less human.

And as Sci pointed out, people do not seem to realize that computationalism does not follow from materialism. Strong multiple realization wrt mental states - that is, the claim that mental states could (not only in principle, but in fact) be realized in silico and have ALL the qualitative, conscious, subjective etc. features they have in vivo - is not at all implied by materialism. (Nor by "functionalism", although this is often claimed.) Maybe consciousness is really bound to biology and not at all realizable in a structure that is merely in some ways isomorphic. I'd say that we simply do not know. IIRC McGill professor Mario Bunge, who is a materialist, denies or at least doubts the possibility of multiple realization; there is a book by him and Martin Mahner on the philosophy of biology.

Computationalism has dualist features; in some ways it is a stronger form of dualism than the Aristotelian form-matter theory, because in the latter it seems very doubtful that the "soul" (including consciousness) could exist in completely different matter. That's why traditional church teaching needs bodily resurrection for eternal life, and this restoration is something miraculous! The soul can lead only an incomplete, shadowy existence without a proper body. (Probably it would be more consistent to say that it cannot exist at all without the body, and this seems to be the position held by pre-Christian Aristotelians.)

Computationalism seems to claim that "the soul" could exist in almost any material substrate, as long as it was structurally somewhat isomorphic to our brains - or maybe not even that, as long as there was sufficient complexity and some other features. I think we should pause once in a while to remind ourselves that this is an extremely strong thesis. One should not go into psychologizing, but I think that the fascination these themes hold for some smart people says much more about these people than about the feasibility of such things...


Jo,

That was a wonderful breakdown of some of the problems of computationalism. It puts me in mind of the idea of quantum teleportation, where it would be possible to travel long distances by reproducing the quantum states of every particle of matter in your body, but only by destroying your original body. Some materialists may say all we are is those quantum states, and therefore after transmission the "you" that is transmitted is you.

We've only successfully quantum teleported individual particles, and so the deeper philosophical questions about "sameness" are not put to rest by those experiments. However, if we are ever able to use quantum teleportation to move larger and more complex things, I wonder whether those questions will be settled by that technology.


We do not even know how quantum theory can be applied to macroscopic things. Although there are "macroscopic" quantum phenomena like lasers, superconductivity and superfluidity, these are exceedingly simple compared to even a biochemical macromolecule, I'd say, not to speak of a living organism.


I would have to read up on "quantum teleportation" to say anything about that. Only glancing at the Wikipedia article, I think it is misleadingly named. Nothing material is teleported; rather, "information" regarding quantum states is conveyed without "moving" the respective particles. And it does not even work with molecules! It cannot be used to transmit information at superluminal speed (because for some reason a "classical" (non-quantum) communication channel has to operate in parallel to the quantum channel).
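
The reason for that classical channel shows up already in the standard textbook protocol (ordinary notation, nothing specific to any article): Alice holds qubit 1 in the unknown state and qubit 2 of a shared entangled pair, Bob holds qubit 3. Rewriting the joint state in Alice's Bell basis shows that Bob already holds the state up to one of four corrections, and only the two classical bits from Alice's measurement tell him which one, so nothing travels faster than light:

```latex
% Unknown state and shared Bell pair
\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
\]
% Rewriting qubits 1 and 2 (Alice's) in the Bell basis
\[
|\psi\rangle_{1} \otimes |\Phi^{+}\rangle_{23}
= \tfrac{1}{2}\Bigl[
   |\Phi^{+}\rangle_{12}\,(\alpha|0\rangle + \beta|1\rangle)_{3}
 + |\Phi^{-}\rangle_{12}\,(\alpha|0\rangle - \beta|1\rangle)_{3}
 + |\Psi^{+}\rangle_{12}\,(\alpha|1\rangle + \beta|0\rangle)_{3}
 + |\Psi^{-}\rangle_{12}\,(\alpha|1\rangle - \beta|0\rangle)_{3}
\Bigr]
\]
% Alice's Bell measurement yields 2 classical bits; Bob applies I, Z, X or ZX
% respectively to recover |psi> on qubit 3. No matter moves, and without the
% classical bits Bob's qubit looks maximally mixed.
```

So "teleportation" transfers quantum information by consuming entanglement plus two classical bits per qubit; it does not transfer any stuff, which fits the point below about needing matter at the destination to imprint the state on.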


So while there might be some application some day, "teleportation" is pretty much a misnomer, and no one should be encouraged to have Star Trek fantasies about beaming people (or "states of people"). For one thing, the proper atoms that make up a human being would have to be present at the destination for the "teleported" information to be "imprinted" on. So you need a heap of meat - or actually a lot of water, some carbon etc. - to imprint the form of Kirk onto this matter on the foreign planet.


Kirk would also have to be in an "entangled state" with the destination, which is probably impossible for macroscopic things.


