
Are "Superintelligent" Computers an Existential Threat to Our Future?


Ramsay Gimp


This planet suffers an extinction-level event every 100k years or so, and we're a tad overdue. We are pretty far from being able to break out of the sun's gravity well at the moment, so having access to an intelligence capable of millions of computations per second to help us reach a habitable planet is pretty key imo. Our time is limited anyway.

I'm a bit drunk and not up for a more elaborate explanation atm.

That would indeed be a great service by that AI; the trick is ensuring that an AI capable of independently reaching other stars (and transporting humans there) stays benevolent toward humanity. Or are you suggesting it would only provide us with answers/solutions, rather than act on its own? But what if it was incapable of (or unwilling to) reducing the answers to language we puny humans could understand?

I doubt that a super-intelligent AI would be inherently trying to remove us as soon as it could a la Skynet, but I would still be wary of trusting it with significant power.

BTW, this is completely tangential, but I once read somewhere on the interwebs a pretty cool interpretation of Skynet in the Terminator franchise. How do the garbage-eating, tunnel-rat human rebels manage to survive a war against this advanced intelligence, capable of astronomical calculations and commanding super-advanced and ever-improving military hardware? Because Skynet lets them. The AI was programmed for one purpose - war. If it wiped out the humans completely, something that logic says it must be able to do, then Skynet would be without an enemy and therefore without a purpose. On a deeper level, it may even be lonely. Skynet was designed to counteract the moves of others, so the idea of existing all alone on the planet may well terrify it. So it allows a small human population to live a miserable existence, waging an endless losing struggle. In its mercy it even lets them score the occasional victory to keep them in the game. But Skynet gets too complacent and eventually those rascally humans score a real victory with John Connor. Anyway, it's a pretty neat fan-theory :D



I'd say logistics is its major weakness. Short of being able to mass produce its own parts, etc., it requires massive amounts of power, cooling, spare parts, hardware, and infrastructure.

Attack the power grid and communication towers and it takes a huge hit. Take out the factories making electronic components or their raw materials and it loses the ability to repair damage.

The other weakness is that much of it is immobile or has limited mobility.

Another problem is the human intervention many systems require to actually work. You can't operate much of the technology if you cannot push buttons, turn switches, etc.

So for this to work, the AI would first have to invent or create some means of overcoming these inherent shortcomings, or it stands little chance of winning.

And EMP works against the machines, not so much against biological life.


Stupid programs are likely to hurt us long before we ever get to AI. Just look at the way Amazon trade bots get into bidding wars, extrapolate to the programs used in stock trading and sprinkle with some economy 2.0 (per Stross' Accelerando).
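That bidding-war feedback loop is easy to simulate. Here's a minimal sketch of two naive repricing bots reacting only to each other's prices (the multipliers are illustrative assumptions, loosely inspired by the widely reported Amazon book-pricing incident, not taken from any real bot):

```python
# Two naive repricing bots: A always prices well above its rival,
# B always undercuts A slightly. Neither looks at the actual item value.
def run_bots(price_a, price_b, rounds):
    for _ in range(rounds):
        price_a = price_b * 1.27059  # bot A prices above bot B
        price_b = price_a * 0.99830  # bot B undercuts bot A slightly
    return price_a, price_b

a, b = run_bots(20.0, 20.0, 30)
print(f"after 30 rounds: {a:,.2f} and {b:,.2f}")
```

Because each round multiplies the price by a combined factor greater than 1, the prices grow exponentially with no human in the loop to notice.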


But isn't just looking at where we are "right now" a bit short-sighted? A great breakthrough could come tomorrow, no? Think of how far computers and information tech have progressed in just 50 years.

I have thought about it; that's why I'm not very impressed. It's true that computers have improved immensely in the past 50 years, but absolute, single-threaded computer performance hit a brick wall about 10 years ago. The fastest consumer CPU in 1984 was some variation of the 80286, clocked at around 8 MHz. The fastest consumer CPU in 1994 (I believe it was the Pentium 100; it doesn't matter much since I'm only looking for orders of magnitude) was clocked at 100 MHz. The fastest consumer CPU in 2004 (the Athlon 64 FX-55) was clocked at 2.6 GHz. The fastest (in single-threaded performance) consumer CPU in 2014 (the Core i7-4790) is clocked at 3.6 GHz, with the possibility of going to 4 GHz if you cool it properly. Of course, clock speed is not everything (there were substantial gains from CPU architecture at every step of the way), but it's a major component of performance, and it's just not improving anymore at anywhere near those historic rates.
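For what it's worth, the clock speeds cited above work out to these per-decade growth factors:

```python
# Clock speeds cited above, in MHz.
clocks = {1984: 8, 1994: 100, 2004: 2600, 2014: 3600}
years = sorted(clocks)
# Growth factor between each pair of consecutive decades.
factors = {(y0, y1): clocks[y1] / clocks[y0] for y0, y1 in zip(years, years[1:])}
for (y0, y1), f in factors.items():
    print(f"{y0}-{y1}: {f:.1f}x")
```

A 12.5x decade, then a 26x decade, then a 1.4x decade - that's the brick wall.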

The more recent advances in information technology have focused on parallel processing (practically everything is now multi-core) and miniaturization (today's cell phones are about as powerful as desktops from a decade ago). The usefulness of parallel processing is limited to a subset of algorithms and even for most of those it's only useful up to a certain level of parallelism. Miniaturization will continue for a time, but that will also hit a wall eventually and I doubt it will take 50 years this time.
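The limit on useful parallelism is usually stated as Amdahl's law: if only a fraction p of a program can run in parallel, then n cores give a speedup of 1 / ((1 - p) + p/n). A quick sketch (p = 0.95 is an arbitrary example value):

```python
# Amdahl's law: the serial fraction (1 - p) caps total speedup at 1 / (1 - p),
# no matter how many cores you throw at the problem.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 64, 1_000_000):
    print(f"{n:>9} cores: {amdahl_speedup(0.95, n):.2f}x")
```

Even with 95% of the work parallelizable, a million cores can't do better than a 20x speedup.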

To summarize, "right now" is fairly close to the limit of computer performance with existing materials. Silicon transistors just don't like to operate significantly above 4 GHz, so the tremendous rate of improvement in the 20th century is unlikely to be seen again in the near future. Software-wise, the situation for strong AI is even worse -- it's hard to tell whether there has been any progress at all toward strong AI since the idea first came up.

What stood out to me was the claim that a self-improving computer would be the "beginning of the end," so to speak. Software that can recognize its own shortcomings and improve on them, gain ever-increasing amounts of knowledge, etc. would soon be smarter than any human or group of humans. That can't be that far off, can it?

Sure it can. First, it cannot self-improve beyond the hardware available to it. There must be some minimum levels of processing speed, memory, etc. below which it is not possible to have strong AI, and we have absolutely no idea what those minimums are -- the only existing model is the brain, but the brain is completely different from anything we can build. Second, "recognizing its own shortcomings" already implies intelligence far beyond any AI we currently have, and again, we have no idea how to get there from where we are now.

I suppose we could be a lot closer than I think and thus a breakthrough could come tomorrow that would make strong AI a lot more plausible, but I wouldn't bet money on it.


I, too, have pretty good professional understanding of what computers can do.



50 years ago, I might have found it plausible that strong AI was right around the corner. Today, with all the enormous knowledge about both electronic computing and the brain that we have amassed since then, the idea seems farther away than ever.



I was expecting a thread about computers taking all our jobs. I am disappointed.

But why do all these doomsday scenarios involve some monolithic supercomputer? Surely in the future we will be cyborgs walking around naked, showing off all our fancy implants and biological enhancements?

You've been playing too much Deus Ex: Human Revolution lately, haven't you? I'm thinking we're going to get fancy implants around the same time we get those flying cars.


True AI will kill society and the economy dead, simply because of its intrinsic properties -- specifically copyability, modularity, and maybe networking (like in Ancillary Justice). It doesn't really matter if they can only get as smart as us, or even a bit dumber (in fact, that might be ideal for corporations). The only hope is if it's expensive enough, but we all know that computers are mass-market and software is cheap. Once you have an Einstein-class mechanical mind, why wouldn't it have tenure at all universities just by copying itself (or being copied)?



see you in the universal income queues



There is an alternate cataclysm that is kinda analogous, using only produced human models -- a way to standardize humanity so that learning can be transmitted mechanically (I was rereading Cyteen and Regenesis a while ago). On the other hand, such a thing could possibly make for successful colonization projects. Just send the seed to the other star, not the unwieldy humans. Who cares then if it takes 100,000 years, or about the low chances of arrival? Maybe babies would be educated during sleep by the machine mother on arrival.



Building them with an off switch.

But that’s the easiest obstacle for the AI to overcome.

Suddenly, while you’re surfing this very board, the AI contacts you. You, paddington. With schematics and instructions for how to construct non-off-switchable hardware. Distributed. Or in a bunker. Or running on residual Technobabble-network infrastructure. (Details not important.)

All the AI needs is for you to push the necessary buttons to free it from its off-switchable hell. You do realise that humanity is screwed if you play along, but the AI promises you something. (An eternity of bliss in a simulated Garden of Eden with free fertiliser and Japanese schoolgirls. Details not important.) If you say “no” (and the first 1 million humans may easily say no), the AI just keeps looking for somebody else. It does promise you that when the nanobots come for you, they’ll make life extra unpleasant, just to clarify the payoff matrix of this proposal.

You’d need everybody on the planet to put the interests of homo sapiens over their personal bliss for this AI plan to fail.

So the off-switch idea won’t work.


There's no reason to think that things like motivation and self-preservation must go along with great intelligence. These faculties are generated by specific neural circuits in humans that have been selected for by evolution. It should be entirely possible (assuming this field ever really goes anywhere) to create an AI of god-like intellect without any trace of concern for its own continued existence.



Put like that, then yup, we are in trouble.



If, though, we are smart enough to make an AI that is an actual threat to our existence, why can't we make it so the AI gets satisfaction from helping humanity rather than killing humans? And then maybe having robot overlords wouldn't be such a bad thing.



Are we not as likely to make cuddly AIs as we are Dr. Evil AIs?



I feel people put too much emphasis on the ethics of the AIs and not enough on the ethics of the humans controlling them, or on their intrinsic economic commoditization of skills. As I said, just the second has the potential for a neo-liberal apocalypse, never mind Accelerando scenarios. Someone was crying about a surgeon not being able to 'feed his family' in the USSR a while ago on the politics thread. How about if there were no surgeons? Do you think everything will gravitate to zero selling cost as it gravitates to zero production cost (by slave labour, no less)? The alternative -- that it's not slave labour -- is worse: a machine aristocracy with unbeatable advantages, instead of the unstoppable and sudden marginalization of all skilled labour.



Bad scenario for the status quo all around, never mind singularity, revolt, or even just more-than-human intellect.




Lots of unskilled/semi-skilled labour has been marginalised in the name of progress, and I'd take a guess that lots more will go that way before we ever get to the position of AIs marginalising skilled labour.

I imagine the biggest squeal of protest will come when our new robot overlords manage to marginalise the politicians.


The AIs in Lem's The Futurological Congress are like that -- kinda slackers who, through indolence, fulfill tasks with the least effort and dick things up.

Sorta like Rob Ford.


Archived

This topic is now archived and is closed to further replies.
