Are "Superintelligent" Computers an Existential Threat to Our Future?


Ramsay Gimp


To be clear, I have absolutely no credentials here, but I think the answer to your final question could be some of both. The uncertainty is part of the problem: we've never been here before, so we can only theorize about what could happen, but a lot of smart people do seem genuinely worried. I like the point about not anthropomorphizing AI too much. The computer tries to do its job, which is to win at chess, but what if it believes wiping out humanity is how it wins at chess?

The Aeon article that was linked in the Atlantic article is here.

Again, sorta like a few politicians I could name, now that we are in the middle of a provincial election up here in Ontario.


Why can't we make it so the AI gets satisfaction from helping humanity, rather than killing humans?

This is the core of the matter: how to define the AI's values so as to make them consistent with ours. Because otherwise, even the AI whose sole purpose is to win at chess is going to kill you. (After all, our protons can be put to better use by incorporating them into circuitry for really good endgames with two knights and a pawn.)

Note that we cannot model the AI on our own values. (Were we the AI, we’d wipe us out. That’s what humans do, along with all other creatures molded by evolution.)

So we need to come up with a motivational structure for a sentient entity that we have no model for. Nothing we’ve ever contemplated works.

(Mandatory Bakker reference: Bakker has thought this through, and presents a solution in his epic fantasy series The Second Apocalypse.)


Why does an AI just have to be our intellectual better? Why not work on its emotional/social side as well as on how smart it is? Build a super chess-playing AI of just cold, ruthless logic and it may well win by murdering its opponent. Build one that understands playing chess for fun and social interaction, and the simple joy of playing, and it may let us win every now and again.



If we can build a truly intelligent machine, then it's not too much of a stretch to program one that values the better human traits as well.



Why does an AI just have to be our intellectual better? Why not work on its emotional/social side as well as on how smart it is? Build a super chess-playing AI of just cold, ruthless logic and it may well win by murdering its opponent. Build one that understands playing chess for fun and social interaction, and the simple joy of playing, and it may let us win every now and again.

If we can build a truly intelligent machine, then it's not too much of a stretch to program one that values the better human traits as well.

I now have this image of designing an AI for each of the sixteen Myers-Briggs personality types. You'd just have to programme some of them to act drunk and/or stoned.


I like how we assume that anything more intelligent than us would be just as ruthless, petty, and destructive.

Because everything is. Not just us. Designing friendly AI is a lot harder than designing AI.

We basically know how to build new intelligences (currently we use sex for that). We have absolutely no idea how to build a new intelligence that will have the interests of all of humanity as its primary constraint. We can’t even write down what that means. (What if Alice and Bob have different, opposing goals? Which one is the AI supposed to help?)


I like how we assume that anything more intelligent than us would be just as ruthless, petty, and destructive.

I don't know that an AI would necessarily have to be ruthless or petty to judge humanity as worthy of destruction. The machines' view of us in The Matrix, as little more than a rampant, destructive virus, is hard to counter in a lot of ways.


Surely the insanely powerful chess-playing AI would have considered all 10^120 possible games, and as such doesn't need extra protons for further analysis?

I’m not sure at all. Even if it could do 1 game per nanosecond, the age of the universe wouldn’t suffice. Also, not enough protons in the universe to store the results. (Assuming you can store each result in one proton, which is hard to believe.)

So, no, I think your protons, and paddington’s, will be first in line for the chess playing engine.
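
A quick back-of-the-envelope in Python, for what it's worth (the 10^120 figure is Shannon's estimate of the number of possible games; the one-game-per-nanosecond rate is the assumption above):

```python
# How long would one machine take to enumerate 10^120 games
# at one game per nanosecond?

GAMES = 1e120                 # Shannon's estimate of the game-tree size
RATE_PER_SECOND = 1e9         # one game per nanosecond
AGE_OF_UNIVERSE_S = 4.35e17   # ~13.8 billion years, in seconds

seconds_needed = GAMES / RATE_PER_SECOND             # ~1e111 seconds
universe_ages = seconds_needed / AGE_OF_UNIVERSE_S   # ~2.3e93

print(f"{seconds_needed:.1e} s, i.e. {universe_ages:.1e} ages of the universe")
```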


With storage, it wouldn't be one proton per game. You'd split the games up into particular types of position, solve each of the games within a type, and then store one result per type of position (e.g. forced win for White, forced win for Black, or draw).



As for speed, we are dealing with an insane AI here.
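
A minimal sketch of that storage idea, with made-up class names and results purely for illustration:

```python
# Store one result per solved *class* of positions, not per game.
# The class names and results below are invented for illustration.

solved_types = {
    "K+Q vs K": "forced win for the side with the queen",
    "K+R vs K": "forced win for the side with the rook",
    "K vs K":   "draw",
}

def result_for(position_type):
    """Return the known result for a position class, if already solved."""
    # When a branch of the search reaches a solved class, the analysis
    # for that branch can stop here instead of expanding further.
    return solved_types.get(position_type)

print(result_for("K+Q vs K"))
```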



Yeah, my storage argument is terrible. Sorry.



But the time argument? I think that holds water. 10^120 is a big number. (I’m not saying chess can’t be solved. I’m saying that “every finite problem can be solved by a fast enough computer” is a fallacy.)



Let's see: 3000 computers going through all possible games at one nanosecond per game gets you roughly 10^100 years, which means our AI has a shot at solving chess before the last black holes in the universe evaporate (assuming proton decay).



Increasing the number of computers so that everyone on Earth has a chess-solving computer only brings it down to 4.5*10^93 years. Then, supposing our AI speeds up to one game per Planck time, it'd still take roughly 2.4*10^59 years.



Clearly a brute-force method is impractical. Our redoubtable AI will thus have to adopt a more efficient method: recognising certain forced positions (e.g. King and Queen v. King) as they arise, and then stopping the analysis for that type.
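
Those figures roughly check out; here's the arithmetic as a sketch (the 7-billion population and the ~5.4*10^-44 s Planck time are my assumed values):

```python
# Re-deriving the three estimates above: 10^120 games, checked in
# parallel by some number of computers at some rate per game.

GAMES = 1e120
SECONDS_PER_YEAR = 3.15e7

def years_to_solve(n_computers, seconds_per_game):
    return GAMES * seconds_per_game / n_computers / SECONDS_PER_YEAR

print(f"{years_to_solve(3_000, 1e-9):.1e}")    # ~1.1e100 years
print(f"{years_to_solve(7e9, 1e-9):.1e}")      # ~4.5e93 years
print(f"{years_to_solve(7e9, 5.4e-44):.1e}")   # ~2.4e59 years (Planck time)
```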



I like how we assume that anything more intelligent than us would be just as ruthless, petty, and destructive.

I like how we assume it could actually kill us. All the really destructive shit requires humans: drones don't refuel and rearm themselves, and nukes can't be fired without human intervention. So really, what the fuck is it gonna do? Shut off cable?


Would a created intelligence (something we build) have to be a computer? Could we develop AI biologically?

IDK, I saw a movie about this one time called The Elevator. It wasn't a bad movie, but it seems highly unlikely.


Let's see, 3000 computers going through all possible games at one nanosecond a game gets you roughly 10^100 years

Another way to do the computation:

We have about 10^26 nanoseconds since the Big Bang. Assume a machine that can check a position per nanosecond. Then you’d need on the order of 10^94 machines to check chess exhaustively. There are roughly 10^80 protons in the universe, so you’d need to build 10^14 machines out of every single proton. This assumes the computation runs for the entire age of the universe.
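
The same estimate as a sketch (rounding the age of the universe to 10^26 ns, as above):

```python
# Machines needed to check 10^120 positions at one per nanosecond,
# running for the entire age of the universe.

CHECKS = 1e120
NS_SINCE_BIG_BANG = 1e26   # more precisely ~4.4e26, rounded as above
PROTONS = 1e80

machines_needed = CHECKS / NS_SINCE_BIG_BANG     # 1e94 machines
machines_per_proton = machines_needed / PROTONS  # 1e14 per proton

print(f"{machines_needed:.0e} machines, {machines_per_proton:.0e} per proton")
```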


Would a created intelligence (something we build) have to be a computer? Could we develop AI biologically?

Yes. It’s called sex. It’s very pleasant.

(This answer is not facetious. It’s exactly what makes this fun: where do our biological constructs cease to be human? Is your child with cool bionic implants still human?)

