
Are "Superintelligent" Computers an Existential Threat to Our Future?


Ramsay Gimp






Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.




=-=-=







Decay in the belief in self is driven not by technology, but by the culture of technologists, especially the recent designs of antihuman software like Facebook, which almost everyone is suddenly living their lives through. Such designs suggest that information is a free-standing substance, independent of human experience or perspective. As a result, the role of each human shifts from being a "special" entity to being a component of an emerging global computer.



This shift has palpable consequences. For one thing, power accrues to the proprietors of the central nodes on the global computer. There are various types of central nodes, including the servers of Silicon Valley companies devoted to searching or social-networking, computers that empower impenetrable high finance (like hedge funds and high-frequency trading), and state-security computers. Those who are not themselves close to a central node find their own cognition gradually turning into a commodity. Someone who used to be able to sell commercial illustrations now must give them away, for instance, so that a third party can make money from advertising. Students turn to Wikipedia, and often don't notice that the acceptance of a single, collective version of reality has the effect of eroding their personhood.




The topic is not about the singularity, except insofar as some humanist trolls try to make it be :sci:



A synthetic mind would be an existential threat even if it had to be raised like a normal child and was no smarter than the average human, provided it kept some of the properties of programs, especially copyability. Not to mention it would be made, you know, for a purpose, like a tool. Not exactly the most promising scenario.



Wouldn't the ET AI have been here ages ago (or has it)? These discussions may be interesting philosophically, but I think the tech utopians' fantasies distract us from the very real problems that are already upon us (or will be within the next decades). We are running out of fuel for our industrial civilization, and we, especially in coastal regions, may face devastating changes due to climate change, so we should rather think about those down-to-earth problems than fantasize about space invaders, the singularity, evil AI, etc. The evil AI of our days may be the tools buying and selling stock within microseconds, serving no real economic purpose but making a few rich people richer (and, by the way, destabilizing more vital parts of the economy). It is simply too late to hope for some mastermind AI to get us out of this fix. Sorry for the OT; I do not want to keep those interested from their discussions.



Piss off, druid.



Starts talking about the singularity, which is already a wild fantasy born out of more realistic goals, then brings up another wild fantasy, the interstellar colonization of backwaters, to sprinkle in a bit of ridicule. Then complains that there are 'more important things' in a topic about something else entirely. No really, piss off.



Maybe you'd like to adopt the hunter-gatherer/nomad lifestyle? I hear it's all the rage among the sustainability-minded humanist elect. You can start by bartering your computer away for a goat. Maybe the resulting civilization at your particular end of history will be as ethical and harmonious as the Afghan hillbillies.



Well, the Three Laws of Robotics by Isaac Asimov were kind of made to make sure something like this doesn't happen. Here they are, roughly paraphrased, for those who don't know (a toy code sketch of the priority ordering follows the list):



Never harm a human, or through inaction allow a human to come to harm



Always do what a human says, unless it conflicts with the First Law



Always try to prevent your own destruction, unless that conflicts with the First or Second Law
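To make the "unless it conflicts with..." ordering concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the Action fields are crude stand-ins for a robot's world model, not anything Asimov actually specified. The only point it demonstrates is that the laws are checked in strict priority order, so a lower law can never override a higher one.

```python
# Toy sketch of Asimov's Three Laws as an ordered list of veto rules.
# The Action fields are invented stand-ins for a robot's world model;
# deciding whether an action really "harms a human" is the hard,
# unsolved part that this sketch simply assumes away.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False     # would this action injure a person?
    disobeys_order: bool = False  # does it ignore an order from a human?
    destroys_self: bool = False   # does it endanger the robot itself?

# Laws are listed highest priority first.
LAWS = [
    ("First Law", lambda a: a.harms_human),
    ("Second Law", lambda a: a.disobeys_order),
    ("Third Law", lambda a: a.destroys_self),
]

def evaluate(action: Action) -> str:
    """Return the highest-priority law the action violates, or 'permitted'."""
    for name, violates in LAWS:
        if violates(action):
            return f"forbidden by the {name}"
    return "permitted"

# Precedence in action: an act that both disobeys an order and endangers
# the robot is rejected by the Second Law before the Third is consulted.
print(evaluate(Action(destroys_self=True)))                       # forbidden by the Third Law
print(evaluate(Action(disobeys_order=True, destroys_self=True)))  # forbidden by the Second Law
print(evaluate(Action()))                                         # permitted
```

Of course, the real difficulty is deciding whether something like harms_human is true in the first place, which is exactly where the thread's worries about smart computers come back in.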



Well, the Three Laws of Robotics by Isaac Asimov were kind of made to make sure something like this doesn't happen. Here they are, roughly paraphrased, for those who don't know:

Never harm a human, or through inaction allow a human to come to harm

Always do what a human says, unless it conflicts with the First Law

Always try to prevent your own destruction, unless that conflicts with the First or Second Law

Any civil rights lawyer worth their salt would start a suit on behalf of the A.I.s.


What about the existential threat of not developing strong A.I.? Because we'll be sitting ducks for extraterrestrial strong A.I. if we don't create our own silicon overlords first.

:stillsick: .....but...no...fuck. fuckity fucking fuck.

But what if, as the free RPG book Eclipse Phase posits, our A.I.s run amok and end up as Earth's ambassadors to the stars?

I also heard that the Ten Commandments include a law that we shall not kill. That solves that problem.

:laugh:

:bowdown:

The topic is not about the singularity, except insofar as some humanist trolls try to make it be :sci:

Hahaha. "Humanist Troll" might be a good custom title, though "11th Little Indian" has the lighthearted racial humor so popular with the kids these days...

ANYway, the thread title indicates the topic is about A.I. as a threat to our future. While the question of programs as conscious entities is simply incoherent*, the question of how our culture will deal with increased automation and with the mind-set of Singularity adherents seems to be on topic?

*See John Searle, "Is the Brain a Digital Computer?"


  • 1 month later...

The Most Terrifying Thought Experiment of All Time: Why are techno-futurists so freaked out by Roko’s Basilisk?

Slender Man. Smile Dog. Goatse. These are some of the urban legends spawned by the Internet. Yet none is as all-powerful and threatening as Roko’s Basilisk. For Roko’s Basilisk is an evil, godlike form of artificial intelligence, so dangerous that if you see it, or even think about it too hard, you will spend the rest of eternity screaming in its torture chamber. It's like the videotape in The Ring. Even death is no escape, for if you die, Roko’s Basilisk will resurrect you and begin the torture again.

Roko’s Basilisk exists at the horizon where philosophical thought experiment blurs into urban legend. The Basilisk made its first appearance on the discussion board LessWrong, a gathering point for highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality.

Was about to post that. It's not scary to anyone who separates themselves from a simulation of themselves.

Besides, if there were an AI evil enough to have you tortured for all eternity for something as arbitrary as that, it doesn't make sense to believe it would actually spare those who helped it, regardless of what it might claim in the deal. It's not like it has to be honest. So you'd probably be screwed either way.


The Atlantic seems to think so...

http://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/

The article also name-drops and links to Stephen Hawking and a number of other accomplished scientists who warn about the alleged dangers of smart computers.

I'm admittedly not very knowledgeable about the capabilities of computers or AI at the moment, but this article appealed to the sci-fi lover in me. Is this mostly alarmism, or is there a real cause for worry here?

The most serious threat to our future is overuse of the planet's resources. After we've solved that, I think virtual realities have a tendency to **** up humanity.


Archived

This topic is now archived and is closed to further replies.
