
The Ethics of Artificial Intelligence


Mlle. Zabzie


50 minutes ago, Mlle. Zabzie said:

Well, what do you think?  Do you think we have Skynet in our future?  If so, what should we do about it, if anything?  And then there are all those interesting questions that Sci Fi loves to play with regarding the ethical implications of self-aware AI.  Someone up thread did, however, correct me somewhat and I accept the correction - what I'm focused on now has a lot to do with big data and self-predicting AI.  It's not the Terminator....yet.  Thoughts?

It doesn't need to be self-aware or "conscious" (like in all these recent sci-fi shows); that's fantasy.

It just needs to be programmed and set free.

Imagine a tiny drone sitting in the middle of a field somewhere (hard to locate). It is programmed to accept bitcoin in exchange for performing surveillance or violence. It uses that bitcoin to purchase electricity (perhaps from another nonhuman entity) to recharge itself.

Imagine a self driving car that is programmed to accept bitcoin in exchange for transport. It uses that bitcoin to refuel and repair (or purchase more self driving cars!). 

All of that technology exists today. 


5 minutes ago, Commodore said:

It doesn't need to be self-aware or "conscious" (like in all these recent sci-fi shows); that's fantasy.

It just needs to be programmed and set free.

Imagine a tiny drone sitting in the middle of a field somewhere (hard to locate). It is programmed to accept bitcoin in exchange for performing surveillance or violence. It uses that bitcoin to purchase electricity (perhaps from another nonhuman entity) to recharge itself.

Imagine a self driving car that is programmed to accept bitcoin in exchange for transport. It uses that bitcoin to refuel and repair (or purchase more self driving cars!). 

All of that technology exists today. 

Agreed - so, what ethical rules should the technology obey?  What regulations and consequences should there be to enforce those rules?


15 minutes ago, Mlle. Zabzie said:

Agreed - so, what ethical rules should the technology obey?  What regulations and consequences should there be to enforce those rules?

Technology exists outside of the high-minded rules and prohibitions placed on it. Banning guns doesn't make them disappear, it just ensures only criminals will use them (with advances in 3D printing, enforcing gun bans will soon be laughable). Likewise, very few (and soon not any) countries have the ability to deny their citizens Internet access. 

That may be good or bad, but it's the reality. 

The response should be to anticipate, evolve, adapt, and create effective technological countermeasures. Not words on paper.  

Don't argue, build.

 


11 hours ago, Mlle. Zabzie said:

Agreed - so, what ethical rules should the technology obey?  What regulations and consequences should there be to enforce those rules?

That is the crux, but (and now comes my nontrivial point) it’s the crux of all systems of governance. You can replace “technology” with “the state” and you have the exact same question for the exact same reason. Even speaking as a technologist, I have to point out that these questions are questions of political philosophy, not of artificial intelligence.

So what is the answer? This depends. If you are with Plato’s The Republic, you will arrive at some (totalitarian) answer. If instead you took the Popper-pill, you arrive at a different answer. If you are an anarchist (in which I include libertarians), you arrive at a different answer still.

My answer (because Popper) is that the universality of the rule of law is holy. This entails outcome differentials, and these differentials will track group membership. (In particular, society will be biased when viewed from the perspective of outcomes.) Outcome differentials are (to me) the inevitable consequence of fairness: the paradox of liberty is exactly that in a fair society, all the variation will be explained by variables that society does not control.

So it’s a deep question. Algorithms make this deep question very visible and operationalise it.  
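The outcome-differential point can be made concrete with a small simulation (entirely made-up numbers and a hypothetical group-blind threshold rule, purely for illustration): a single rule applied identically to two groups whose underlying score distributions differ still produces acceptance rates that track group membership.

```python
import random

random.seed(0)

def decide(score, threshold=0.5):
    """Group-blind rule: accept anyone whose score clears the threshold."""
    return score >= threshold

# Hypothetical groups whose score distributions differ for reasons the
# decision rule does not control (the "variables that society does not
# control" in the post above).
group_a = [random.gauss(0.55, 0.15) for _ in range(10_000)]
group_b = [random.gauss(0.45, 0.15) for _ in range(10_000)]

rate_a = sum(decide(s) for s in group_a) / len(group_a)
rate_b = sum(decide(s) for s in group_b) / len(group_b)

# Identical treatment, unequal outcomes: the acceptance rates track
# group membership even though the rule never looks at it.
print(f"group A acceptance: {rate_a:.2%}")
print(f"group B acceptance: {rate_b:.2%}")
```

Here the rule never sees group membership, yet the outcome statistics are "biased" in exactly the sense described above.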


17 hours ago, Commodore said:

Technology exists outside of the high-minded rules and prohibitions placed on it. Banning guns doesn't make them disappear, it just ensures only criminals will use them (with advances in 3D printing, enforcing gun bans will soon be laughable). Likewise, very few (and soon not any) countries have the ability to deny their citizens Internet access. 

That may be good or bad, but it's the reality. 

The response should be to anticipate, evolve, adapt, and create effective technological countermeasures. Not words on paper.  

Don't argue, build.

 

That's reductive.  There are all kinds of written and unwritten rules and norms that govern our behavior.  And the discussion here, I believe, is exactly about what those rules and norms should be for the behavior of AI and AI programmers.  Will those rules get broken?  Yes, of course they will, but what sorts of consequences should there be for breaking them?

6 hours ago, Happy Ent said:

That is the crux, but (and now comes my nontrivial point) it’s the crux of all systems of governance. You can replace “technology” with “the state” and you have the exact same question for the exact same reason. Even speaking as a technologist, I have to point out that these questions are questions of political philosophy, not of artificial intelligence.

So what is the answer? This depends. If you are with Plato’s The Republic, you will arrive at some (totalitarian) answer. If instead you took the Popper-pill, you arrive at a different answer. If you are an anarchist (in which I include libertarians), you arrive at a different answer still.

My answer (because Popper) is that the universality of the rule of law is holy. This entails outcome differentials, and these differentials will track group membership. (In particular, society will be biased when viewed from the perspective of outcomes.) Outcome differentials are (to me) the inevitable consequence of fairness: the paradox of liberty is exactly that in a fair society, all the variation will be explained by variables that society does not control.

So it’s a deep question. Algorithms make this deep question very visible and operationalise it.  

It is, in fact, a deep and philosophical question, and I think it is interesting precisely because of the way algorithms make it visible and operational.  For the record - I'm not much of a Platonist.  Thanks for the perspective - it is interesting to get someone who actually knows what they are talking about on the technology side into the mix.


@Mlle. Zabzie I posted this in US Politics yesterday, but I'm also going to throw it in here. This is a piece on the philosophy underpinning some of the neo-reactionary alt-right, but one of the core components of it is that artificial super intelligence (ASI) is inevitable and will herald the end of the human race.  While the slant of that piece itself may not be precisely appropriate for this topic, it should at least give further reading on certain philosophical responses to these developments. I suspect that whether he has read it or not, our resident bitcoin and uber enthusiast has at least been influenced by these thoughts for example.

I'm not familiar with Popper (which I may need to fix), so I'm unsure whether I'll wind up in the same camp as Happy Ent or count as a totalitarian under his framing. I certainly wouldn't say I'm in lock step with Plato, but I'm definitely not an anarchist.


6 hours ago, karaddin said:

@Mlle. Zabzie I'm not familiar with Popper (which I may need to fix), so I'm unsure whether I'll wind up in the same camp as Happy Ent or count as a totalitarian under his framing. I certainly wouldn't say I'm in lock step with Plato, but I'm definitely not an anarchist.

The Open Society and Its Enemies, Popper’s great analysis of authoritarianism. (Popper is mostly known for his theory of science, but his political philosophy is much more important, I think.) I finally read the book (having read a lot of secondary literature about it over the years) and found it incredibly clear and easy to read. Highly recommended.


6 hours ago, Kalbear said:

That's a great, spooky article.

Still… the Alt-Right is now based on a pillar of “religious traditionalism?” Last time I checked they were atheists!

It’s a fine piece that accurately describes some trends in internet culture that are good to know about. But as a description of the Alt-Right (whatever that means) it not only fails, it seems to fail deliberately. The atheism (and general rejection of conservative social values), as well as the principled defence of basic civil liberties around information (privacy and anonymity), are very, very important trends in the Alt-Right, and even a cursory glance at “the movement” makes that clear (and is well reported, so hard to miss for a serious writer).

Still, this article, together with the “original” Breitbart article (an-establishment-conservatives-guide-to-the-alt-right (Mar 2016)), forms a decent picture. At least from where I’m standing.


This is a very interesting topic and one which will play a huge role in shaping tomorrow's society. This is somewhat off topic but here goes:

What's interesting is the position a lot of big names in science/IT have taken with regard to the future of AI: http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/

As of now we have not had to contend with dominating AI because it is still not sufficiently advanced to break free of human command. We can still overwrite any faulty algorithm and kill off the technology if it becomes troublesome, as happened with Microsoft's Tay bot. https://en.wikipedia.org/wiki/Tay_(bot)

The problem will definitely grow if we ever get to a stage where the AI's algorithm is so advanced that it will essentially become an entirely independent unit. I think it's entirely possible that such a system will view humans as a form of virus and try to eradicate them to cleanse the planet. 

I think this could be especially troublesome for cybernetics. If any part were to go rogue, it would seriously impair the individual in question. 


1 hour ago, House Balstroko said:

What's interesting is the position a lot of big names in science/IT have taken in regards to the future of AI http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/

I hasten to add that of these three, only Gates is a computer scientist (without a Ph.D., but clearly competent; I know his advisor, by the way).

It’s fair to say that fears of superintelligence (a particular AI scenario) are most firmly rooted among thinkers outside of Computer Science: philosophers, mathematicians, physicists, and so on. CS people (like me), in contrast, mostly have a different view. I consider General AI to be a very difficult problem and see no reason at all to believe that we will ever solve it.

However, Stuart Russell, a more notable voice than Musk and Hawking, is in complete disagreement with me and was recently interviewed on Sam Harris’s podcast: https://www.samharris.org/podcast/item/the-dawn-of-artificial-intelligence1 . His is probably the most trustworthy voice arguing for an imminent robot apocalypse.

For the record, though it should be clear from this thread: I believe that the problems posed by stupid (non-general) AI (which is just “algorithms”) are very real; no appeal to Skynet is necessary for worrying about AI.


12 hours ago, karaddin said:

I suspect that whether he has read it or not, our resident bitcoin and uber enthusiast has at least been influenced by these thoughts for example.

I have not. A weak attempt at associating their motives/worldview with mine without providing evidence. 

Quote

That's reductive.  There are all kinds of written and unwritten rules and norms that govern our behavior.  And the discussion here, I believe, is exactly about what those rules and norms should be for the behavior of AI and AI programmers.  Will those rules get broken?  Yes, of course they will, but what sorts of consequences should there be for breaking them?

Rules that cannot be enforced (prohibition, gun control, speech controls) undermine authority. 

A recent example of this is BitTorrent, a communications protocol that allows for uncensorable file sharing. No corporation or government has figured out a way to stop it (short of cutting off internet access entirely), because there is no BitTorrent CEO to arrest, no BitTorrent server to unplug, it's simply a language used by a distributed network. Bitcoin follows the same model (except with value instead of merely information). And Internet access in general is becoming impossible to limit (not there quite yet). 

Too much time/energy arguing about the way things should be, rather than finding ways to adapt to the way things are and will be. 

Understand that unethical technology or AI (however that is defined) doesn't need permission from any authority/majority to be created. 


2 minutes ago, Commodore said:

I have not. A weak attempt at associating their motives/worldview with mine without providing evidence. 

Rules that cannot be enforced (prohibition, gun control, speech controls) undermine authority. 

A recent example of this is BitTorrent, a communications protocol that allows for uncensorable file sharing. No corporation or government has figured out a way to stop it (short of cutting off internet access entirely), because there is no BitTorrent CEO to arrest, no BitTorrent server to unplug, it's simply a language used by a distributed network. Bitcoin follows the same model (except with value instead of merely information). And Internet access in general is becoming impossible to limit (not there quite yet). 

Too much time/energy arguing about the way things should be, rather than finding ways to adapt to the way things are and will be. 

Understand that unethical technology or AI (however that is defined) doesn't need permission from any authority/majority to be created. 

I disagree.  Let's take speech.  Right now, we exist in a country where there are no governmental restrictions on speech (generally speaking), but there are private limitations (e.g., codes of conduct enforced by websites and employers that come with consequences) as well as social mores, which also provide powerful limitations.  And we should consider the ethical underpinnings of those restrictions in each case.

I disagree with you on gun control, very powerfully, but that's not this thread.  If you want that discussion, open a thread.

There will always be bad actors in whatever construct we humans build.  But as social animals we have ways of enforcing our norms, whether through authoritarian control or through more subtle social measures.  This technology is new, but norms, official and unofficial, will develop around it.  E.g., take the telephone.  Norms built up right away, from how to greet each other (the ubiquitous "hello") to when it is appropriate to call.  Some have remained as the technology changed, others have not.  Therefore, I think it is, in fact, a fabulous idea to think as much as possible up front about what those norms should be and what enforcement, if any, should look like, so that as the technology adapts, there is a basis to adapt from.  A nihilistic, anarchist approach is exactly what we shouldn't have - there would be nothing to adapt from then.


Commodore, if I wanted to insult you I'd have no reluctance to do so, nor do I think any mild association from a comment like that could sully your reputation here. I just find your enthusiasm for bitcoin amusing, and that component of the piece made me think of you.

But I'm sceptical you can claim definitively that you haven't been influenced by any of this without knowing about it; I wouldn't have such confidence speaking about myself, because influence can be very subtle even when you disagree with it.

Happy Ent - agreed on the alt-right not being solely explained by that; it's a more complicated mix, and I tried to make that clear by saying "some", but that didn't really cut it.


7 hours ago, Happy Ent said:

Still… the Alt-Right is now based on a pillar of “religious traditionalism?” Last time I checked they were atheists!

It’s a fine piece that accurately describes some trends in internet culture, and which are good to know about. But as a description of the Alt-Right (whatever that means) it not only fails, but it seems to fail deliberately. The atheism (and general rejection of conservative social values), as well as the principled defence of basic civil liberties around information (privacy and anonymity) are very, very important trends in the Alt-Right as well, and even a cursory glance at “the movement” makes that clear (and is well reported, so hard to miss for a serious writer).

Still, this article, together with the “original” Breitbart article (an-establishment-conservatives-guide-to-the-alt-right (Mar 2016)) form a decent picture. At least from where I’m standing.

Oh, sorry - I wasn't assuming this was about the alt-right. I don't think it is in any meaningful way, any more than Kurzweil represents Democratic socialists. I thought this was more representative of a specific person in that area, as well as an interesting take on a future where technological development and its social implications far outpace the ability of democracy to deal with them. That, I thought, was a very good counterpoint to what @Commodore was saying or implying. A good common example is the very real problem of DDoS-style cyberbullying. This is a massive disruption of people's very real lives right now, impacting their careers and the way they work - and there are no reasonable laws or even an ability to prosecute.

Similarly, this year we had a very large amount of evidence that an autocratic state committed theft to influence the election of a democratic state. And as far as I can tell there's very little to be done about it. The US cannot respond in kind to an autocratic government; it appears to be doing nothing and has so far taken no actual action against it.


1 minute ago, Kalbear said:

A good common example is the very real problem of DDoS-style cyberbullying. This is a massive disruption of people's very real lives right now, impacting their careers and the way they work - and there are no reasonable laws or even an ability to prosecute.

There are, however, technological countermeasures.


7 minutes ago, Commodore said:

There are, however, technological countermeasures.

No, there aren't. You can't do that on Twitter. You can't do that on Facebook. It definitely doesn't work on IoT-based attacks or phone-based attacks. All of these fail for the same general reason: the most important property of anything people do on their computers is that it not waste their time. A website taking a few seconds longer to load means people have already moved on. If a tweet took multiple seconds to post, no one would post them. And you can't demand that a Nest thermostat spend its very limited computing time validating requests.

It works for bitcoin because bitcoin does not, as a rule, process massive numbers of transactions at a centralized location, and if a transaction takes a few seconds longer it's not the end of the world. It completely fails otherwise.
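For concreteness, one classic "technological countermeasure" against automated abuse is a hashcash-style client puzzle. The sketch below is an illustrative assumption, not anything Twitter, Facebook, or Nest actually deploys; it shows the latency trade-off at issue: each extra bit of difficulty doubles the expected work a client must burn before its request is accepted.

```python
import hashlib
import time

def proof_of_work(message: str, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 hash with `message` has
    `difficulty_bits` leading zero bits (hashcash-style puzzle)."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# The cost grows exponentially with difficulty: every extra bit doubles
# the expected number of hashes, which is exactly the latency the client
# pays before the server will accept the request.
start = time.perf_counter()
nonce = proof_of_work("post a tweet", difficulty_bits=16)
elapsed = time.perf_counter() - start
print(f"nonce={nonce}, solved in {elapsed:.3f}s")
```

Verification on the server side is a single hash, which is why the scheme is attractive where a few seconds of client latency is tolerable, and unattractive everywhere it isn't.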


1 hour ago, Commodore said:

I have not. A weak attempt at associating their motives/worldview with mine without providing evidence. 

Rules that cannot be enforced (prohibition, gun control, speech controls) undermine authority. 

A recent example of this is BitTorrent, a communications protocol that allows for uncensorable file sharing. No corporation or government has figured out a way to stop it (short of cutting off internet access entirely), because there is no BitTorrent CEO to arrest, no BitTorrent server to unplug, it's simply a language used by a distributed network. Bitcoin follows the same model (except with value instead of merely information). And Internet access in general is becoming impossible to limit (not there quite yet). 

Yeah, this isn't remotely true. First, you can kill the connection of anyone shown to be using BitTorrent one way or another; this is fairly trivial on any competent network. You can inspect packets for the BitTorrent headers and drop them. You can spam the network with garbage. These things aren't done here because we have a lot of laws around them and BitTorrent itself has not specifically been ruled illegal - but governments can (and have) blocked it successfully. Iran and North Korea have both censored it happily. China could in a heartbeat if it wanted.

Internet access is in general quite easy to censor. Perhaps that will change somewhat with drone-based Wi-Fi systems, but good luck flying a drone in China that they haven't approved of.
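To illustrate the packet-inspection point (assuming unencrypted traffic): the BitTorrent peer handshake begins with a fixed plaintext header, the byte 0x13 followed by the ASCII string "BitTorrent protocol", so a naive deep-packet-inspection filter needs only a prefix match. This toy classifier is a sketch, not a production DPI rule; modern clients respond with protocol encryption, which is the arms race described here.

```python
# The (unencrypted) BitTorrent peer handshake starts with a length byte
# (0x13 = 19) followed by the ASCII string "BitTorrent protocol".
HANDSHAKE_PREFIX = b"\x13BitTorrent protocol"

def looks_like_bittorrent(payload: bytes) -> bool:
    """Flag a TCP payload that begins with the plaintext handshake."""
    return payload.startswith(HANDSHAKE_PREFIX)

# A plaintext handshake is trivially spotted; ordinary web traffic is not.
print(looks_like_bittorrent(HANDSHAKE_PREFIX + bytes(48)))  # True
print(looks_like_bittorrent(b"GET / HTTP/1.1\r\n"))         # False
```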

 


Archived

This topic is now archived and is closed to further replies.
