
The Ethics of Artificial Intelligence


Mlle. Zabzie


Just now, Mlle. Zabzie said:

Why shouldn't there be any external bidirectional interfaces?  Curious.

Too easy to hack or disrupt. It's like saying you want people to rely on what they see based on what other people see. It turns a complicated problem into an incredibly painful one.

Just now, Mlle. Zabzie said:

This is probably right - but the civil liberties implications of this sort of thing are fairly astounding to me.  There could be a tendency toward a kind of broken-windows policing.

That's possible. Again, as long as there is no actual violation of civil liberties, I'd be okay with it - though I'd want a lot more oversight there specifically first. That said, fraud tends to be a significantly less murky case than prediction of criminal acts.

Just now, Mlle. Zabzie said:

I'm more worried about it taking the next step.  It could very well be used, in the near future as people get more comfortable with it, to do just those things.  Just because data exist doesn't mean the interpretations are true, but most people miss this.

Right, but that doesn't mean having the data is wrong. It means having safeguards and laws about how the data is used is the right way to go. I am in general in favor of anything that provides more information, and also in favor of checking that information with laws restricting its use. 


6 hours ago, Mlle. Zabzie said:

AI is here.  It's all over our daily lives in ways we don't necessarily understand, and will be even more so in ways we cannot anticipate.  This raises interesting ethical implications.

 

First:  self-driving cars.

2.  How should self-driving cars prioritize the lives of pedestrians/other cars vis-à-vis their passengers?

 

2. is the only one I've generally been set on: the life of the passenger should have a strong bias in its favor.

I wouldn't want to go to a doctor for my needs and have him do some utilitarian horror shit on me, and I wouldn't want to drive in a car that was willing to throw me over, I, Robot style, for a slightly higher chance of saving someone else.


23 minutes ago, Castel said:

2. is the only one I've generally been set on: the life of the passenger should have a strong bias in its favor.

I wouldn't want to go to a doctor for my needs and have him do some utilitarian horror shit on me, and I wouldn't want to drive in a car that was willing to throw me over, I, Robot style, for a slightly higher chance of saving someone else.

That is human nature in any event.  In fact, when you drive you will prioritize your own safety over your passengers' - so it would simply codify existing behavior (and what would happen in an override scenario anyway), IMO.


Zabzie, a book that you might want to check out is Weapons of Math Destruction by Cathy O’Neil, who knows the subject matter well and might share many of your societal concerns.

(Caveat: I would avoid the term AI, because it carries too much baggage about agency. “Algorithmic decision making” or something like that is a better term, which leads to clearer thinking about the problematic impact. I care about these issues a lot (I’m an algorithms professor).)

If you’re hesitant about investing the time to read a whole book, O’Neil was interviewed on the Econ Talk podcast, which I found worth listening to (while commuting or cleaning the house, say): Econ Talk podcast with O’Neil


The unpredictable effects of AI are already visible in the automation of manufacturing. As we've seen since the beginning of the Industrial Revolution, mass automation brings huge poverty to manufacturing regions.

Self-driving cars will raise the poverty rate due to the number of people now out of a job. And they won't get new jobs elsewhere, because that hasn't happened at any point in human history; businesses are reducing their working base, not expanding it - unless it's as a last resort.

I'd say that even though companies know that their workers are also their consumers, at this stage there are enough "emerging markets" such as India and China with growing middle classes that they can continue to lay off workers and replace them with AI and machines.

In the long term, though, this will lead to an overall reduction in quality of life. Part-time jobs are much more common than full-time, wages are low, and AI is to blame. The big ethical dilemma that humanity hasn't solved: just because we can automate tasks to remove humans from them, should we?

AI is even used in counselling and mediation in law firms - jobs you'd never have thought AI could do.


I'd say that with regard to law enforcement and terrorism prevention, using some sort of AI simply to flag items for further investigation is nothing we are not doing already. All you're doing is replacing, say, 50 human technicians with one computer program that will sift through data for patterns or look for key words, etc.

That sort of thing is probably already happening, and provided you're not giving the AI the keys to an F35 to bomb someone or the power to execute presumed criminals Judge Dredd style then it should present no issue. I think using the AI to supplement a process but still having humans make the final call would be the way forward for this sort of thing.
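To make that concrete, here's a minimal sketch of that flag-for-human-review pattern. Everything in it (the watchlist, the scores, the threshold) is invented for illustration; a real system would use trained models rather than a keyword list:

```python
# Minimal sketch of the "AI flags, human decides" pattern described
# above. Watchlist, scores and threshold are invented for illustration;
# a real system would use trained models, not a keyword list.
from dataclasses import dataclass, field

WATCHLIST = {"wire transfer": 2, "offshore": 3, "burner phone": 5}
REVIEW_THRESHOLD = 5

@dataclass
class Flag:
    message_id: int
    score: int
    hits: list = field(default_factory=list)

def score_message(message_id, text):
    """Score a message against the watchlist and return a Flag only if
    it crosses the review threshold. The program never acts on its own;
    it only queues items for a human analyst."""
    lowered = text.lower()
    hits = [kw for kw in WATCHLIST if kw in lowered]
    score = sum(WATCHLIST[kw] for kw in hits)
    return Flag(message_id, score, hits) if score >= REVIEW_THRESHOLD else None

# One program doing the sifting of 50 technicians; humans make every call.
messages = {1: "Meet at noon", 2: "Route it offshore via wire transfer"}
review_queue = [f for mid, text in messages.items()
                if (f := score_message(mid, text)) is not None]
for flag in review_queue:
    print(f"message {flag.message_id}: score {flag.score}, matched {flag.hits}")
```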

The self driving car is another issue entirely though. I remember seeing on TV once someone talking about what would happen if your car is driving along and a child steps into the road, but the only course of action is to swerve into oncoming traffic. How does the AI decide whose life is more important? Does it kill you to save the child? Or does it take you, the owner, to be the higher priority and keep you safe by killing the pedestrian?
I guess it's a bit spooky to think that the AI would just use cold hard logic to solve that problem, whereas a human could be swayed emotionally. Maybe the AI's way of thinking is best, but it's a pretty harsh thing to come to terms with from a human perspective.

As for automating jobs, I guess it can go either way, can't it?
I mean more automation means fewer jobs, but potentially also more people doing less work and spending time with their families or leading healthier lifestyles? Fewer people needing to work in mines to get raw materials means fewer injuries and fewer long-term health problems. So that takes the load off healthcare in the future.

I guess really long term you're aiming for some sort of utopia where the machines do all the nasty stuff and we just kick back and live luxurious lives, but the in-between part is the bit people worry about. Maybe it's all a pipe-dream anyway?


1 hour ago, Happy Ent said:

Zabzie, a book that you might want to check out is Weapons of Math Destruction by Cathy O’Neil, who knows the subject matter well and might share many of your societal concerns.

Weapons of Math Destruction is an awesome book. You might also want to check out Data and Society's work on this topic (or any of their other research for that matter, it's all pretty fascinating). Share Lab and ProPublica's Julia Angwin have done a ton of interesting work on bias in algorithms. It basically comes down to Kranzberg's first law of technology, 'technology is neither good nor bad, nor is it neutral'. Algorithms are only as impartial as the people who write them, the people who use them and the data which is fed into them. It's a case of bias in, bias out.

I think there's also a problem in some cases (as is so often the case when working with data) with confusion about what algorithms actually do - for example, an algorithm based in historical crime data might predict that black people are more likely to be charged with drug offences, but that shouldn't be conflated with concluding that black people are necessarily more likely to commit drug offences. What it reflects is as much about patterns of over-policing black communities and racism in the justice system as it is about offending. In some cases you get the sense that people don't really understand what questions they're asking the algorithm to solve, and therefore don't really understand the answers they're getting either.
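That charges-versus-offences confusion is easy to demonstrate with a toy simulation (all numbers invented): give two neighbourhoods the identical offending rate, police them at different intensities, and the arrest data, which is all the algorithm ever sees, comes out looking completely different:

```python
# Toy simulation of the charges-vs-offences confusion above. All
# numbers are invented. Both neighbourhoods offend at the SAME rate;
# only policing intensity differs, yet the arrest records (the data a
# predictive algorithm is actually trained on) diverge sharply.
import random

random.seed(0)
OFFENDING_RATE = 0.05              # identical in both neighbourhoods
POLICING = {"A": 0.8, "B": 0.2}    # chance an offence is detected and charged

arrests = {}
for hood, detection in POLICING.items():
    offences = sum(random.random() < OFFENDING_RATE for _ in range(10_000))
    arrests[hood] = sum(random.random() < detection for _ in range(offences))

print(arrests)  # roughly {'A': ~400, 'B': ~100}
# An algorithm trained on these arrest records "learns" that A is about
# four times more criminal than B, when all it has measured is policing.
```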


20 minutes ago, Lordsteve666 said:

 

The self driving car is another issue entirely though. I remember seeing on TV once someone talking about what would happen if your car is driving along and a child steps into the road, but the only course of action is to swerve into oncoming traffic. How does the AI decide whose life is more important? Does it kill you to save the child? Or does it take you, the owner, to be the higher priority and keep you safe by killing the pedestrian?
I guess it's a bit spooky to think that the AI would just use cold hard logic to solve that problem, whereas a human could be swayed emotionally. Maybe the AI's way of thinking is best, but it's a pretty harsh thing to come to terms with from a human perspective.

This is why I like the idea of self-driving cars that only work on interstates. Driving for 15 minutes through your neighborhood with kids everywhere? You have to drive yourself. Going on a cross-country trip of 100s of miles? Initiate self-driving, with the person only taking over when getting off the interstate. Not saying a kid couldn't walk on to an interstate, but it seems this may lower the chance of this situation occurring.


12 minutes ago, A True Kaniggit said:

This is why I like the idea of self-driving cars that only work on interstates. Driving for 15 minutes through your neighborhood with kids everywhere? You have to drive yourself. Going on a cross-country trip of 100s of miles? Initiate self-driving, with the person only taking over when getting off the interstate. Not saying a kid couldn't walk on to an interstate, but it seems this may lower the chance of this situation occurring.

Yeah it's a tricky one. I mean the one area self-drive is being pushed quite a bit is with delivery vehicles like trucks and vans. These are mostly used in built-up areas, except for the long-distance sort of trucking on interstates etc. So these vehicles will come into contact with pedestrians or the unexpected things that happen in built-up areas more often.

I'm guessing any vehicle would need some sort of map system where it can only operate on its own on specific roads? And maybe a real-time system where self drive is blocked in certain situations (like icy weather, roadworks, near accidents).
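Something like the sketch below is what I'd imagine: a whitelist of mapped roads plus a live condition feed that can veto self-drive in real time (road IDs and condition names are made up for illustration):

```python
# Sketch of the idea above: autonomy is only permitted on a whitelist
# of mapped roads, and a real-time feed of conditions can veto it.
# Road IDs and condition names are made up for illustration.
APPROVED_ROADS = {"I-80", "I-95", "M6"}          # mapped, surveyed routes
BLOCKING_CONDITIONS = {"ice", "roadworks", "accident_nearby"}

def self_drive_allowed(road_id, live_conditions):
    """Permit self-drive only on approved roads with no blocking condition."""
    if road_id not in APPROVED_ROADS:
        return False                     # unmapped road: human drives
    if set(live_conditions) & BLOCKING_CONDITIONS:
        return False                     # real-time veto (weather, roadworks)
    return True

print(self_drive_allowed("I-80", []))         # True
print(self_drive_allowed("I-80", ["ice"]))    # False: blocked by weather
print(self_drive_allowed("Elm Street", []))   # False: not whitelisted
```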


6 hours ago, Happy Ent said:

Zabzie, a book that you might want to check out is Weapons of Math Destruction by Cathy O’Neil, who knows the subject matter well and might share many of your societal concerns.

(Caveat: I would avoid the term AI, because it carries too much baggage about agency. “Algorithmic decision making” or something like that is a better term, which leads to clearer thinking about the problematic impact. I care about these issues a lot (I’m an algorithms professor).)

If you’re hesitant about investing the time to read a whole book, O’Neil was interviewed on the Econ Talk podcast, which I found worth listening to (while commuting or cleaning the house, say): Econ Talk podcast with O’Neil

Thanks!  Will definitely check it out.

5 hours ago, Yukle said:

The unpredictable effects of AI are already visible in the automation of manufacturing. As we've seen since the beginning of the Industrial Revolution, mass automation brings huge poverty to manufacturing regions.

Self-driving cars will raise the poverty rate due to the number of people now out of a job. And they won't get new jobs elsewhere, because that hasn't happened at any point in human history; businesses are reducing their working base, not expanding it - unless it's as a last resort.

I'd say that even though companies know that their workers are also their consumers, at this stage there are enough "emerging markets" such as India and China with growing middle classes that they can continue to lay off workers and replace them with AI and machines.

In the long term, though, this will lead to an overall reduction in quality of life. Part-time jobs are much more common than full-time, wages are low, and AI is to blame. The big ethical dilemma that humanity hasn't solved: just because we can automate tasks to remove humans from them, should we?

AI is even used in counselling and mediation in law firms - jobs you'd never have thought AI could do.

Actually, my own prediction is not that quality of life will go down in the long run, but rather that we as a society will start to value different things, so that goods/services/whatever that are not currently valued, or may not even exist, will be what is produced.  It will suck, as always, for the generations in transition.

4 hours ago, Arkhangel said:

Weapons of Math Destruction is an awesome book. You might also want to check out Data and Society's work on this topic (or any of their other research for that matter, it's all pretty fascinating). Share Lab and ProPublica's Julia Angwin have done a ton of interesting work on bias in algorithms. It basically comes down to Kranzberg's first law of technology, 'technology is neither good nor bad, nor is it neutral'. Algorithms are only as impartial as the people who write them, the people who use them and the data which is fed into them. It's a case of bias in, bias out.

I think there's also a problem in some cases (as is so often the case when working with data) with confusion about what algorithms actually do - for example, an algorithm based in historical crime data might predict that black people are more likely to be charged with drug offences, but that shouldn't be conflated with concluding that black people are necessarily more likely to commit drug offences. What it reflects is as much about patterns of over-policing black communities and racism in the justice system as it is about offending. In some cases you get the sense that people don't really understand what questions they're asking the algorithm to solve, and therefore don't really understand the answers they're getting either.

Thanks - agree with this in general.  I really think our problems will come from biased algorithms in the first place.  I'm not concerned with the data in and of itself, but rather about the USE of the data and the plausible deniability that "facts" give to humans.


36 minutes ago, Mlle. Zabzie said:

I really think our problems will come from biased algorithms in the first place.  I'm not concerned with the data in and of itself, but rather about the USE of the data and the plausible deniability that "facts" give to humans.

Yes. 

The data is going to be skewed to start with because of current issues affecting crime statistics. Things like certain parts of society not reporting certain crimes, and people assuming crime is more likely in poorer areas without evidence on the ground to back that assumption up.
So the algorithm is probably going to be coded in such a way that it unintentionally targets specific groups; it literally doesn't know any better. You feed it garbage data and it'll make garbage predictions.

The second issue is how the humans handle the output. Do they take it all at face value and just round up the targets given to them? Or do they ignore what they can see is obviously biased? How will they even recognize if the AI is being biased or giving false positives?
What happens if the algorithm works well but the human reading it just doesn't like the results because it goes against what they think is really going on?
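On false positives specifically, there's a base-rate problem that makes the human's job harder than it sounds; a quick back-of-envelope calculation (all rates invented for illustration) shows why:

```python
# Back-of-envelope base-rate arithmetic for the false-positive worry
# above. Every rate here is invented for illustration.
population  = 1_000_000
prevalence  = 0.0005    # 1 in 2,000 people are actual offenders
sensitivity = 0.99      # system flags 99% of true offenders
false_pos   = 0.01      # system wrongly flags 1% of innocent people

true_flags  = population * prevalence * sensitivity        # ~495
false_flags = population * (1 - prevalence) * false_pos    # ~9,995
precision   = true_flags / (true_flags + false_flags)

print(f"{precision:.1%} of flagged people are actual offenders")  # ~4.7%
# Even a "99% accurate" system produces mostly false positives when
# the thing it looks for is rare, which is exactly why the human
# reading the output can't just round up everyone it flags.
```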

I'm sure some of this stuff does work though. I remember seeing a TV documentary where they followed these police officers in LA, I think, and they were using some computer algorithm to predict where crimes were likely, using "big data" gathered from all sorts of sources. It actually worked, and even the beat cops were astounded that it helped them out and got good results. Link
It's not quite the same as using a full AI but it's the same sort of direction. At the end of the day the guys on the ground were still there to make sure it worked properly, and they knew when things were making sense to them.


2 hours ago, Lordsteve666 said:

I'm sure some of this stuff does work though. I remember seeing a TV documentary where they followed these police officers in LA, I think, and they were using some computer algorithm to predict where crimes were likely, using "big data" gathered from all sorts of sources. It actually worked, and even the beat cops were astounded that it helped them out and got good results. Link
It's not quite the same as using a full AI but it's the same sort of direction. At the end of the day the guys on the ground were still there to make sure it worked properly, and they knew when things were making sense to them.

To be really clear, this is EXACTLY the same as using a full AI. This is what artificial intelligence is. Similar to @Happy Ent, I'd be cautious of thinking of AI as HAL or something like that. We are a long way away from that (or possibly not), but it doesn't matter, because a computer doesn't have to be smart like us in order to be exceptionally useful and dangerous.


8 hours ago, Lordsteve666 said:

The self driving car is another issue entirely though. I remember seeing on TV once someone talking about what would happen if your car is driving along and a child steps into the road, but the only course of action is to swerve into oncoming traffic. How does the AI decide whose life is more important? Does it kill you to save the child? Or does it take you, the owner, to be the higher priority and keep you safe by killing the pedestrian?

An AI can slam on the brakes faster than a human driver, increasing the chances of the kid surviving the impact, and it could probably do a more controlled swerve around an obstacle while avoiding oncoming traffic. Intentionally causing an unavoidable head-on collision to avoid hitting a pedestrian would be stupid - even if you prioritise kids over adults the oncoming car might be full of babies, and you can't predict what damage other cars swerving to avoid the crash would do. The AI rules should boil down to "try not to hit anything, and if it's unavoidable, hit whatever will do the least damage". Self-driving cars will kill kids at some point - they'll step out in front of cars with no time to swerve even with perfect reflexes. But switching over to self-driving cars will result in a lot fewer kids (and adults) getting killed than human-driven cars do now.
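The reaction-time advantage alone is worth putting numbers on, using the standard stopping-distance formula (reaction distance v·t plus braking distance v²/2a). The reaction times and deceleration below are illustrative figures, not measurements:

```python
# Rough stopping-distance arithmetic for the braking point above:
# total = reaction distance (v * t_react) + braking distance (v^2 / 2a).
# Reaction times and deceleration are illustrative, not measured values.
def stopping_distance(speed_kmh, reaction_s, decel=7.0):
    v = speed_kmh / 3.6                       # convert km/h to m/s
    return v * reaction_s + v**2 / (2 * decel)

for label, t_react in [("human (~1.5 s reaction)", 1.5),
                       ("computer (~0.1 s reaction)", 0.1)]:
    print(f"{label}: {stopping_distance(50, t_react):.1f} m from 50 km/h")
# human:    ~34.6 m
# computer: ~15.2 m
# At 50 km/h the earlier braking alone cuts the stopping distance by
# more than half, before any clever swerving comes into it.
```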

8 hours ago, Lordsteve666 said:

I guess really long term you're aiming for some sort of utopia where the machines do all the nasty stuff and we just kick back and live luxurious lives, but the in-between part is the bit people worry about.

The problem is capitalism. Under a sane economic system, automation would result in a steady decrease in working hours for everyone (non-automated jobs would be shared by more people) with no loss of pay, but under capitalism it just means lots of people ending up unemployed and a handful getting even more incredibly wealthy.


One other thing, at the risk of stepping on the Americans’ toes:

Bias is inevitable. Bias is the whole idea. An algorithm that would be unbiased would make downright terrible decisions. The algorithm isn’t broken if it is biased—instead, it works according to specifications. All inference is bias, all deduction is bias.

Being wrong is a problem. Being biased is not. Some may think that bias is a moral problem, but this is an epistemological non-starter. It posits that all groups, no matter how you classify (age, sex, height, body mass index, wealth, zip code, nationality), are identical in needs, abilities, etc. That position is not only false, it is also non-operational. You cannot make any decisions based on data if you don’t bias, but data-driven decision making was the whole point of the exercise (as opposed to the only unbiased method: random decision making.)

I understand the instinctive aversion to this situation. In particular, I understand the objection “Yes, but what I meant was unfair bias, or wrong generalisation, or …” But these are all non-starters. The effect of correct bias is just as unfair (on the individual recipient) as that of incorrect bias. Whether you are unfairly treated as a child or fairly treated as a child (given you are a child) makes no difference: you are treated as a child. The system correctly infers biases based on (in this case) age, and you can replace age with lots of other variables, including taboo ones.

The O’Neil book has a good example about zip codes. The US allows discrimination based on zip code. It disallows discrimination based on race, for historical reasons. So algorithms (and people) discriminate based on zip code, which makes wonderfully correct predictions. In fact, better predictions than race. (Race predicts socioeconomic status well, but zip code does it better.) Now, both biases (the legal one and the illegal one) are factually correct (in that they are statistically sound). They work exactly as we want them to work. They discriminate (which is what they were designed to do) based on correct biases. That was the point. Of course, if you are targeted by these decisions (whether inferred by a human or a piece of code), you are just as screwed. If you failed to choose your parents wisely, you are subject to a completely correct and biased decision about which you had little choice.

(The solution is of course principled Equality Before the Law, but that is a moral-political decision equally abhorrent to the Right and Left, and one that is orthogonal to this thread, because it has little to do with algorithms.)

But bias is exactly what we want from these algorithms. And the world is ugly and terribly biased, so the better the algorithms are, the more biased they will be. This is not in itself the problem. The problem arises from which decisions we base on the results.
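To see how little banning the taboo variable helps, here's a toy example (all data invented): drop the forbidden attribute, and a correlated, perfectly legal proxy reproduces nearly the same discrimination, because the bias is the predictive signal:

```python
# Toy illustration of the zip-code point above; all data is invented.
# Drop the taboo variable and a correlated legal proxy recovers nearly
# the same (statistically correct) discrimination.
import random

random.seed(1)
rows = []
for _ in range(5_000):
    group   = random.choice([0, 1])                            # taboo variable
    zipcode = group if random.random() < 0.9 else 1 - group    # 90% proxy
    outcome = group if random.random() < 0.7 else 1 - group    # thing predicted
    rows.append((group, zipcode, outcome))

def outcome_rate(column, value):
    """Observed outcome rate among rows where the given column matches."""
    selected = [r for r in rows if r[column] == value]
    return sum(r[2] for r in selected) / len(selected)

print("P(outcome | group=1)   =", round(outcome_rate(0, 1), 2))  # ~0.70
print("P(outcome | zipcode=1) =", round(outcome_rate(1, 1), 2))  # ~0.66
# The legal proxy recovers almost all of the forbidden signal: the
# algorithm is "correctly biased", exactly as specified.
```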


For cars (and in local context).

1) Speed limits are determined by car-environment interaction as it is (sound, visibility, environment etc.); no reason to change that for AI. Tailgating can be more fuel-efficient IIRC, so 'trains' of self-driving cars seem inevitable. The issue will be how these will influence traffic flow.

2) Normal liability. E.g. in an accident involving a weaker participant in the traffic (pedestrian, cyclist etc.) the car is always liable. Which means this will be an economic rather than an ethical issue, I am sorry to say.

For taxes

1) Seems like a great idea to be honest. Even though it will probably clash with local privacy protection. Again, a legal matter rather than a practical one.

For criminality

1) Possible to some extent.

2) However, AI models will be based on their input, and given the current climate that will be garbage in, garbage out.

3) Yes.

 


On 11/21/2016 at 6:10 PM, Mlle. Zabzie said:

That is human nature in any event.  In fact, when you drive you will prioritize your own safety over your passengers' - so it would simply codify existing behavior (and what would happen in an override scenario anyway), IMO.

You'd think, but some people do get caught up in utilitarian land where all sorts of hypotheticals are considered (I remember seeing one about potentially doping parents to keep them together).

 


I think the obvious place for looking at job losses is also the one that has had the most time already spent discussing it, which is manufacturing, but other industries are even less prepared for a collapse in demand for workers. How many problems is it going to cause when 75% of lawyers suddenly aren't needed? Software is already at least as capable for an awful lot of their tasks, and unlike manual labour, they've invested years and probably more than a mortgage (US averages here) in an education that is suddenly worthless.  There is still going to be demand for some - anyone who actually goes into court and makes arguments to other people, for example - but a lot of the work is just going to disappear.

Another example of where this stuff is already being used in (at times) concerning fashion is facial recognition and things like border control. I saw last week that a Canadian woman en route to a conference in the US was flagged as a sex worker by facial recognition software, and denied entry on a flimsy assumption of intent to break the law. Privacy of movement is going to disappear completely with this as it gets more sophisticated and has access to greater data streams.

I don't have much hope for effective regulation on the use of harvested data for things like medical insurance premiums in the US when so many rules are structured to serve business not the people.


15 hours ago, karaddin said:

I think the obvious place for looking at job losses is also the one that has had the most time already spent discussing it, which is manufacturing, but other industries are even less prepared for a collapse in demand for workers. How many problems is it going to cause when 75% of lawyers suddenly aren't needed? Software is already at least as capable for an awful lot of their tasks, and unlike manual labour, they've invested years and probably more than a mortgage (US averages here) in an education that is suddenly worthless.  There is still going to be demand for some - anyone who actually goes into court and makes arguments to other people, for example - but a lot of the work is just going to disappear.

Another example of where this stuff is already being used in (at times) concerning fashion is facial recognition and things like border control. I saw last week that a Canadian woman en route to a conference in the US was flagged as a sex worker by facial recognition software, and denied entry on a flimsy assumption of intent to break the law. Privacy of movement is going to disappear completely with this as it gets more sophisticated and has access to greater data streams.

I don't have much hope for effective regulation on the use of harvested data for things like medical insurance premiums in the US when so many rules are structured to serve business not the people.

Accounting is another good example of what you bring up in your first paragraph; a profession that is severely threatened by automation in the near future, where the current practitioners have both invested considerable amounts of time and money into getting their degrees, and are used to high living standards as a result. Here is a list of the jobs that are estimated to be the most and least threatened, by the way, based on a 2014 Oxford study. A pretty interesting read: http://www.thisismoney.co.uk/money/news/article-2642880/Table-700-jobs-reveals-professions-likely-replaced-robots.html

The general gist of it is that the more routine tasks a job consists of, the more likely it is to be robotized, and vice versa.


On 11/24/2016 at 11:36 PM, Commodore said:

the question you should be asking is not whether these technologies should be stopped/controlled, but whether they can be

forget self-driving cars, self-owning cars are not far off

Well, what do you think?  Do you think we have Skynet in our future?  If so, what should we do about it, if anything?  And then there are all those interesting questions that Sci Fi loves to play with regarding the ethical implications of self-aware AI.  Someone upthread did, however, correct me somewhat, and I accept the correction - what I'm focused on now has a lot to do with big data and predictive AI.  It's not the Terminator... yet.  Thoughts?

On 11/25/2016 at 11:55 AM, Khaleesi did nothing wrong said:

The general gist of it is that the more routine tasks a job consists of, the more likely it is to be robotized, and vice versa.

Yes - mechanization of routine tasks has been occurring for centuries now, and will continue.  The question in my mind is what the next generation of humans with more leisure time will do to fill it.  Something will happen, and we, as a society, will start to value things that we don't even know we want right now. I just don't know what those things are yet.


