
The Ethics of Artificial Intelligence


Mlle. Zabzie


AI is here.  It's all over our daily lives in ways we don't necessarily understand, and it will be even more so in ways we cannot anticipate.  This raises interesting ethical questions.

 

First:  self-driving cars.

1.  Should self-driving cars be permitted to exceed the legal speed limit?  To tailgate?

2.  How should self-driving cars prioritize the lives of pedestrians/other cars vis-à-vis their passengers?

3.  What is the role of government regulation in setting these standards?  Should self-driving cars be required to use a single interface to "talk" to one another?

Second:  AI for compliance.  It was reported today in the tax press (which I cannot link because it is subscription-only) that the IRS is seriously looking at AI solutions to monitor and police compliance.

1.  What, if any, sorts of limitations should be put on the AI's use of data?  Should it be permitted to use data in the public realm (e.g., public Facebook posts)?  Should it be permitted to access data from banking/similar sources (through enhanced 1099-type reporting or by simply looking at their systems)?  Or should it be limited to data submitted to the IRS?

2.  Do we think that AI will be better or worse than humans at identifying evasion?  (Note that audit flags do currently exist.)

Third:  AI for identifying criminal activity - let's use terrorism to be controversial.

1.  Is it possible to program an unbiased AI to predict terrorist-type activity?

2.  Do we think it would be better or worse at identifying such activity?

3.  Are false positives a concern with respect to this identification?

4.  What restrictions, if any, should be put on the use of the data generated by the AI?

 

Discuss any or all of the above.


1 minute ago, Arch-MaesterPhilip said:

Are we talking about removing people from the equation entirely, or would there be some sort of override for when human judgement is required?

That's part of the question here.  And do you think human intervention would improve or degrade the performance of the AI?  I could see that going either way, particularly in the self-driving car and terrorist-identifying AI examples.


6 minutes ago, Mlle. Zabzie said:

That's part of the question here.  And do you think human intervention would improve or degrade the performance of the AI?  I could see that going either way, particularly in the self-driving car and terrorist-identifying AI examples.

I don't see it as necessarily improving performance but as adding a safeguard. I'm terrified of AI becoming prevalent and human judgement being removed from the equation. There have to be situations, in the case of the car and in law enforcement in general, where the human touch is required.


29 minutes ago, Arch-MaesterPhilip said:

I don't see it as necessarily improving performance but as adding a safeguard. I'm terrified of AI becoming prevalent and human judgement being removed from the equation. There have to be situations, in the case of the car and in law enforcement in general, where the human touch is required.

I'm actually particularly worried about this in law enforcement.  That is, the AI will have biases (implicit or explicit) as part of its programming, and the human can then use the AI as moral cover (well, the algorithm says so, so it must be true).  If it is easy to override in the self-driving car example, doesn't that limit the efficacy of the AI?  And how long before people trust the AI and stop overriding it altogether?


1 minute ago, Mlle. Zabzie said:

I'm actually particularly worried about this in law enforcement.  That is, the AI will have biases (implicit or explicit) as part of its programming, and the human can then use the AI as moral cover (well, the algorithm says so, so it must be true).  If it is easy to override in the self-driving car example, doesn't that limit the efficacy of the AI?  And how long before people trust the AI and stop overriding it altogether?

My understanding of self-driving cars right now is that a licensed, non-impaired driver still needs to be ready to assume control, and it should stay that way. Plus, cars can already be hacked as it is; I have concerns that more advanced cars will be more vulnerable to hacking. There are going to be situations where AI should be in control, but others where I'm not so sure.

And I'm not sure I want it anywhere near law enforcement: in addition to the biases of the programmers, you also lose the empathy that you get from people.


51 minutes ago, Mlle. Zabzie said:

AI is here.  It's all over our daily lives in ways we don't necessarily understand, and it will be even more so in ways we cannot anticipate.  This raises interesting ethical questions.

 

First:  self-driving cars.

1.  Should self-driving cars be permitted to exceed the legal speed limit?  To tailgate?

2.  How should self-driving cars prioritize the lives of pedestrians/other cars vis-à-vis their passengers?

3.  What is the role of government regulation in setting these standards?  Should self-driving cars be required to use a single interface to "talk" to one another?

Second:  AI for compliance.  It was reported today in the tax press (which I cannot link because it is subscription-only) that the IRS is seriously looking at AI solutions to monitor and police compliance.

1.  What, if any, sorts of limitations should be put on the AI's use of data?  Should it be permitted to use data in the public realm (e.g., public Facebook posts)?  Should it be permitted to access data from banking/similar sources (through enhanced 1099-type reporting or by simply looking at their systems)?  Or should it be limited to data submitted to the IRS?

2.  Do we think that AI will be better or worse than humans at identifying evasion?  (Note that audit flags do currently exist.)

Third:  AI for identifying criminal activity - let's use terrorism to be controversial.

1.  Is it possible to program an unbiased AI to predict terrorist-type activity?

2.  Do we think it would be better or worse at identifying such activity?

3.  Are false positives a concern with respect to this identification?

4.  What restrictions, if any, should be put on the use of the data generated by the AI?

 

Discuss any or all of the above.

Self-Driving Cars

I believe you can only have truly self-driving vehicles when all vehicles are self-driving.  I think mixing self-driving and human-driven cars is a recipe for disaster.

1.  a) Fully self-driving:  speed limits will be higher on motorways, but the cars will need to stick to those limits.  Speed limits where there are potential pedestrians need to remain the same.  Tailgating is not really an issue, although the cars need to maintain a minimum distance dependent on speed and road conditions.  They can drive closer to each other much more safely than humans can.

b) Mixed human-driven + self-driving cars:  absolutely need to stick to speed limits, and no tailgating.  The distance between cars must be greater.  This is because humans often think "hey, if others can do it then so can I" and see nothing wrong with it.  Also, police need to be able to see immediately which cars are breaking the rules, and having different rules for human-driven and self-driving cars makes this hard.

2)  This is the hard one, and this is why I believe we should have only self-driving cars or only human-driven cars.  I think they should prioritize whatever would cause the least harm overall.  Assuming running some pedestrians over = the same harm as taking action to avoid the pedestrians and killing the occupants of the car, then prioritize against whoever was in the wrong place.  If the pedestrians were in the road (other than at a designated crossing), protect the car occupants.  If they were on the pavement, then protect them.
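The decision rule being described - minimize total harm, and break ties against whoever was in the wrong place - could be sketched roughly like this. This is a toy illustration only; the option names, casualty counts, and right-of-way flags are invented, not a real driving policy:

```python
# Toy sketch of the least-harm rule described above. The option list
# and its numbers are invented for illustration; not a real policy.

def choose_action(options):
    """Pick the option with the fewest expected deaths; on a tie,
    prefer harming whoever was in the wrong place (no right of way)."""
    # Each option: (name, expected_deaths, victims_had_right_of_way)
    return min(options, key=lambda o: (o[1], o[2]))

# Pedestrians standing in the road (not at a crossing) vs. swerving
# and killing the car's occupants - equal deaths, so the tie-break
# goes against the pedestrians, who were in the wrong place:
options = [
    ("hit_pedestrians", 2, False),
    ("swerve_killing_occupants", 2, True),
]
print(choose_action(options)[0])  # -> hit_pedestrians
```

The tie-break works because Python compares the key tuples element by element, and `False` sorts before `True`.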

 

Part 2:  Terrorism

1 - Theoretically it should be possible.  It's just going to be very hard for it to be totally unbiased.

2 - It will be better at identifying the flags it has been programmed or has learned to spot.  Humans will be better at spotting new patterns, at least at first.

3 - No, provided this is just used as a tool to prompt further investigation: it's not the only tool used, nor is just being picked out by the AI program any grounds for arrest/interrogation.

4 - Lots of restrictions and safeguards (I just can't say exactly what they should be), and human oversight.


I'll think on this a bit. We're starting to recommend the use of AI to insurance companies and to help them build/develop that kind of capability (some are already working on it, like Swiss Re with IBM Watson). There are some thoughtful questions here that I'd have to think on.


Just now, Mexal said:

I'll think on this a bit. We're starting to recommend the use of AI to insurance companies and to help them build/develop that kind of capability (some are already working on it, like Swiss Re with IBM Watson). There are some thoughtful questions here that I'd have to think on.

Thanks - would love your perspective.  I think these are really interesting and relevant issues.  They are outside my practice, basically, but I'm generally interested in ethics, so these sorts of things kick around in my brain.  I'm also a civil libertarian, so my understanding and instincts may be skewed.


Just now, Mlle. Zabzie said:

Thanks - would love your perspective.  I think these are really interesting and relevant issues.  They are outside my practice, basically, but I'm generally interested in ethics, so these sorts of things kick around in my brain.  I'm also a civil libertarian, so my understanding and instincts may be skewed.

I tend to think about their uses in the insurance world, since that's what I do, but I don't tend to ponder the ethics of it. That's why I want to noodle on it. I think AI is the future, especially in the underwriting space, and that's what we're exploring right now.


1 minute ago, Mlle. Zabzie said:

@Pebbles thanks for the thoughtful responses.  Won't there necessarily be a transition period on self-driving cars?  I feel like they are the future one way or another.

 

Yes there will be, and it will be chaos.  I hope I'm wrong.  Until we are totally self-driving, both types of driver need to follow the same rules, although humans will break them, especially maintaining the distance between cars.  Human drivers will nip into those "spaces" and the self-driving car will keep dropping back and be forced to drive slower.

 

I think the transition will start with self-driving lanes (like bus lanes), and maybe only self-driving cars will be allowed to use the outside lane on a motorway, reducing the space available to human drivers.  Then, when enough people have self-driving cars, maybe only self-driving cars will be allowed between certain hours of the day.

 

 

The ethical question:  who does the self-driving car kill when the incident is caused by a human driver?

 

Let's say a human driver on the wrong side of the road causes an oncoming self-driving bus to swerve to avoid them; the human driver has 3 extra passengers (assume the bus can't go off the road - maybe there's a cliff or something).  Your car, with just 2 people in it, now has the choice of taking out 2 pedestrians, hitting the bus head-on (probably causing the death of the car's human passengers and many people on the bus), or swerving off that cliff, killing the occupants of the self-driving car but also risking hitting the human-driven car with all its occupants.

Obviously taking out the pedestrians causes the same number of deaths as driving off the cliff, but driving off the cliff has the added risk of also taking out the human-driven car.

Risk-based probability to me says take out the pedestrians, since the self-driving car can't be sure what the human-driven car will do.  If the self-driving car could guarantee that driving off the cliff would not risk a collision with the human-driven car, then I think that's what it should do: the lives lost are the same, but at least the pedestrians were in the correct place and will still be in the correct place after any avoiding action is taken.  Whatever the result, though, the lawyers win some massive fees.
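That "risk-based probability" argument can be made concrete as an expected-deaths comparison. The 30% collision probability and the passenger counts below are invented purely to illustrate the shape of the calculation:

```python
# Toy expected-deaths comparison for the cliff scenario above.
# The probabilities and counts are invented for illustration.

def expected_deaths(certain_deaths, extra_deaths=0, p_extra=0.0):
    """Certain deaths plus probability-weighted knock-on deaths."""
    return certain_deaths + p_extra * extra_deaths

# Option A: hit the two pedestrians (no knock-on risk).
hit_pedestrians = expected_deaths(2)

# Option B: drive off the cliff (the 2 occupants die), with an assumed
# 30% chance of also hitting the human-driven car (4 people aboard).
off_cliff = expected_deaths(2, extra_deaths=4, p_extra=0.3)

print(hit_pedestrians, off_cliff)  # 2 vs 3.2 - the cliff looks worse
```

On these made-up numbers, the cliff option carries more expected deaths precisely because of the uncertainty about the human-driven car.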

 


59 minutes ago, Mexal said:

I tend to think about their uses in the insurance world, since that's what I do, but I don't tend to ponder the ethics of it. That's why I want to noodle on it. I think AI is the future, especially in the underwriting space, and that's what we're exploring right now.

The underwriting implications are really fascinating.  One question in my mind is what sorts of information should be available for this.  Should the AI be able to use any publicly available information?  Facial recognition software for photos?


1 hour ago, Pebbles said:

 

Yes there will be, and it will be chaos.  I hope I'm wrong.  Until we are totally self-driving, both types of driver need to follow the same rules, although humans will break them, especially maintaining the distance between cars.  Human drivers will nip into those "spaces" and the self-driving car will keep dropping back and be forced to drive slower.

 

I think the transition will start with self-driving lanes (like bus lanes), and maybe only self-driving cars will be allowed to use the outside lane on a motorway, reducing the space available to human drivers.  Then, when enough people have self-driving cars, maybe only self-driving cars will be allowed between certain hours of the day.

 

 

The ethical question:  who does the self-driving car kill when the incident is caused by a human driver?

 

Let's say a human driver on the wrong side of the road causes an oncoming self-driving bus to swerve to avoid them; the human driver has 3 extra passengers (assume the bus can't go off the road - maybe there's a cliff or something).  Your car, with just 2 people in it, now has the choice of taking out 2 pedestrians, hitting the bus head-on (probably causing the death of the car's human passengers and many people on the bus), or swerving off that cliff, killing the occupants of the self-driving car but also risking hitting the human-driven car with all its occupants.

Obviously taking out the pedestrians causes the same number of deaths as driving off the cliff, but driving off the cliff has the added risk of also taking out the human-driven car.

Risk-based probability to me says take out the pedestrians, since the self-driving car can't be sure what the human-driven car will do.  If the self-driving car could guarantee that driving off the cliff would not risk a collision with the human-driven car, then I think that's what it should do: the lives lost are the same, but at least the pedestrians were in the correct place and will still be in the correct place after any avoiding action is taken.  Whatever the result, though, the lawyers win some massive fees.

 

Self-driving lanes are probably the way to go, and in a lot of parts of the States, at least, there are HOV lanes that could be repurposed if we so wanted.  And my instinct on the AI is that if there is a human operator with an override, the operator will use it to save her own life.


18 minutes ago, Mlle. Zabzie said:

The underwriting implications are really fascinating.  One question in my mind is what sorts of information should be available for this.  Should the AI be able to use any publicly available information?  Facial recognition software for photos?

At the moment, it would. We're scoping out ways to use satellite imagery from Google Earth or data from various websites that would help inform the UW. The real benefit, at this moment, is the ability to consume a large amount of unstructured data and pull out the data that align with the UW guidelines set forth by the company. The AI would then pass the relevant information on to the UW, who would be responsible for the ultimate decision. That's early stages, with a lot more potential, but this is where we're starting and how we're introducing it. Just a 3-5% increase in a UW choosing the correct class for a risk would result in savings of millions of dollars.
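The last claim is easy to sanity-check with back-of-envelope arithmetic. The premium volume and margin figures below are placeholders I've made up, not Mexal's numbers:

```python
# Back-of-envelope check on "a 3-5% improvement saves millions".
# All inputs are invented placeholders.

premium_volume = 2_000_000_000  # annual premium across the book, $
margin_lost_per_misclassified_dollar = 0.025  # assumed
improvement = 0.04  # 3-5% more risks put in the correct class

savings = premium_volume * margin_lost_per_misclassified_dollar * improvement
print(f"${savings:,.0f}")  # -> $2,000,000
```

On a large enough book, even a small accuracy gain plausibly compounds into seven figures.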


5 hours ago, Mlle. Zabzie said:

First:  self-driving cars.

1.  Should self-driving cars be permitted to exceed the legal speed limit?  To tailgate?

2.  How should self-driving cars prioritize the lives of pedestrians/other cars vis-à-vis their passengers?

3.  What is the role of government regulation in setting these standards?  Should self-driving cars be required to use a single interface to "talk" to one another?

My view is that self-driving cars should obey laws as perfectly as they are able to, and should only disobey laws when obeying them would be harmful (example: speeding out of the way of an impending accident). Tailgating is not a good practice, and they should normally obey the 2-second rule.
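For what it's worth, the 2-second rule is easy to express in code; a quick sketch, where the multipliers for worse road conditions are assumptions rather than any official standard:

```python
# Sketch of a 2-second-rule following distance, scaled up in poor
# conditions. The condition multipliers are assumptions, not standards.

def min_following_gap_m(speed_mps, condition="dry"):
    """Minimum gap in metres: 2 seconds of travel at current speed."""
    multiplier = {"dry": 1.0, "wet": 2.0, "ice": 5.0}[condition]
    return 2.0 * speed_mps * multiplier

print(min_following_gap_m(30.0))         # 60 m at ~108 km/h, dry road
print(min_following_gap_m(30.0, "wet"))  # 120 m in the wet
```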

Government should set very stringent standards on autonomous driving. Among other things, they should not be allowed to 'talk' with each other or have any external bidirectional interfaces at all. 

Pedestrian vs. driver I think would have to go with liability laws in general, which means that they have an obligation to protect the drivers first and foremost. But this is one I can see going both ways.

 


5 hours ago, Mlle. Zabzie said:

Second:  AI for compliance.  It was reported today in the tax press (which I cannot link because it is subscription-only) that the IRS is seriously looking at AI solutions to monitor and police compliance.

1.  What, if any, sorts of limitations should be put on the AI's use of data?  Should it be permitted to use data in the public realm (e.g., public Facebook posts)?  Should it be permitted to access data from banking/similar sources (through enhanced 1099-type reporting or by simply looking at their systems)?  Or should it be limited to data submitted to the IRS?

2.  Do we think that AI will be better or worse than humans at identifying evasion?  (Note that audit flags do currently exist.)

I for one welcome the compliance overlords. Any data that a typical auditor would or could use should be available, including anything in the public domain. I think AI will be better than a human at catching certain kinds of evasion, and far worse at others. In particular, it'll likely be better at catching large-scale evasion on small issues, being able to see a pattern in a lot of very small bits of data, but it probably won't be able to catch deeper evasion that works through multiple loopholes.

 


5 hours ago, Mlle. Zabzie said:

Third:  AI for identifying criminal activity - let's use terrorism to be controversial.

1.  Is it possible to program an unbiased AI to predict terrorist-type activity?

2.  Do we think it would be better or worse at identifying such activity?

3.  Are false positives a concern with respect to this identification?

4.  What restrictions, if any, should be put on the use of the data generated by the AI?

 

It's possible for an AI to predict all sorts of things. It depends a lot on what kind of terroristic activity you're talking about, but we already have some very deep neural nets that take a boatload of data to predict potential terrorists and cells, and those have been fairly good at spotting some people before they did anything. Not great, but decent. As I said, it's better in some ways, worse in others; it can identify broad patterns more easily, but it tends not to be able to identify new threats (say, White Nationalist terror subjects) until it has more data.

False positives shouldn't be a problem so long as the data produced isn't actionable. In other words, the AI doesn't put you on a terror watchlist. It doesn't decline your credit cards. It doesn't restrict your ability to buy weapons. It simply alerts others to the possibility, and it's up to a human to investigate.
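One reason false positives loom so large here is the base-rate problem: when the thing being predicted is very rare, even a very accurate classifier flags mostly innocent people. A quick Bayes' rule illustration, with all the rates invented for the sake of the example:

```python
# Base-rate illustration: P(actual threat | flagged by the AI).
# The base rate, sensitivity, and false-positive rate are invented.

def p_threat_given_flag(base_rate, sensitivity, false_positive_rate):
    """Bayes' rule for a binary classifier."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 100,000 people is a genuine threat, the AI catches 99%
# of them, and it wrongly flags only 0.1% of everyone else:
p = p_threat_given_flag(1e-5, 0.99, 0.001)
print(f"{p:.2%}")  # under 1% of flagged people are actual threats
```

So even with those generous numbers, over 99% of the people flagged would be innocent - which is exactly why the output shouldn't be directly actionable.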


Self-driving cars, I think, will become more popular if they help drive down road deaths. If cars can judge how close they are to other vehicles and their closing speeds, and react much more quickly than people can, you could see an absolutely massive drop in the death rate, which would be very hard to argue against.

Quote

I'm terrified of AI becoming prevalent

I scrolled past this fast and read it as "I'm terrified of AI becoming president", to which the only response is that at this point I would more than welcome President Skynet into the White House.


2 hours ago, Mexal said:

At the moment, it would. We're scoping out ways to use satellite imagery from Google Earth or data from various websites that would help inform the UW. The real benefit, at this moment, is the ability to consume a large amount of unstructured data and pull out the data that align with the UW guidelines set forth by the company. The AI would then pass the relevant information on to the UW, who would be responsible for the ultimate decision. That's early stages, with a lot more potential, but this is where we're starting and how we're introducing it. Just a 3-5% increase in a UW choosing the correct class for a risk would result in savings of millions of dollars.

Interesting.  How does that interface with life insurance?  E.g., should there be limits on medical information that the AI can access that isn't otherwise disclosed?  What about looking for word patterns or picture types in posts?

46 minutes ago, Kalbear said:

My view is that self-driving cars should obey laws as perfectly as they are able to, and should only disobey laws when obeying them would be harmful (example: speeding out of the way of an impending accident). Tailgating is not a good practice, and they should normally obey the 2-second rule.

Government should set very stringent standards on autonomous driving. Among other things, they should not be allowed to 'talk' with each other or have any external bidirectional interfaces at all. 

Pedestrian vs. driver I think would have to go with liability laws in general, which means that they have an obligation to protect the drivers first and foremost. But this is one I can see going both ways.

 

Why shouldn't there be any external bidirectional interfaces?  Curious.

42 minutes ago, Kalbear said:

I for one welcome the compliance overlords. Any data that a typical auditor would or could use should be available, including anything in the public domain. I think AI will be better than a human at catching certain kinds of evasion, and far worse at others. In particular, it'll likely be better at catching large-scale evasion on small issues, being able to see a pattern in a lot of very small bits of data, but it probably won't be able to catch deeper evasion that works through multiple loopholes.

 

This is probably right - but the civil liberties implications of this sort of thing are fairly astounding to me.  There could potentially be a tendency toward a kind of broken-windows policing.

38 minutes ago, Kalbear said:

It's possible for an AI to predict all sorts of things. It depends a lot on what kind of terroristic activity you're talking about, but we already have some very deep neural nets that take a boatload of data to predict potential terrorists and cells, and those have been fairly good at spotting some people before they did anything. Not great, but decent. As I said, it's better in some ways, worse in others; it can identify broad patterns more easily, but it tends not to be able to identify new threats (say, White Nationalist terror subjects) until it has more data.

False positives shouldn't be a problem so long as the data produced isn't actionable. In other words, the AI doesn't put you on a terror watchlist. It doesn't decline your credit cards. It doesn't restrict your ability to buy weapons. It simply alerts others to the possibility, and it's up to a human to investigate.

I'm more worried about it taking the next step.  It could very well be used, in the near future as people get more comfortable with it, to do just those things.  Just because the data exist doesn't mean the interpretations are true, but most people miss this.


Archived

This topic is now archived and is closed to further replies.
