A question to capitalists; how much should we fear AI?


Varysblackfyre321

Recommended Posts

 

6 hours ago, Varysblackfyre321 said:

You know, the prospect of self-driving automobiles poses an interesting dilemma that's often overlooked and that people have been grappling with for quite a while: in a situation where lives are at stake, should the AI prioritize the safety of the passenger, or seek the option that will do the least damage to everyone around the vehicle?

I'll have to disagree on this one. It's a super hypothetical scenario which has little to no basis in reality. We're talking about a situation where something sudden happens, like a kid running into the road, and the car CAN steer, just not well enough to avoid hitting the kid, unless it hits a grandma instead or topples the car over, risking the lives of its passengers... I place this dilemma in the same category as the old railroad cart question - useful as an ethical gedankenexperiment, but not a scenario anyone is likely to ever face.

There are tons of ethical issues around self-driving cars that are real and will happen. Can you trust your life to a machine? Is it worse to be killed by a faulty computer than by a human who made a mistake? Can you accept an accident rate only slightly lower than today's, or does it have to be a factor of ten or more better than human drivers before you allow computer drivers? At which point, if ever, do we forbid human drivers altogether? And so on, and so forth. I feel that the question of "who should the car kill" has gotten far too much attention and distracts from the real issues.

On 1/21/2019 at 9:27 AM, larrytheimp said:

I like the self-checkout at the grocery store. I know they will replace cashiers eventually (most stores around here still have a cashier monitoring four self-checkout stations), but I help out the labor market by leaving carts all over the parking lot so they need to hire someone to collect them.

plu 4011 baby

On 1/24/2019 at 11:45 PM, Altherion said:

I suspect DARPA is going to want to have a word with DeepMind.

lol pal, i’ve got some bad news about how the dod is weaponizing machine learning

https://interc.pt/2S6hO0l

this is some wild and scary shit, especially once you consider the unstated inference that companies like google are likely using far more than just these "crowd workers" to refine the ability of these algorithms to accurately recognize and identify objects in the real world (think of all those "select the images which contain..." captchas you've gone through when signing up for a new site or online service)

https://anatomyof.ai/

on how the insidious nature of these 'products' allows big tech to further commoditize every aspect of our lives through the most seemingly mundane interactions

https://www.currentaffairs.org/2018/11/what-you-have-to-fear-from-artificial-intelligence

and a great read by a comrade on how the potential for abuse goes far beyond the relatively passive parasitism of simply collecting and exploiting our behaviors, and likely will be (is being?) used to actively defend the power structures of the capitalist class

Archived

This topic is now archived and is closed to further replies.
