
Artificial Intelligence


fionwe1987


Thought it would be nice to have a thread to discuss it. For me personally, AI, its issues, and fears of what its introduction can do to society dominate my thoughts quite a lot. I'm not quite a doomer, but I can see so many pathways to doom, of one kind or another.

So what does everyone here think? Do you use it already? Find it overwhelming and unreliable? Do you think these systems are more hype than reality? Do you see this leading to a good future, or the dystopian nightmares science fiction has so richly imagined?

 

Edited by fionwe1987

I find it mostly underwhelming and unreliable. Tasks that require a huge data search are the most promising use, but independent thinking and analysis-type work is more likely to come out as a monkeys-at-a-keyboard result.


 

I think some of the dystopian doomerism is overblown.  To me, the scariest aspects of it are:

1.  The constant increase in surveillance, mostly through banal data collection, will lead to an overall erosion of civil liberties around the world.

2.  It's likely to result in a massive transfer of wealth from the working classes to the already very rich.  If we could instead use it to reduce the number of hours we all work and share the wealth, great.

I don't have a whole lot of confidence in humanity's ability to use new technologies responsibly, and I don't think AI / advanced automation is any different.

ETA: I used ChatGPT to figure out the format for a couple of legal filings earlier this year. Not for content, just the general outline, and I had a lawyer verify that everything looked OK before I submitted anything. This was great, as I could not find samples of these documents to see what the fuck they were supposed to be.

Edited by Larry of the Lawn

It's not just that it is more hype than reality; it is also ridiculously pervasive. Even common household electronics with basic functions, from toasters to washing machines, are now advertised as coming with AI. I wouldn't be surprised if a basic timer that puts an appliance into standby mode counts as well.

As far as the actual futuristic hype goes, I'll believe it when I see it. It's simpler if you can narrow the scope sufficiently, but every additional variable compounds the difficulty of making it achievable.


From a personal standpoint, it has been very useful.

As for its long term impact, I think it will be extensive, but how extensive and in what ways those impacts will manifest is impossible to accurately predict. 

I do think the effect it will have on social relations is a given. We are nearly at the point where you can converse with an AI as a social companion, like in the movie Her. Already social networks are disappearing as people choose more and more to inhabit their private cocoon. Certainly a companion who effectively mimics an intelligent personality tailored to one's taste will lure many people, and further ensconce them in their antisocial cocoon.

As mentioned above, wealth reallocation will become more efficient.

AI is already getting to the point where it has passed the uncanny valley, and can render fictional humans in a way that is very believable. Anomalies such as the weird rendering of hands are being reduced. You can now create short AI movies just by typing what you desire to see. It's not unreasonable to expect this technology to improve. And this will have an impact on the arts in some form. Again, it's impossible to speculate on the extent of this impact.

Another one already mentioned: handling large amounts of information. I think this will have a significant impact. We'll have to see how this is utilized by China: I think China will be a forerunner that others will follow. But I'm pretty pessimistic.

A lot depends on future AI architecture development. Will there only be incremental improvements, or will there be major innovations? AI is itself a tool for improving AI, so this is an important question for this kind of evaluation. There is a certain point in that feedback loop where you start to see exponential improvement, but your initial conditions need to pass a threshold for it to really explode.
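A toy way to see that threshold (the numbers and the update rule here are entirely made up for illustration, not a model of actual AI progress): the same feedback rule either crawls along or compounds on itself, depending purely on where you start.

```python
# Toy model of a capability feedback loop (illustrative only, not a forecast).
# Each generation's capability grows by a factor that itself depends on how
# capable the current system already is at improving its successor.

def run_feedback_loop(initial_capability, leverage, generations=20):
    """Return the capability trajectory for a given starting point.

    `leverage` controls how strongly current capability feeds back into the
    improvement rate. Below a threshold the series barely moves; above it,
    growth compounds on itself.
    """
    capability = initial_capability
    trajectory = [capability]
    for _ in range(generations):
        improvement_factor = 1.0 + leverage * capability
        capability = capability * improvement_factor
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    # Same feedback rule, two different initial conditions.
    low_start = run_feedback_loop(initial_capability=0.01, leverage=0.5)
    high_start = run_feedback_loop(initial_capability=0.5, leverage=0.5)
    print("low start :", [round(x, 3) for x in low_start[:8]])   # creeps along
    print("high start:", [round(x, 3) for x in high_start[:8]])  # takes off
```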


3 hours ago, fionwe1987 said:

Thought it would be nice to have a thread to discuss it. For me personally, AI, its issues, and fears of what its introduction can do to society dominate my thoughts quite a lot. I'm not quite a doomer, but I can see so many pathways to doom, of one kind or another.

So what does everyone here think? Do you use it already? Find it overwhelming and unreliable? Do you think these systems are more hype than reality? Do you see this leading to a good future, or the dystopian nightmares science fiction has so richly imagined?

 

1. It isn’t really “AI”.

2. It is a souped-up version of the Google search algorithm with better formatting for the results.

 


22 minutes ago, Ser Scot A Ellison said:

1. It isn’t really “AI”.

 

What is AI?

22 minutes ago, Ser Scot A Ellison said:

2. It is a souped-up version of the Google search algorithm with better formatting for the results.

 

This is incorrect.


My main concern is the extent to which it's going to be integrated into things like filtering job applications, loan and finance approvals, parole decisions, police resource distribution etc. Taking a whole bunch of already deeply flawed, very important processes and hiding them inside an AI black box - trained on flawed and biased data sets - and then passing them off as ostensibly objective and outside human oversight.

And that's assuming the models work "as intended." A major problem with machine learning systems is that if you don't know how or why the model gives you a certain output, you don't know to what extent it's being deliberately interfered with.
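To make that concrete, here's a deliberately crude sketch (the data, the groups, and the "model" are all invented for illustration): fit a scoring rule to past decisions that were themselves biased, and the supposedly neutral rule quietly learns the same bias.

```python
# Minimal sketch: a "model" trained on biased historical decisions
# reproduces the bias while looking like a neutral scoring rule.
# Data, group labels, and the training rule are entirely made up.

import random

random.seed(0)

def make_history(n=1000):
    """Simulate past loan decisions where group B was approved less often
    than group A at the same underlying qualification level."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualification = random.random()            # true merit, 0..1
        bias_penalty = 0.2 if group == "B" else 0.0
        approved = (qualification - bias_penalty) > 0.5
        rows.append((group, qualification, approved))
    return rows

def train_threshold(history):
    """'Train' per-group approval thresholds from historical outcomes.
    Stands in for any opaque model fitted to past decisions."""
    thresholds = {}
    for group in ("A", "B"):
        approved_quals = [q for g, q, a in history if g == group and a]
        # the lowest qualification ever approved becomes the learned bar
        thresholds[group] = min(approved_quals)
    return thresholds

history = make_history()
print(train_threshold(history))
# Group B ends up needing a visibly higher score than group A, even though
# the "model" never saw the group labels described as bias.
```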


43 minutes ago, IFR said:

Already social networks are disappearing as people choose more and more to inhabit their private cocoon.

Is it true that online social networks are disappearing? If it is, I'd hypothesize it has less to do with people choosing to inhabit their private cocoons and more to do with every social website becoming more and more shit over time as their owners run them into the ground, desperately trying to squeeze ever more marginal revenue streams out of their frustrated users.


24 minutes ago, Liffguard said:

Is it true that online social networks are disappearing? If it is, I'd hypothesize it has less to do with people choosing to inhabit their private cocoons and more to do with every social website becoming more and more shit over time as their owners run them into the ground, desperately trying to squeeze ever more marginal revenue streams out of their frustrated users.

I don't have any research to cite, so this assertion is my personal view and experience, and my understanding of historical social networks as opposed to that of today.

24 minutes ago, Ser Scot A Ellison said:

AI is a sentient machine that is conscious and can make decisions for itself.

Let me ask you this Matrix question: in an idealized scenario, if you can emulate something so perfectly that your senses cannot detect whether it is emulated or not, is there a substantial distinction between emulated and not emulated? If so, why?

Note: this is a hypothetical. I don't care that AI hasn't reached an idealized state or if it ever will.


5 minutes ago, IFR said:

Let me ask you this Matrix question: in an idealized scenario, if you can emulate something so perfectly that your senses cannot detect whether it is emulated or not, is there a substantial distinction between emulated and not emulated? If so, why?

That’s an inversion of the “Chinese Room” thought experiment.  AI, as it currently exists, has no self-awareness and cannot act on its own without our direction.  It is not sentient.

The Chinese Room:

https://plato.stanford.edu/entries/chinese-room/

 


40 minutes ago, Ser Scot A Ellison said:

AI is a sentient machine that is conscious and can make decisions for itself.

Some would argue humans can't make decisions for themselves and 'free will' is an illusion, so all these definitions are at best incredibly fuzzy. I think the point about not being able to distinguish sentience from the approximation of sentience is a particularly profound question.


10 minutes ago, IheartIheartTesla said:

Some would argue humans can't make decisions for themselves and 'free will' is an illusion, so all these definitions are at best incredibly fuzzy. I think the point about not being able to distinguish sentience from the approximation of sentience is a particularly profound question.

What we have doesn’t approximate sentience.  Until “AI” can say it doesn’t want to work today… in my earnest opinion it isn’t sentient.


30 minutes ago, Ser Scot A Ellison said:

That’s an inversion of the “Chinese Room” thought experiment.  AI, as it currently exists, has no self-awareness and cannot act on its own without our direction.  It is not sentient.

The Chinese Room:

https://plato.stanford.edu/entries/chinese-room/

 

Maybe let's rephrase the question.  If you can't tell whether or not something is a Chinese Room or a truly "sentient" entity, is there a difference?

Feel free to swap in "conscious" for "sentient".  Whatever you want to call it.  

1 hour ago, Liffguard said:

 Taking a whole bunch of already deeply flawed, very important processes and hiding them inside an AI black box - trained on flawed and biased data sets - and then passing them off as ostensibly objective and outside human oversight.

 

The opacity of everything you mentioned is definitely what creeps me out the most about our (in)ability to use AI responsibly.

 

 

ETA: also, most of you probably already have, but if you haven't, anyone who likes thinking about this stuff should read Blindsight.

Edited by Larry of the Lawn


While this iteration of AI is most definitely not sentient, it is artificial, and possesses some intelligence, so I'm fine with it being called AI. Whether we ever will, or want to, get to artificial sentience is a whole other question. Maybe we'll stumble upon it by accident... but I don't think current architectures are anywhere close to that.

As others have pointed out, the biggest risks that seem real are half-baked AI implementations in opaque systems like job matching, crime fighting, etc. And also the fact that we've now made it possible for junk, polarizing content to be produced at scale at exponentially cheaper costs.

I think the most useful take on AI I've seen is from Ted Chiang, who said our greatest fears about AI are actually fears about capitalism. And what this iteration of AI does is solve optimization problems in a way eerily reflective of the short-term, profit-optimizing way capitalism works. It's that combo that haunts my dystopian nightmares, not AI taking over the world and nuking us all.

All that said, the use of these systems in medicine is something I've been working on. Again, harnessed to blind profit seeking, this can get dystopian, but precision medicine at scale, allowing better treatment for more people, is genuinely something AI can enable... in the right hands.

Edited by fionwe1987

1 hour ago, Ser Scot A Ellison said:

2. It is a souped-up version of the Google search algorithm with better formatting for the results.

This isn't true. It really isn't "searching" for anything. While AI can and sometimes does reproduce its training text, it isn't working by indexing and retrieving that text.

Fundamentally, this iteration of AI is great at observing patterns, then recreating similar (but not identical) patterns when prompted. It's just good at predicting the next word in a growing chain of words, based on the chains of words it was trained on.

That's definitely nothing like the Google Search algorithm, which is actually more precise and reliable, and makes up shit a lot less.
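If it helps to see the "predict the next word" loop concretely, here's a toy version (just a bigram word-counter, nothing like a real transformer, but the generate-one-word-at-a-time mechanism is the same shape):

```python
# Toy next-word predictor: counts which word follows which in some training
# text, then generates new text one word at a time by sampling from those
# counts. Real LLMs use neural nets over long contexts, but the
# "predict the next word, append it, repeat" loop is the same idea.

import random
from collections import Counter, defaultdict

training_text = (
    "the model predicts the next word and the next word depends on the "
    "words it was trained on so the model recreates familiar patterns"
)

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start_word, length=12, seed=42):
    random.seed(seed)
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # dead end: this word never appeared mid-text in training
        next_words = list(candidates.keys())
        weights = list(candidates.values())
        output.append(random.choices(next_words, weights=weights)[0])
    return " ".join(output)

print(generate("the"))
# Produces plausible-looking but not memorized word chains, e.g. something
# like "the model predicts the next word depends on the words it was trained"
```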

