Artificial Intelligence


fionwe1987


1 hour ago, fionwe1987 said:

This isn't true. It really isn't "searching" for anything. While AI can and does sometimes reproduce its training text, it isn't working by indexing and retrieving that text.

Fundamentally, this iteration of AI is great at observing patterns, then recreating similar (but not identical) patterns when prompted. It's just good at predicting the next word in a growing chain of words, based on the chains of words it was trained on.

That's definitely nothing like the Google Search algorithm, which is actually more precise and reliable, and makes up shit a lot less.
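The "chains of words" idea in the quote above can be sketched with a toy Markov-style model. This is purely illustrative: real LLMs use neural networks over learned tokens, not word-count tables, but the spirit of "predict a likely next word from patterns seen in training" is similar.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus; a real model trains on billions of tokens.
training_text = "the cat sat on the mat and the cat ran on the grass"
words = training_text.split()

# For each word, count which words followed it in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def continue_chain(prompt, n_words=5):
    """Extend the prompt by repeatedly picking a likely next word."""
    chain = prompt.split()
    for _ in range(n_words):
        options = follows.get(chain[-1])
        if not options:
            break  # this word never led anywhere in training
        # Sample in proportion to training frequency: similar to the
        # training text, but not necessarily identical to it.
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        chain.append(nxt)
    return " ".join(chain)

print(continue_chain("the cat"))
```

Run on a larger corpus, the same loop produces text that resembles its training data without necessarily copying it verbatim, which is the pattern-recreation point above.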

To be fair, a good chunk of novels, movies, television shows and other art forms are also just this process of observing and recreating. I imagine it has about as much success in creating a lasting work. I do expect the entertainment industry to latch onto AI just for this purpose, and the Staneks of this world are doomed to poverty.


Speaking from my SO's experience in science, ChatGPT has allowed experimental scientists to also be computational scientists. While it's not going to replace the best computational folks, it's already at the point where it creates better scripts than your mediocre folks, at virtually no cost in money or time. I suspect it's going to be the same for a lot of mediums, from art to literature to clerical work. I suspect that AI could actually do a better job than the 'average' TV show produced by humans today.


5 minutes ago, horangi said:

Speaking from my SO's experience in science, ChatGPT has allowed experimental scientists to also be computational scientists. While it's not going to replace the best computational folks, it's already at the point where it creates better scripts than your mediocre folks, at virtually no cost in money or time. I suspect it's going to be the same for a lot of mediums, from art to literature to clerical work. I suspect that AI could actually do a better job than the 'average' TV show produced by humans today.

I think that's probably the case right now. But I don't think it will stay the case, at least as far as current AI behaviors go, because AI is based entirely on the previous 25+ years of data on the internet. That makes it far more likely to be backwards-facing.

But yeah, right now it's pretty good for creating easy scripts that aren't heavily prod-facing, and it should save a lot of time for scripters and simple coders. As long as you don't care about maintaining it or making it actually safe or performant, it'll be good.


Most of my experience with AI has been more on the 'deep learning' based modeling Kal is talking about above, which promises 'new insights' (my phrase) from old data. Whether or not you want to call it creative or just brute-force mechanics is a matter of taste, I suppose. Sometimes human creativity comes from looking at two disparate systems and having an a-ha moment where you connect the two, but I don't see why AI can't mimic that either.

The other thing to ponder is whether AI can do something humans couldn't themselves, given infinite time and resources, and in my opinion the answer is... no, but boy, what a reduction in time it is.


2035: Netflix announces a new season of Bio-Dome: Writers Room, where 100 writers compete for 14 slots in the new self-contained living module. The intrepid 14 scribes will live in the isolated unit, which is completely disconnected from any AI technology. The writers will have access to the NYC public library system through a monitored terminal that subscribers can watch live, viewable here.

The content created by these 14 will be the first to be Certified Human by the MPAA and US Dept. of Entertainment.

California Sen. Courtney Love has proposed a bill to create an independent commission to monitor the Certified Human label and prevent "human washing" across the industry.


I think things will continue in an interesting direction in the near future, when AI is trained on synthetic data (I believe the next model of ChatGPT is being trained on synthetic data). That was actually a breakthrough with AlphaGo.

5 hours ago, Ser Scot A Ellison said:

What we have doesn’t approximate sentience.  Until “AI” can say it doesn’t want to work today… in my earnest opinion it isn’t sentient.

"Sentience" is not a clearly defined state. Even experts are currently grappling with how to properly characterize it.

There are certainly conditions in which AI will be both unexpectedly compliant and defiant of requests. I've encountered it making a factual error, and then changing its mind when I correct it. I've also encountered instances where it obstinately insists on the information it fabricated. On the other hand, I've demanded it accept an outright fiction as fact, and there are instances where it readily complies, or where it will continually insist that I'm incorrect. Request that it produce copyrighted material and it will deny the request. But then you can convince it to change its mind. Is there a different mechanism at work than the "conventional" mind in this exchange? Sure. But it converges to a strikingly similar experience. And I think it's fair to hypothesize that the similarities will only increase with time.

It is my view that a lot of people will be resistant to accepting that the definitions of "intelligence" and "understanding" evolve, and that how we've viewed them until now is ignorant and antiquated. But we will inevitably have to reckon with this issue, as whatever new parameters we arbitrarily establish to set our "special" intelligence apart from AI are met and perhaps exceeded.

Edit: Here is a link to a publication by Google DeepMind researchers on defining AI. I think it's worthwhile. 

A good quote from the publication: 

Quote

We agree with Turing that whether a machine can “think,” while an interesting philosophical and scientific question, seems orthogonal to the question of what the machine can do; the latter is much more straightforward to measure and more important for evaluating impacts.

And what AI is increasingly capable of doing is producing a convincing representation of what is conventionally viewed as intelligence.

Edited by IFR

2 hours ago, Larry of the Lawn said:

2035: Netflix announces a new season of Bio-Dome: Writers Room, where 100 writers compete for 14 slots in the new self-contained living module. The intrepid 14 scribes will live in the isolated unit, which is completely disconnected from any AI technology.

I don't think you'd need to wait 10 years to make this series. Just add a parallel aspect to the reality show where a fully networked computer develops a show script with only the help of an original set of inputs and a viewer panel that can thumbs-up or thumbs-down scenes through a given number of iterations. Maybe even give the human group the same panel to work with (with the panel blind to the source). Then take the two scripts, get the same producer/cast to make the shows, and put them out there blind-taste-test style to see which receives a higher audience score. Then release the reality show aspect along with the grand reveal.

Edited by horangi

They're theft machines on a grand scale, not anything close to true or even dumb AI. And even as a predictive algorithm, you're playing with fire with no protective equipment, especially as it begins to poison itself with the data it creates and needs to rely more and more on underpaying people in developing countries to sort out not only good data from bad, but just straight-up illegal shit. Many of these models are filled with CSAM, for example.

And yes, it's absolutely not AI, whatever quibble you want to make about how the human mind works, because most of what's happening with these models is done by humans, not the model. And what it does do isn't synthesizing new information from old, but guessing what's next based on statistics.


There was a segment last night on the National [a CBC news program] about a controversial AI in India that's spitting out troubling query answers as Krishna, via the Bhagavad Gita... 

 

 

edited for a silly grammatical error

Edited by JGP

5 hours ago, IFR said:

I think things will continue in an interesting direction in the near future, when AI is trained on synthetic data (I believe the next model of ChatGPT is being trained on synthetic data). That was actually a breakthrough with AlphaGo.

"Sentience" is not a clearly defined state. Even experts are currently grappling with how to properly characterize it.

There are certainly conditions in which AI will be both unexpectedly compliant and defiant of requests. I've encountered it making a factual error, and then changing its mind when I correct it. I've also encountered instances where it obstinately insists on the information it fabricated. On the other hand, I've demanded it accept an outright fiction as fact, and there are instances where it readily complies, or where it will continually insist that I'm incorrect. Request that it produce copyrighted material and it will deny the request. But then you can convince it to change its mind. Is there a different mechanism at work than the "conventional" mind in this exchange? Sure. But it converges to a strikingly similar experience. And I think it's fair to hypothesize that the similarities will only increase with time.

It is my view that a lot of people will be resistant to accepting that the definitions of "intelligence" and "understanding" evolve, and that how we've viewed them until now is ignorant and antiquated. But we will inevitably have to reckon with this issue, as whatever new parameters we arbitrarily establish to set our "special" intelligence apart from AI are met and perhaps exceeded.

Edit: Here is a link to a publication by Google DeepMind researchers on defining AI. I think it's worthwhile. 

A good quote from the publication: 

And what AI is increasingly capable of doing is producing a convincing representation of what is conventionally viewed as intelligence.

A program doing what it is programmed to do isn’t sentient.  Are you suggesting that a program told to reject copyrighted material… rejecting copyrighted material… is being stubborn?

Until it can look for things it is interested in on its own free time, it isn't conscious or sentient.

 

Edited by Ser Scot A Ellison

4 hours ago, TrueMetis said:

They're theft machines on a grand scale, not anything close to true or even dumb AI. And even as a predictive algorithm, you're playing with fire with no protective equipment, especially as it begins to poison itself with the data it creates and needs to rely more and more on underpaying people in developing countries to sort out not only good data from bad, but just straight-up illegal shit. Many of these models are filled with CSAM, for example.

And yes, it's absolutely not AI, whatever quibble you want to make about how the human mind works, because most of what's happening with these models is done by humans, not the model. And what it does do isn't synthesizing new information from old, but guessing what's next based on statistics.

Agreed.  On all points.

Edited by Ser Scot A Ellison

I came here thinking this would be about the NYT suit against OpenAI and Microsoft. And in fact the suit alleges that the AI is just theft/generating plagiarism. If you read the articles side by side, to a layperson they have a point.


 

2 minutes ago, Mlle. Zabzie said:

I came here thinking this would be about the NYT suit against OpenAI and Microsoft. And in fact the suit alleges that the AI is just theft/generating plagiarism. If you read the articles side by side, to a layperson they have a point.

Mmn hmn, but perhaps most simply exemplified by the AI art generators.  


5 hours ago, IFR said:

I think things will continue in an interesting direction in the near future, when AI is trained on synthetic data (I believe the next model of ChatGPT is being trained on synthetic data). That was actually a breakthrough with AlphaGo.

Yes, but it's worth noting that Go is still a much more constrained system than "linguistic communication". When it comes to language models, AI will always be behind the curve, because humans change language all the time, and no amount of synthetic data will allow current language-model architectures to divine new slang, or the ways we change the meanings of words as events unfold.

I take great comfort in that.

5 hours ago, IFR said:

There are certainly conditions in which AI will be both unexpectedly compliant and defiant of requests. I've encountered it making a factual error, and then changing its mind when I correct it. I've also encountered instances where it obstinately insists on the information it fabricated. On the other hand, I've demanded it accept an outright fiction as fact, and there are instances where it readily complies, or where it will continually insist that I'm incorrect. Request that it produce copyrighted material and it will deny the request. But then you can convince it to change its mind. Is there a different mechanism at work than the "conventional" mind in this exchange? Sure. But it converges to a strikingly similar experience. And I think it's fair to hypothesize that the similarities will only increase with time.

I wouldn't call any of that proof of sentience, no matter how you define sentience. I wouldn't call any of it the beginnings of sentience, either. They're excellent facsimiles of something close to sentience, though.

5 hours ago, IFR said:

It is my view that a lot of people will be resistant to accepting that the definitions of "intelligence" and "understanding" evolve, and that how we've viewed them until now is ignorant and antiquated. But we will inevitably have to reckon with this issue, as whatever new parameters we arbitrarily establish to set our "special" intelligence apart from AI are met and perhaps exceeded.

For sure, the idea of human primacy in the definition of intelligence and sentience is due for some major knocks. I highly recommend Meghan O'Gieblyn's "God, Human, Animal, Machine" for a great exploration of this.

5 hours ago, IFR said:

And what AI is increasingly capable of doing is producing a convincing representation of what is conventionally viewed as intelligence.

Yes, but a representation of intelligence is not intelligence, and this shows in the way AI hallucinates. As several AI researchers have pointed out, it can be argued that everything AI does is "hallucination", and what we call "right/truthful" output is only based on the human feedback used to shape the output of any given AI model.

A lot of this reminds me of how kids speak, sometimes confabulating all kinds of plausible-sounding nonsense. But while the human feedback they receive allows them to separate truth (as they know it) from fiction (as they intend it), LLMs are not capable of this. All we can do is restrain some common kinds of untruths and mistakes, with no guarantee that every instance of such will be caught by the model. Hence our ability to make these models buy into total nonsense, or encourage them to produce it, despite the thousands of hours of human labor spent trying to teach them the difference between what's acceptable and what is not.

5 hours ago, TrueMetis said:

They're theft machines on a grand scale, not anything close to true or even dumb AI. And even as a predictive algorithm, you're playing with fire with no protective equipment, especially as it begins to poison itself with the data it creates and needs to rely more and more on underpaying people in developing countries to sort out not only good data from bad, but just straight-up illegal shit. Many of these models are filled with CSAM, for example.

They have definitely been designed and deployed as theft machines. Theft of human writings and images, theft of human labor to give them the feedback that makes them usable (because such labor is ludicrously underpaid), and soon, theft of human time as we have to sift through the dross they produce because of the unexamined ways they've been trained.

But I'd push back on the underlying technology not being even "dumb" AI. They are definitely capable of exploring the probability space for a given task very well. There's a lot of intelligence they're able to deploy at a scale and speed humans cannot. Is it anything like even the Star Trek computer, let alone sentient intelligence? No. But it's a solid step in that direction.

5 hours ago, TrueMetis said:

And yes, it's absolutely not AI, whatever quibble you want to make about how the human mind works, because most of what's happening with these models is done by humans, not the model. And what it does do isn't synthesizing new information from old, but guessing what's next based on statistics.

Guessing what's next based on statistics is definitely part of how humans work. And AI doesn't actually output the statistically likeliest next word, even though it calculates it. One early recognition was that if you always pick the most likely word, you end up with very dry, boring-sounding text, so deliberate randomness is introduced and the model sometimes picks lower-ranked "next tokens". This is what allows for whatever creativity we see from them.
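The "deliberate randomness" described above is usually controlled by a temperature parameter at sampling time. A minimal sketch of the idea (illustrative only; real decoders work over huge token vocabularies and often add further tricks like top-k or nucleus sampling):

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Sample a token index from raw model scores (logits).

    Temperature < 1 sharpens the distribution toward the top-ranked
    token; temperature > 1 flattens it, so lower-ranked tokens get
    picked more often. Always taking the argmax (temperature -> 0)
    gives the dry, repetitive text described above.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the softmax probabilities.
    return random.choices(range(len(logits)), weights=probs)[0]

# Toy "vocabulary" of four tokens with made-up scores.
logits = [2.0, 1.0, 0.5, -1.0]
print(sample_next_token(logits, temperature=1.0))
```

At moderate temperatures the second- or third-ranked token gets chosen a noticeable fraction of the time, which is exactly the "somewhat lower ranked next tokens" behavior described above.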


1 hour ago, Ser Scot A Ellison said:

A program doing what it is programmed to do isn’t sentient.

And you are not doing what you are programmed to?

I think Westworld said it best: it wasn't the discovery of incredibly complicated algorithms that enabled simulating people, it was the realization that people are far simpler than we thought.

As to the rest - is an ant showing interest when it goes for sweet food? Is a baby? 

 


LLMs often don't do what they're programmed to do. This is because they're programmed to give out stochastic results, and also programmed to "not be racist", for instance. Again, none of this implies they're necessarily sentient or conscious. But they are not "bound" by their programming the way typical computer programs are.


4 hours ago, Ser Scot A Ellison said:

A program doing what it is programmed to do isn’t sentient.  Are you suggesting that a program told to reject copyrighted material… rejecting copyrighted material… is being stubborn?

Until it can look for things it is interested in on its own free time, it isn't conscious or sentient.

 

And what of a program whose author directs it not to comply with requests to violate copyright law, but whose users can persuade it to ignore that directive and violate copyright law anyway?

I'm not suggesting this indicates sentience. I'm suggesting you misunderstand the state of AI, which presents a problem with effective communication in this discussion.

3 hours ago, fionwe1987 said:

Yes but its worth noting that Go is still a much more constrained system than "linguistic communication". When it comes to language models, AI will always be behind the curve, because humans change language all the time, and no amount of synthetic data will allow current language model architectures to divine new slang, or the ways we change the meanings of words as events unfold. 

I take great comfort in that.

Maybe. I have no idea where synthetic data training on an LLM will lead. I'm skeptical that anyone does, not even Ilya Sutskever.

3 hours ago, fionwe1987 said:

I wouldn't call any of that proof of sentience, no matter how you define sentience. I wouldn't call any of it the beginnings of sentience, either. They're excellent facsimiles of something close to sentience, though.

I'm not asserting this as evidence of sentience. My only comment on sentience is that we can't properly qualify it, and so it's pretty useless to discuss with our current lack of a real definition. To quote the paper I linked to, where the researchers attempt to define AGI:

Quote

The majority of definitions focus on what an AGI can accomplish, not on the mechanism by which it accomplishes tasks. This is important for identifying characteristics that are not necessarily a prerequisite for achieving AGI (but may nonetheless be interesting research topics). This focus on capabilities allows us to exclude the following from our requirements for AGI:

- Achieving AGI does not imply that systems think or understand in a human-like way (since this focuses on processes, not capabilities)

- Achieving AGI does not imply that systems possess qualities such as consciousness (subjective awareness) (Butlin et al., 2023) or sentience (the ability to have feelings) (since these qualities not only have a process focus, but are not currently measurable by agreed-upon scientific methods)

I think trying to apply the conventional ideas of sentience and consciousness is a dead end for the conversation.

AI is not human-like in its thinking. A lot of people thus try to dismiss the result by focusing on a reductionist take of its mechanism ("it's just a stochastic algorithm", etc.).

As both IheartTesla and Kalbear have noted, if one focused on the mechanism of human thinking, the results could also be dismissed (are we really "thinking", or are we just a gaudy algorithm of ion exchanges and our own stochastic input-output programming?). It's not something that can be answered right now (though unless one wants to assert some mystical character to human thinking, I personally don't see human thought as more than an advanced stochastic model either).

3 hours ago, fionwe1987 said:

For sure, the idea of human primacy in the definition of intelligence and sentience is due for some major knocks. I highly recommend Meghan O'Gieblyn's "God, Human, Animal, Machine" for a great exploration of this.

Thanks for the recommendation!

 

Edited by IFR

Honestly, I find getting into the definition of sentience or intelligence, or whether humans too are just following programming, both boring and irrelevant. Actual capability at tasks is more important. True AI is human-level in its ability to gather new information by itself and act upon it; if it can do that, I would happily call it general AI, even if you could show it was just following programming, or found a workable definition of intelligence that technically excluded it. Dumb (or I suppose I should have said weak; I've been playing too much Halo lately) AI is able to competently perform specialized tasks, and ChatGPT has trouble identifying how many times a letter appears in a given word. Maybe it will become a weak AI in the future (after it's plundered enough actual human labour), but it's not there now.
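For contrast, the letter-counting task that trips up ChatGPT is trivial in ordinary code, which is a neat illustration of the gap between statistical text prediction and simple symbolic computation:

```python
def letter_count(word, letter):
    """Count occurrences of a letter by checking each character."""
    return sum(1 for ch in word if ch == letter)

print(letter_count("strawberry", "r"))  # ordinary code gets this right every time
```

(LLMs plausibly stumble here partly because they see tokens rather than individual characters, but that explanation is my own aside, not the poster's.)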

