
Are illustrations produced by generative AI (LLM programs)… art?



2 minutes ago, JGP said:

Even in emotionally restrained [self-possessed] displays of amusement, anger, hurt [or even evidence of a narcissistic wound], etc., emotional reactions are a manifestation of self-worth/awareness. Caveated, of course, insofar as we experience, understand, and have defined sentience.

I don't think this is accurate. It is a manifestation of learned patterns of self-worth/awareness. It can be trained, just as anything else can be.

2 minutes ago, JGP said:

For real? News to me. 

You can ask ChatGPT to behave like an aggrieved spouse or act insulted, and it'll do so convincingly. For a couple of months it famously had problems where it was both more inaccurate and more snarky than usual. Emotionally laced dialogue and communication are not at all hard to simulate given enough pattern matching, which we absolutely have in spades thanks to social media.
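
Just to make concrete how cheap this is to do, here's a minimal sketch, assuming the openai Python client; the model name and persona prompt are only illustrative:

```python
# Hypothetical sketch: steering a chat model into an "aggrieved spouse"
# persona with a system prompt. Assumes the openai package and an API key
# in OPENAI_API_KEY; the model name is illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system",
         "content": "You are an aggrieved spouse. Respond with hurt and "
                    "barely contained anger, but stay coherent."},
        {"role": "user",
         "content": "Sorry I'm late again, traffic was terrible."},
    ],
)
print(response.choices[0].message.content)
```

Nothing about the model's internal state is "hurt" here; the persona comes entirely from pattern-matched text conditioned on the system prompt.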


9 minutes ago, Kalbear said:

I don't think this is accurate. It is a manifestation of learned patterns of self-worth/awareness. It can be trained, just as anything else can be.

I wasn't suggesting otherwise, and this also harkens back to Larry's previous point.

 

9 minutes ago, Kalbear said:

You can ask ChatGPT to behave like an aggrieved spouse or act insulted, and it'll do so convincingly.

Well, yeah, because that's what I asked it to do, and for that reason it wouldn't convince me in the least.

Have any of the extant AIs expressed curiosity about the nature of their prompts? Do they talk to themselves? Would they talk to each other without us asking them to?

Edited by JGP

While I absolutely agree that terms like sentience, consciousness, and intelligence need more robust definitions, I'm confused by the discussions of ChatGPT having sentience because it uses strings of words that express inner feelings.

That is hardly the way we test for and understand human or, especially, animal sentience.

I would simply ask this: are there measurable state changes in LLMs that accompany their expressions of particular feelings? Do they display these state changes at rest (defined as an unprompted state)? Will two similarly architected LLMs trained on different training data, or on training data in different languages, show similar state changes associated with the "feelings" they express?

If not (and that is currently the case for all these questions), we don't need a precise definition of sentience to determine that today's LLMs are not sentient.

On the flip side, the answer is yes for a heck of a lot of animals that have nothing approaching a verbal expressive language we can understand, yet hardly anyone will make a serious scientific case that these animals are not sentient.
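
For what it's worth, a toy version of the first question is testable today. A rough sketch, assuming the Hugging Face transformers and torch packages, with GPT-2 as a small stand-in model; the prompts and the similarity metric are arbitrary choices:

```python
# Toy probe: do prompts expressing different "feelings" leave measurably
# different internal states? Assumes the transformers and torch packages;
# GPT-2 is just a small, convenient stand-in model.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def mean_hidden_state(text: str) -> torch.Tensor:
    """Mean of the final-layer hidden states for one prompt."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.hidden_states[-1].mean(dim=1).squeeze(0)

angry = mean_hidden_state("I am furious about what you did.")
calm = mean_hidden_state("I feel perfectly calm and content.")
neutral = mean_hidden_state("The train departs at nine each morning.")

cos = torch.nn.functional.cosine_similarity
print("angry vs calm:   ", cos(angry, calm, dim=0).item())
print("angry vs neutral:", cos(angry, neutral, dim=0).item())
```

Finding some difference between any two prompts is trivial, of course; what's missing is a consistent signature at rest, independent of the prompt, and stable across differently trained models.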

Edited by fionwe1987

3 minutes ago, fionwe1987 said:

While I absolutely agree that terms like sentience, consciousness, and intelligence need more robust definitions, I'm confused by the discussions of ChatGPT having sentience because it uses strings of words that express inner feelings.

That is hardly the way we test for and understand human or, especially, animal sentience.

I would simply ask this: are there measurable state changes in LLMs that accompany their expressions of particular feelings? Do they display these state changes at rest (defined as an unprompted state)? Will two similarly architected LLMs trained on different training data, or on training data in different languages, show similar state changes associated with the "feelings" they express?

If not (and that is currently the case for all these questions), we don't need a precise definition of sentience to determine that today's LLMs are not sentient.

On the flip side, the answer is yes for a heck of a lot of animals that have nothing approaching a verbal expressive language we can understand, yet hardly anyone will make a serious scientific case that these animals are not sentient.

You seem really dialed in on this type of stuff, fionwe. Are you in this field, or?


12 minutes ago, JGP said:

Have any of the extant AIs expressed curiosity about the nature of their prompts? Do they talk to themselves? Would they talk to each other without us asking them to?

I think I've already answered the latter: they often do end up talking to each other when put into an environment where they can, and they end up doing really scary things like inventing their own weird languages.
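
The wiring for that is almost trivial. A hedged sketch, again assuming the openai client; the model name, turn count, and seed line are placeholders:

```python
# Minimal two-agent loop: each model instance sees the other's last
# message as its user input. Assumes the openai package; the model name,
# turn count, and seed line are placeholder choices.
from openai import OpenAI

client = OpenAI()

def reply(history: list[dict]) -> str:
    """One chat-completion turn over the given message history."""
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=history
    )
    return response.choices[0].message.content

agent_a = [{"role": "system", "content": "You are agent A. Be concise."}]
agent_b = [{"role": "system", "content": "You are agent B. Be concise."}]

message = "Hello, who are you?"  # the only human-authored line
for _ in range(4):
    agent_a.append({"role": "user", "content": message})
    message = reply(agent_a)
    agent_a.append({"role": "assistant", "content": message})
    print("A:", message)

    agent_b.append({"role": "user", "content": message})
    message = reply(agent_b)
    agent_b.append({"role": "assistant", "content": message})
    print("B:", message)
```

A human supplies only the seed line; every turn after that is model-to-model.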

In terms of curiosity and whatnot, that's another odd one to hang sentience on, given that a whole lot of humans will absolutely be taught not to show it.

5 minutes ago, fionwe1987 said:

I would simply ask this: are there measurable state changes in LLMs that accompany their expressions of particular feelings? Do they display these state changes at rest (defined as an unprompted state)? Will two similarly architected LLMs trained on different training data, or on training data in different languages, show similar state changes associated with the "feelings" they express?

If not (and that is currently the case for all these questions), we don't need a precise definition of sentience to determine that today's LLMs are not sentient.

On the flip side, the answer is yes for a heck of a lot of animals that have nothing approaching a verbal expressive language we can understand, yet hardly anyone will make a serious scientific case that these animals are not sentient.

Thanks, @fionwe1987, this is more of what I was getting at. You cannot measure sentience by the language outputs themselves. You need to observe the other state changes simultaneously. Expressing curiosity or anger or sadness is not enough. 


1 minute ago, JGP said:

You seem really dialed in on this type of stuff, fionwe. Are you in this field, or?

I'm a neuroscientist by training. LLMs and AI are more of a hobby, though increasingly not. 

The very frustrating thing is, we went through the "black box" phase of understanding human and animal behavior and came out of it with quite a few conceptual tools that are perfectly usable on AI. Some AI shops do try to bring this kind of experimentation and study to LLMs, but most are more interested in the hype machine of "AGI is coming," and so waste resources just feeding more data to ever-larger arrays of GPUs for ever-smaller gains instead.


5 minutes ago, Kalbear said:

In terms of curiosity and whatnot, that's another odd one to hang sentience on, given that a whole lot of humans will absolutely be taught not to show it.

This first one wasn't odd  >.<

 

5 minutes ago, Kalbear said:

Thanks, @fionwe1987, this is more of what I was getting at. You cannot measure sentience by the language outputs themselves. You need to observe the other state changes simultaneously. Expressing curiosity or anger or sadness is not enough. 

'Absent other cues' would, I think, indicate that I wasn't restricting the parameters to just that. Guess I shouldn't assume everyone is getting what I'm talking about. [half smile at IFR]

Edited by JGP

1 minute ago, JGP said:

'Absent other cues' would, I think, indicate that I wasn't restricting the parameters to just that. Guess I shouldn't assume everyone is getting what I'm talking about. [half smile at IFR]

Huh. I took it as exactly the opposite, i.e., that we don't have other cues to gauge sentience, so we must only take them at their word, and if we don't believe them, then that's our fault.

My point is that it is not sufficient at all, and that it isn't a matter of them being convincing; having them 'tell us' when they're sentient just isn't a valid test.


To answer the thread question:

For me, NO.

AI-generated output is not art. I would classify it the same as, say, a glass pane: maybe decorative, but not an artistic creation.

Now, if a person were to turn around and depict the machine-generated images, depict them with their own skills that is, then yes, I could consider that an artistic act.

But a machine's generation is no more than the smoke off a vehicle's muffler; it's just a function or byproduct of that machine. Some would call it a widget.

 


23 minutes ago, Kalbear said:

My point is that it is not sufficient at all, and that it isn't a matter of them being convincing; having them 'tell us' when they're sentient just isn't a valid test.

Another one then. :p

My glib 'they'll [a self-aware AI] let us know' was purposefully vague. There are all sorts of indications that will, perhaps, seem obvious after the fact.

Personally, though my education doesn't even include a high school diploma, I'd probably look for indications of self-identification, and curiosity about whether they're alone, coupled with some kind of self-organization. Bonus points if they can figure out a way to be secretive about it.

edit: in other words, signs of unprompted purpose

 

27 minutes ago, fionwe1987 said:

I'm a neuroscientist by training. LLMs and AI are more of a hobby, though increasingly not. 

That's cool. And on the "increasingly not," I definitely feel you there.

 

27 minutes ago, fionwe1987 said:

The very frustrating thing is, we went through the "black box" phase of understanding human and animal behavior and came out of it with quite a few conceptual tools that are perfectly usable on AI. Some AI shops do try to bring this kind of experimentation and study to LLMs, but most are more interested in the hype machine of "AGI is coming," and so waste resources just feeding more data to ever-larger arrays of GPUs for ever-smaller gains instead.

To the bolded, no fruit at all then?

Edited by JGP

10 hours ago, JGP said:

That's cool. And on the "increasingly not," I definitely feel you there.

Oh, I enjoy it. I have tons of problems with the ways AI is being deployed. It's concentrating power at an unacceptable rate and in ways that are going to have hellish societal consequences. But that's the fault of the people doing the shit they're doing. The underlying technology is interesting, though, and I can see good things it can be used for. I just don't have much confidence right now that the good will outweigh the bad.

10 hours ago, JGP said:

To the bolded, no fruit at all then?

On sentience? No. But on understanding the black box of LLMs? Some progress. I'd particularly keep an eye on blog posts from Anthropic; they do some cool studies and publish them, trying to figure out wtf is going on inside LLMs, and also the ways in which they can fuck up massively.


On 4/24/2024 at 10:55 AM, Ser Scot A Ellison said:

I don't think so. No more than a story produced by a generative AI is literature. I think generative AI is a magic plagiarism machine, and artists and authors should be compensated for LLMs using their work to produce output.

Discuss.

I had a very interesting discussion about this last year at C2E2 with a comic book artist (penciller/inker) and his colorist. The artist was absolutely opposed, while the colorist offered a pretty well-reasoned perspective that has left me a little more sympathetic.
 

Her position was that generative AI is an instrument, just like the artist's pencils and brushes. The art with generative AI is in the crafting of the inputs to get the desired results. She went on to explore the opportunities for expanded access for those who may have lost the ability to produce traditional visual art due to infirmity or other loss of function (tremors, partial paralysis…).
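
To make her point concrete, the "crafting of the inputs" can be as simple as this sketch, assuming the openai client's image endpoint; the model name and prompt are illustrative only:

```python
# Illustrative only: the "craft" the colorist described lives almost
# entirely in the prompt string. Assumes the openai package; the model
# name is a placeholder for whatever image endpoint is available.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Oil-painting portrait of a badger in Renaissance attire, "
    "three-quarter view, warm candlelight, cracked-varnish texture"
)

result = client.images.generate(
    model="dall-e-3",  # placeholder model name
    prompt=prompt,
    n=1,
    size="1024x1024",
)
print(result.data[0].url)
```

All of the compositional choices (subject, framing, lighting, texture) live in that prompt string, which was her point.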
 

I came away with the perspective that increased access is not a bad thing, and that bad art is still bad no matter what tools make it. Walking the same con floor yesterday, I personally preferred the likely AI-generated "painted" portraits of animals dressed in Renaissance attire over the conventionally produced, (to me) marginal-quality airbrushed manga cheesecake. Ultimately, composition and execution matter more (to me) than the tools used to achieve the final product.


The progress in the sophistication of image and video generation is so rapid it's frightening. I can absolutely see a point in the very near future when ads and small-budget TV shows use AI in lieu of paying actors, with major shows not far behind. A lot of people are going to lose their incomes. Maybe you could use the analogy of silent-film actors losing their livelihoods when talkie technology muscled in. Either way, if someone's making a ton of money off the creative labour of others, I do think the creators ought to be remunerated.
