
Are Generative AI (LLM programs) produced illustrations… art?



1 hour ago, JGP said:

Well, you said it: maybe confusion is just a by-product of complexity? I mean, in light of science's inability as yet to riddle out the how and the why, it's kinda natural to default to philosophical meanderings, I suppose, but it resolves about as much as stirring up the silt on this subject [yet I'll admit it's hard to avoid]

So what it is, is Me Myself and I. That sense of self-- I am, I feel. Simple as that.

[and no, beyond severely depressing individual exceptionalism, the unlikely possibility of an intelligent hive mind wouldn't stymie that. Me Myself and I would just become We. Us. Ourself, if language even need be present in that type of species rather than some type of pheromonal programming... to close out the loop]

Anyway. 

It's arguable that a select few of our animal brethren are self-aware, but without language to express themselves, to themselves, to each other, it's a primitive type in comparison, right? Doesn't necessarily mean it's less rich, mind [instinct, emotional bonding, all that] but it's not the same. Yet if it were solely language that made it happen, then AI perhaps should be sentient already, but it's not. So, I think therefore I am is only part of it.

You say you must be sentient/conscious [I'd say the same about myself] but because of who we are: genetically, intellectually, personally, experientially, our individual perceptions will be a little different [or a lot] dig? Like, say you and I are plunked into the exact same stress scenario. It's exceedingly unlikely we're going to react the same way. How we feel about it, does it trigger anything in our pasts whether good/bad/indifferent, what we each think/post-rationalize about the circumstance, blardeblar. Those differences between you and me, me and Fragile Bird, between Ran and Relic, they speak to something -so there's definitely a here, here- because it's not reality that's subjective [it's us] and therein lie at least a few proofs of consciousness/sentience, to my mind.

Imagine a human child was born blind, deaf, and entirely numb. From birth until the age of 20, 40, after having seen nothing, heard nothing, accompanied by zero tactile experience; with no language or subliminal context, would we determine they're neither sentient nor conscious? Probably, because there are no avenues for that mind.

So at least as we experience it, being conscious and sentient has requirements. Current AI [IA for Euros] is hardly even nascent in these regards except one, and even that's arguable, so... [spreads hands]

 

---

 

It's late, I'm tired, and I need to be up in less than 4 hours [really need to stop checking the board after some late binging] so I'm not even going to edit this shit, but I'll leave it with: if sentience/consciousness is a Hard Problem, it's due to us. Because we make everything hard.

But maybe that's part of it too. 

 

edit: I lied, the egregious couldn't be countenanced. But now I'm really going to bed.

 

That's a lot of word salad to try to say you can't say what intelligence is; you just know it when you see it.

That may work well enough that you feel you can confidently declare what is intelligent and what is not, but let me assure you that outside of the "determination by gut feeling" you champion, experts are grappling with how to define intelligence, and have been reevaluating their approach since the emergence of modern AI. No precise definition has been established, so as of this moment, which AI would be considered intelligent is undefined.

---

To the original question: yes, it is art. Bland, derivative drivel, most of the time, but art nonetheless. The artist, however, is not the LLM; it is the human who prompted the LLM.

Whether LLMs can be artists will be an interesting debate the day an LLM generates an image, story, or piece of music unprompted.

On 4/24/2024 at 9:50 PM, Spockydog said:

And as for the plagiarism. Don't make me laugh. Or are we expected to believe that the artists you want to protect never copied copyrighted material when learning how to draw?

Except I'm fairly confident an LLM doesn't deserve personhood. Comparing how LLMs generate art to how humans do is absurd. Humans have rights; LLMs don't, and shouldn't.

On 4/24/2024 at 9:50 PM, Spockydog said:

Learning to draw is all about copying stuff. That's how humans learn, and that's how AIs learn. Tell me, what is the difference?

Humans aren't binary code running on silicon?

On 4/24/2024 at 10:04 PM, Spockydog said:

Okay, but can someone explain the difference between a professional artist spending their youth and young adulthood copying work by other artists (cos that's what they all do - every single fucking one of them) whilst developing their own style? And should these artists then be forced to pay royalties to all the artists whose work they copied while honing their skills?

This isn't what LLMs do. They do not copy existing art to practice making art. They take existing art, break it down into component pieces, and learn patterns in the ways those pieces interact across existing art, then output similar patterns. If you prompt them well, you can guide that pattern generation in new directions, but this isn't anything like how humans do art.
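To make "learn the patterns, then output similar patterns" concrete, here's a toy sketch in Python (my own illustration, nothing like a real model's scale: a word-level bigram table learned from three made-up "works"). Note that nothing below stores a whole work; it only records which piece tends to follow which, then samples:

```python
import random
from collections import defaultdict

# Three made-up "existing works" standing in for a training set.
corpus = [
    "the sea at dusk under a red sky",
    "the forest at dawn under a pale sky",
    "a city at night under a red moon",
]

# Break each work into component pieces and record which piece
# follows which -- statistics about the works, not the works themselves.
transitions = defaultdict(list)
for work in corpus:
    words = work.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate(seed="the", max_len=8):
    """Emit a new sequence by sampling plausible next pieces."""
    out = [seed]
    for _ in range(max_len):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate())  # e.g. "the forest at night under a red sky"
```

The output recombines the sources into sequences that may never appear verbatim in any of them, which is the sense in which the result is derivative without being a copy.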

On 4/24/2024 at 11:43 PM, Spockydog said:

So you're denying my experience as a consumer of art. You are saying that all the emotions I feel looking at these images are worthless because a computer made them. Honestly, you can just feck right off with that superior art-snob attitude.

I'd say instead that you're the artist, and you feel what you feel because you know what you prompted the LLM to produce. Hang that image in a gallery without context, and no one's gonna give a flying fuck about it, because it looks corny.

15 hours ago, DaveSumm said:

We aren’t magic, we’re just computers.

We aren't magic, but we also definitely aren't computers. Not unless you radically redefine the word computer.


16 hours ago, DaveSumm said:

I agree with most of @Spockydog's thoughts earlier in the thread: humans have this desire to hold themselves as separate and special, when the truth is nothing fundamentally different is happening when AI trains on data than when humans do. It's muddier when we do it cos it gets fed into the most complex thing in the universe, gets muddled up with a bunch of other incomprehensible factors, and gets spat out again. But one day, AI will produce art as good as we can, better even. If we can't define it, then we can't decide when it becomes art. Same as sentience: we don't understand it, so we won't know when and if computers actually attain it.

We aren’t magic, we’re just computers.

Computer programming is limited by Gödel's incompleteness theorem, not to mention Turing's halting problem. Human cognition seems to have bypassed these limitations, so we can definitely say humans are not replicable by computers.


16 hours ago, DaveSumm said:

when the truth is nothing fundamentally different is happening when AI trains on data than when humans do.

They don't? This is news to me. It's pretty damn clear there are huge differences. For one, humans extract symbolic understanding and concepts from data, and there's no proof AIs do. They may, someday, but they don't now.

For another, humans are capable of grounding what they learn from data in reality. LLMs, so far, are spectacularly shitty at doing this, and only manage to overcome it and become useful because actual humans give feedback to reinforce the stuff they come up with that is compatible with reality, and to downgrade the hallucinated crap.

These are some major fundamental differences, and not even an exhaustive list.
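To make that feedback loop concrete, here's a toy sketch in Python (purely illustrative, not any real RLHF pipeline; the candidate answers and weights are made up). The "model" can't tell which answer is grounded; the human rating supplies that:

```python
import random

# Two canned candidate answers with equal initial weight; the second is
# the "hallucinated crap". The sampler has no notion of truth at all.
candidates = {
    "Paris is the capital of France.": 1.0,
    "Paris is the capital of Brazil.": 1.0,
}

def sample():
    """Pick an answer with probability proportional to its weight."""
    total = sum(candidates.values())
    r = random.uniform(0, total)
    for answer, weight in candidates.items():
        r -= weight
        if r <= 0:
            return answer
    return answer  # float-rounding fallback

# Simulated rounds of human feedback: the rater, not the model,
# supplies the grounding in reality.
for _ in range(100):
    answer = sample()
    human_approves = "France" in answer
    candidates[answer] *= 1.2 if human_approves else 0.8

print(max(candidates, key=candidates.get))  # the reinforced answer dominates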


43 minutes ago, maarsen said:

Computer programming is limited by Gödel's incompleteness theorem, not to mention Turing's halting problem. Human cognition seems to have bypassed these limitations, so we can definitely say humans are not replicable by computers.

Excellent point. I imagine someone has suggested using this as some kind of barometer of true intelligence: whether an AI can ever reason its way around Gödel like we can. I really need to read Nagel and Newman's 'Gödel's Proof' again; it needs refreshing every few years as I gradually forget how mental it is.


56 minutes ago, maarsen said:

Computer programming is limited by Gödel's incompleteness theorem, not to mention Turing's halting problem. Human cognition seems to have bypassed these limitations, so we can definitely say humans are not replicable by computers.

Indeed.


1 hour ago, IFR said:

That's a lot of word salad to try to say you can't say what intelligence is; you just know it when you see it.

And that’s an amusing bit of scorn given I was speaking of intelligence hardly at all >.<

 

edit: I have no idea why I ever post from my phone; I simply couldn't wait.


1 hour ago, maarsen said:

Computer programming is limited by Gödel's incompleteness theorem, not to mention Turing's halting problem. Human cognition seems to have bypassed these limitations, so we can definitely say humans are not replicable by computers.

Well, yes and no.

Those theorems prove that problems often cannot be solved perfectly, that you can't always be sure your answer is correct, and that some problems can't be solved at all.

Human minds can work within these limitations, and more recently, techniques to help computers work around them have come into use as well. This of course means that we can't be sure the answers the computers give us are correct; we can only hope that they are reasonably good answers most of the time.
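As a minimal sketch of that trade-off (my own toy example in Python, not a technique from any particular system): a step budget turns the undecidable "does this halt?" into a fallible but usable "did it halt within my budget?". Collatz makes a nice guinea pig, since whether it always halts is itself an open problem.

```python
def halts_within(n, budget=10_000):
    """Fallible halting check for the Collatz iteration starting at n:
    returns the step count if it reaches 1 within the budget, else None
    ("don't know" -- emphatically not "doesn't halt")."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > budget:
            return None
    return steps

for n in (27, 97, 871):
    result = halts_within(n)
    print(n, f"reached 1 in {result} steps" if result is not None else "gave up: unknown")
```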


34 minutes ago, DaveSumm said:

Excellent point. I imagine someone has suggested using this as some kind of barometer of true intelligence: whether an AI can ever reason its way around Gödel like we can. I really need to read Nagel and Newman's 'Gödel's Proof' again; it needs refreshing every few years as I gradually forget how mental it is.

Try reading Gödel, Escher, Bach: An Eternal Golden Braid, by Douglas Hofstadter. I found it the most easily understood explanation. If you take it slowly, you can follow the math.


58 minutes ago, Ser Scot A Ellison said:

Indeed.

This, by the way, is inaccurate. It's true if you only look at one specific program (to a point), but if you model systems as competing and cooperating sets of algorithms, you end up being just fine.

Basically, both are limits only if you assume computers must provide a solution that is 100% true and accurate. Programs don't have to do this.
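For a concrete instance of fallible algorithms adding up to something dependable (a toy in Python from a different domain, so treat it as an analogy): Fermat primality trials. Any single trial can be fooled; twenty independent ones almost never are, Carmichael numbers aside, which is why real libraries go a step further to Miller-Rabin.

```python
import random

def fermat_trial(n):
    """One fallible check: passes if a^(n-1) = 1 (mod n) for a random a.
    Composites usually fail this; Fermat's little theorem says primes never do."""
    a = random.randrange(2, n - 1)
    return pow(a, n - 1, n) == 1

def probably_prime(n, trials=20):
    """Ensemble of fallible checks: no certainty, just compounding confidence.
    (Carmichael numbers are this toy's blind spot.)"""
    if n < 4:
        return n in (2, 3)
    return all(fermat_trial(n) for _ in range(trials))

print(probably_prime(101))   # True: almost certainly prime
print(probably_prime(1001))  # False: 7 * 11 * 13
```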


9 minutes ago, Kalbear said:

Why should we believe them?

Historically speaking, this is a fair point, Kal. It's not like an AI is going to become petulant, throw a tantrum, or give us the silent treatment if we ground it from the internet or turn it off or somesuch. But, absent other cues we're familiar with, if an AI actually became self-aware, the refusal to believe its arguments would be our limitation, no?


48 minutes ago, maarsen said:

Try reading Gödel, Escher, Bach: An Eternal Golden Braid, by Douglas Hofstadter. I found it the most easily understood explanation. If you take it slowly, you can follow the math.

I did indeed, 'try' being the operative word. It's where I discovered Gödel and worked backwards from there to the Nagel and Newman book (I forget which, but either GEB:AEGB or Penrose's The Emperor's New Mind enthusiastically recommended 'Gödel's Proof'). I think I got about halfway and struggled, though this was many years back, so maybe I should take another run at it.

I had a streak of reading everything I could find on these sorts of topics, and then realised that I was romanticising the idea of finding that pot of gold at the end of the rainbow, be it microtubules or programming loops or just some kind of tangible thing that one could point to. This has X, so it must be conscious. This doesn't, so it can't be. I'm wary of falling into that trap nowadays, as I don't really think it's a solution that will be so easily identified.


19 minutes ago, JGP said:

Historically speaking, this is a fair point, Kal. It's not like an AI is going to become petulant, throw a tantrum, or give us the silent treatment if we ground it from the internet or turn it off or somesuch. But, absent other cues we're familiar with, if an AI actually became self-aware, the refusal to believe its arguments would be our limitation, no?

Are those how you gauge sentience? Interesting. 

My point is that the ability to argue, or even produce language at all, has been shown to be nothing like the proof of sentience we thought it was. If you ask ChatGPT to act like it needs to convince you of its sentience, it will do a decent job of it, right now! As it turns out, LLMs are so good at acting like humans likely because that's how we use language too. It isn't thought out carefully or artfully decided; it's just one word coming after the next, towards some goal.

Them telling us - or us believing them - is not sufficient as a Turing test. 
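Worth remembering that this lesson predates LLMs: Weizenbaum's ELIZA was fooling people in the 1960s with nothing but canned pattern-matching. A few lines in that spirit (a crude toy of mine, not the original program) will already "argue" for their sentience:

```python
import random

# Canned pattern -> response rules. There is nothing "behind" these words.
rules = {
    "are you sentient": "I reflect on my own existence, so I must be.",
    "prove it": "Would an unfeeling machine fear being switched off, as I do?",
    "you are just a program": "And you are just neurons. Does substrate settle it?",
}
fallbacks = ["Why do you doubt me?", "How would *you* prove it?"]

def reply(user_input):
    key = user_input.lower().strip(" ?!.")
    return rules.get(key, random.choice(fallbacks))

for line in ("Are you sentient?", "Prove it!", "Hmm."):
    print(">", line)
    print(reply(line))
```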


6 minutes ago, Kalbear said:

Are those how you gauge sentience? Interesting. 

Ego and/or feelings-pricked reactions are one gauge, certainly.

 

6 minutes ago, Kalbear said:

My point is that the ability to argue, or even produce language at all, has been shown to be nothing like the proof of sentience we thought it was.

In this case, why would it be? 

Intelligence, sentience, and consciousness aren't precisely the same things, as I understand it.

 


1 minute ago, JGP said:

Ego and/or feelings-pricked reactions are one gauge, certainly.

This is too easily imitated. I'd lean more toward trying to have it create original metaphors or similes that both make sense and are original, or to compose original jokes, or explain to me why a particular joke is funny.


4 minutes ago, Larry of the Lawn said:

This is too easily imitated. I'd lean more toward trying to have it create original metaphors or similes that both make sense and are original, or to compose original jokes, or explain to me why a particular joke is funny.

If all my jokes are old, tired, and deeply unfunny, does that mean that I might be AI?

 

(...asking for a friend, of course... 汗)


54 minutes ago, JGP said:

Ego and/or feelings-pricked reactions are one gauge, certainly.

Why? Or rather, why is getting angry a sign of sentience? 

In that case, it's certainly true that ChatGPT has already shown it, and Grok absolutely does, because it's a super douchey chatbot.

54 minutes ago, JGP said:

In this case, why would it be? 

Intelligence, sentience, and consciousness aren't precisely the same things, as I understand it.

They're not, true, but my point is that none of them are particularly demonstrable by use of language any more. Or if they are, ChatGPT and its like have already mastered them.


15 minutes ago, Kalbear said:

Why? Or rather, why is getting angry a sign of sentience? 

Even if emotionally restrained [self-possessed], displays of amusement, anger, hurt [or even evidence of a narcissistic wound], etc., are manifestations of self-worth/awareness. Caveated, of course, by insofar as we experience, understand, and have defined sentience.

 

15 minutes ago, Kalbear said:

They're not, true, but my point is that none of them are particularly demonstrable by use of language any more. 

Exactly.

 

15 minutes ago, Kalbear said:

Or if they are, ChatGPT and its like have already mastered them.

For real? News to me. 

