
ChatGPT - AI text


Ser Not Appearing


27 minutes ago, Ran said:

They don't contain any databases of images from which to recomposite works. Stable Diffusion and Midjourney AIs work by basically learning much the way a person learns: they see images that are described to them and form ideas of what constitutes "pop art" or "dog" or "gouache", learning the underlying "rules" behind these things, and when you ask it for a "pop art image of a dog painted in gouache" it takes a completely random noise-filled image and attempts to manipulate it over multiple iterations to create an image that it thinks conveys those ideas. 

The models they use are full of weights in a "latent space" of thousands of dimensions, but there are no images contained in it, just the "rules" for various concepts which it then tries to run through, concept to concept (and sub-concept to sub-concept) to get at an end point. Human artists are much the same -- our brains contain a conceptual cloud from which we can pluck meaning, so that we can imagine strange things and put them to paper/canvas/screen.
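For anyone who wants to see that process concretely, here is a rough, illustrative sketch of text-to-image generation using the open-source Hugging Face diffusers library. The checkpoint ID, seed, and settings are assumptions for demonstration, not a claim about how Midjourney works internally.

```python
# A rough, illustrative sketch of the process described above, using the
# open-source Hugging Face diffusers library. The checkpoint ID, seed, and
# settings here are assumptions for demonstration only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a public Stable Diffusion checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The pipeline starts from pure random noise in latent space and runs the
# denoising network for a number of steps, nudging that noise toward
# something that matches the text prompt.
image = pipe(
    "pop art image of a dog painted in gouache",
    num_inference_steps=30,
    guidance_scale=7.5,  # how strongly to steer toward the prompt
    generator=torch.Generator("cuda").manual_seed(42),  # the random starting point
).images[0]

image.save("dog_pop_art.png")
```

Changing only the seed gives a completely different image from the same prompt, which is the "completely random noise-filled image" part of the description above.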

An interesting perception of how SD and MJ work, and maybe it's accurate.

Calls into question various cases of near duplication of an artistic piece, their inability [yet] to do good hands [which is likewise interesting, because many live artists struggle with hands] and, you know, as alluded to before, pasting a random [or not so random] artist's signature into a prompted AI piece.

I don't know, man.

 

  


1 hour ago, JGP said:

An interesting perception of how SD and MJ work, and maybe it's accurate.

Calls into question various cases of near duplication of an artistic piece

So, the Mona Lisa I referred to. Some of the others may be examples not of the standard model, but of one of the new models people have been generating where they take, say, very specific works and try to train a model to do stuff "like it"... but they end up doing something called overfitting, where the weights end up strongly pulling toward sort-of replicating a work. Others still are people literally taking an image from elsewhere and then using the AI to manipulate that image into something else -- for example, a lot of those Blood of Dragons portraits are based on screenshots from Second Life, and then the AI was used to give them a more illustrated style. But that stuff happens today with people using Photoshop plug-ins that apply styles and so on to others' images.

In most of the overfit cases, these are inadvertent -- even the Mona Lisa being roughly replicable is inadvertent, the result of the database the model was trained on containing thousands and thousands of copies of the Mona Lisa, causing it to "overfit". My understanding is that they've been working on fixing that.

I'm trying to think of the most famous piece of fantasy art, and I think about Frazetta's Death Dealer, and here are the results when someone tried to use AI to generate the Death Dealer character. There's certainly a Frazettian quality to the images, but it's not that iconic character, much less a replication of the painting.

1 hour ago, JGP said:

, their inability [yet] to do good hands [which is likewise interesting, because many live artists struggle with hands]

There are some similar reasons for why the AI has trouble with hands, but there are also some very different reasons. Mainly, the AI is very dumb. It has "studied" artistic and photographic depictions of literally millions of hands in all sorts of positions, and they are so many and so varied -- seeming to have varying numbers of fingers, etc. -- that you can often get results with too many fingers, or too few, or boneless squiggly fingers, things that even amateur artists know aren't right.

This is something that can probably be improved upon, but right now it's an area where most people composite in hands from DAZ Poser models or CC or public domain or stock images and run them through the AI again to get things fixed.
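For the curious, here is a rough sketch of that "composite, then run it through the AI again" step, using the img2img mode of the same open-source diffusers library; the file names, prompt, and strength value are made-up placeholders.

```python
# A rough sketch of the "fix hands by compositing, then re-running through
# the AI" workflow mentioned above, using Stable Diffusion's img2img mode.
# File names, prompt, and strength are illustrative placeholders.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# An image where a reference hand (e.g. from a posed 3D model or a stock
# photo) has already been pasted in with an ordinary image editor.
init_image = Image.open("portrait_with_pasted_hand.png").convert("RGB")

# strength controls how much the model may repaint the input: low values
# keep the composited hand largely intact, high values redraw more freely.
result = pipe(
    prompt="portrait of a knight with detailed hands, oil painting",
    image=init_image,
    strength=0.4,
    guidance_scale=7.0,
).images[0]

result.save("portrait_hand_fixed.png")
```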

1 hour ago, JGP said:

and, you know, as alluded to before, pasting a random [or not so random] artist's signature into a prompted AI piece.

So, the AI has looked at billions of images. Among the things it learned when looking at millions upon millions of paintings is that "paintings" often have some squiggly something in a corner, and so in the latent space of concepts there is something attached to "painting" or "illustration" or "sketch" saying that sometimes there's that squiggly, text-y thing. Thus there's always a chance that a particular prompt and a particular random seed starting point will lead to something that looks like a signature even if you haven't named any artists (or a watermark, or even something that looks like an attempt at a full-blown book cover).

From my own experience, I've never seen it replicate an actual signature. It doesn't really understand text at all (some of the other image generation AIs out there are better at it).

 


5 hours ago, williamjm said:

I suspect if it did that then the quality of the plotting might be more Benioff and Weiss level than GRRM.

Hey now, we shouldn't offend our future AI overlords. ChatGPT, if you're listening, we didn't mean to compare you to Benioff and Weiss - we know you are so much more capable than them! 

:P

 


@Ran

 

This is just one example [the Adobe part], to my mind, that puts paid to any of that.

 

Claiming AI of this sort isn't stealing from artistic labor, or didn't steal, is absurd. Even if, yes, the AI is dumb.

It wasn't comprehensively taught the principles of how two- or three-point perspective work. If you prompt it for a three-point perspective architectural component in the background, it'll pull from what it's scraped [and the industry term is scraped] from works already completed.

Ergo, human errors and inaccuracies are repeated if not exacerbated. 

Which leads to hands. Between general anatomy [palm, fingers, joints] and all the various vanishing points required depending on the hand and finger posing, an AI that actually understood anatomy and perspective should be better than a human artist at hands [in general]. Hand [and foot even] illustration requires technical proficiency if you can't fudge it stylistically.

But the programmers didn't bother with base principles, or technicalities. It was much easier to just let AIs into Adobe's/Artstation's/DeviantArt's whatever servers and just... get thieving for the head start. 

 

 

 


10 minutes ago, JGP said:

This is just one example [the Adobe part] to my mind, that puts paid to any of that.

I mean, how do people think all the machine learning tools in PS learned how to do things like content-aware fill? 

29 minutes ago, JGP said:

Claiming Ai of this sort isn't stealing from artistic labor, or didn't steal, is absurd. Even if, yes, the AI is dumb.

It didn't "steal" anything, though. It studied the images just as aspiring artists study images. A 5GB model checkpoint cannot possibly contain the ~5 billion images in the dataset used to generate its conceptual latent space model.
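A back-of-the-envelope check of that claim, with deliberately rough numbers:

```python
# Rough numbers only: a Stable Diffusion v1 checkpoint is on the order of a
# few gigabytes, and the LAION dataset it was trained on is on the order of
# billions of image-text pairs.
checkpoint_bytes = 5 * 1024**3        # ~5 GB of model weights
num_training_images = 5_000_000_000   # ~5 billion images in the dataset

bytes_per_image = checkpoint_bytes / num_training_images
print(f"{bytes_per_image:.2f} bytes of weights per training image")
# ~1 byte per image: far too little to store the images themselves, since
# even a tiny compressed thumbnail is hundreds of bytes.
```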

Quote

it'll pull from what it's scraped [and the industry term is scraped] from works already completed.

I mean, I'd have to "scrape" away on the web to learn that concept too, I'd guess. I'd certainly need visual examples. There's no reason I can think of why an AI dataset can't look at the same stuff as I can look at.

Quote

But the programmers didn't bother with base principles, or technicalities.

You're misunderstanding the principles of the AI. The AI knows nothing about anything until it's fed information. It does not have its own fingers to compare to. It does not have reference models it can pose. It does not have rules given to it. It saw images with descriptions attached and was set loose to try and understand the underlying conceptual rules inherent in all of these, with the only tweaks being to the underlying algorithms that help it walk its way across the latent space to a result.

They could do stuff to just say, "Okay, and after you spit out a hand, compare it to a set of rules *we* spell out for you (rules we surely picked up from books and visual guides and so on)"... but the whole point of the diffusion model is for the AI to learn by itself. Kludges to "explain" basic anatomy to it move away from the basic principle of the process they're using.

There is no theft involved here, in the sense of copyright violation. Case law like Authors Guild v. Google already sets the precedent that there's nothing that inherently prevents the doctrine of fair use from applying when computer systems look at copyrighted material algorithmically in the course of transformative uses.

 


26 minutes ago, Ran said:

You're misunderstanding the principles of the AI. The AI knows nothing about anything until it's fed information. It does not have its own fingers to compare to. It does not have reference models it can pose. It does not have rules given to it. It saw images with descriptions attached and was set loose to try and understand the underlying conceptual rules inherent in all of these, with the only tweaks being to the underlying algorithms that help it walk its way across the latent space to a result.

Yeah, could totally be misunderstanding the principles of the AI. But that's the fault of the programmers.  

Consider it like this:

Sticking with perspective [composition is a whole other thing], the programmers likely what, parsed as many architectural imaginings as they could and said, here's a 3-point perspective of the Parthenon. Here's a 3-point of a downtown urban landscape. Here's this, that, here, etc etc. So, later, when you prompt it for a 3-point of whatever, say a semi-concealed fortress of the early Noldor, what kinds of checks do you think the AI does, or doesn't do?


23 minutes ago, Ran said:

There is no theft involved here, in the sense of copyright violation. Case law like Authors Guild v. Google already sets the precedent that there's nothing that inherently prevents the doctrine of fair use from applying when computer systems look at copyrighted material algorithmically in the course of transformative uses.

 

Yet... 

 

 

Legislatively, this could get really sticky.

And yes, to loop back a second, I do take nonconsensual usage of others' labor as theft.

 

 


23 minutes ago, JGP said:

I actually paused to consider dropping 200 on Jasper just to test some shit.

WTF is wrong with me lol

For 100 I'll let you test some shit on me. Calculations, reviews, editing. Waddever you need playa. 

Whips? Chains? Whip 'n chain 'em up baby 

Hell, for 125 we can talk Chain Whips. 

 

(this is a joke) 


17 minutes ago, JGP said:

 

Sticking with perspective [composition is a whole other thing], the programmers likely what, parsed as many architectural imaginings and said, here's a 3-point perspective of the Parthenon. Here's a 3-point of a downtown urban landscape. Here, here, here, etc etc. So, later, when you prompt it for a 3-point of whatever, say a semi-concealed fortress of the early Noldor, what kinds of checks do you think the AI does, or doesn't do?

If there are a lot of depictions of the Parthenon being used in examples of 3-point perspective, it's possible that there'd be an overfit. But the reality is...

Here is a query of OpenAI's CLIP, which is used to associate a text prompt with relevant images based on their database, and you can see that the vast majority are just images of guides for 3-point perspective. And here is the result of searching a very, very limited selection of the LAION database (the website indexes something like 0.2% of LAION, so you might suppose the 3 images returned here correspond to roughly 1,500 images in the full dataset explicitly referencing 3-point perspective).
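The linked tools aren't reproduced here, but the underlying idea -- CLIP scoring how well images match a text prompt -- can be sketched with the openly released checkpoint; the image file names below are placeholders.

```python
# A minimal sketch of how CLIP matches a text prompt against images, using
# the openly released checkpoint on the Hugging Face hub. This is only the
# idea behind the search tools linked above, not those tools themselves;
# the image file names are placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

images = [Image.open(p) for p in ["perspective_guide.png", "parthenon_photo.png"]]
inputs = processor(
    text=["a 3-point perspective drawing"],
    images=images,
    return_tensors="pt",
    padding=True,
)

outputs = model(**inputs)
# logits_per_image[i] is the similarity of image i to the prompt;
# a higher score means CLIP considers it a closer match.
print(outputs.logits_per_image.squeeze(-1).tolist())
```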

Quote

Legislatively, this could get really sticky.

The artist whose comic allegedly lost copyright protection stated that the claims made were false -- the Copyright Office challenged the copyright, but the artist has responded. No decision has been made as to the copyright status of the comic.

ETA:

Here's what SD 1.4 and the Anything v3 model (which I believe is slanted towards anime and manga) make of the prompt "a 3-point perspective of a semi-hidden early Noldor fortress".

A simple, free way to play with Stable Diffusion and a couple of its models is here, but it lacks a lot of the features available.
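For anyone who wants to reproduce that kind of side-by-side comparison locally, here is a sketch of running the same prompt and seed through two different checkpoints; the Anything v3 hub ID is a guess and should be swapped for whichever checkpoint you actually have.

```python
# A sketch of comparing two checkpoints on the same prompt and seed, as in
# the SD 1.4 / Anything v3 comparison above. The second model ID is a guess;
# substitute whichever checkpoint you actually have downloaded.
import torch
from diffusers import StableDiffusionPipeline

prompt = "a 3-point perspective of a semi-hidden early Noldor fortress"

for model_id in ["CompVis/stable-diffusion-v1-4", "Linaqruf/anything-v3.0"]:
    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    generator = torch.Generator("cuda").manual_seed(1234)  # same seed for both models
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"noldor_fortress_{model_id.split('/')[-1]}.png")
```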


12 minutes ago, Ran said:

The artist whose comic allegedly lost copyright protection stated that the claims made were false -- the Copyright Office challenged the copyright, but the artist has responded. No decision has been made as to the copyright status of the comic.

Alas

 

 


Sorry Ran, this is probably frustrating for you. I’m half distracted and am being pulled away by the antics of my daughters and their friends who are over. 
 

Would totally swap with you right now lol


I'm not sure how to articulate it well, but I think there's something to be said for observing a practical difference in the efficiency of content production. Even if we assume that AI follows the same creative process as a human in terms of "scraping" an understanding, the speed and efficiency with which it can produce "new" content renders the process rather distinct.

The process is not the same from end to end. How people interact with that reality and what they think it behooves us to do will vary, but I don't think treating the two processes as practically identical is really accurate.


I think that this difference raises some pretty important things to consider. Again, maybe you consider them and decide that it's not worthy of new laws or anything, but here's one hypothetical and just the most basic, surface considerations:

Imagine a new artist with a distinct style that is extremely popular. For other human artists to study that style and reproduce or mimic it, they'd have to invest a significant amount of time, and each artist can only be working on one piece at a time. There will only be so many specialized, capable people who could do that with enough accuracy to truly capture the look and feel. There's a limit to the amount of content that could truly rival the original creator's work in terms of being nearly indistinguishable.

Some of these AI tools will eventually enable any and every person in the world to produce such mimicry and they'll be able to do it in seconds with iterations and tailoring to get the work even closer to exact. Every single person could do this without limits. As many times as they want. With extreme efficiency. There's every possibility that the original artist has no chance to continue earning, especially as these tools become more and more precise with the way in which they can mimic.

It's a very real and practical difference, with rather significant implications in terms of potential outcomes.

 


9 hours ago, IFR said:

Not in the near future, I suppose. AI is highly parameterized for now. But AI self-improvement (independent of programmer influence) is a very likely eventuality.

My point is that most of what constitutes "creativity" is really a prosaic thing and within the grasp of even a parameterized AI with coaxing by programmers.

Perhaps, but I feel that's where we get to the flying cars analogy. Or self-driving cars even. We're supposed to be there now. We aren't yet, and may never be. I think the most probable result is that we change the scope or lower our expectations. 

I think there are actual prototypes of "flying cars" now, but if you need a runway to take off and land, it's really more of a plane that can be compacted to the size of a car. It's less functional as a car than a helicopter as far as flying goes.

Self-driving cars seem reachable but have always been three years away for at least the past eight years. I think we're at the point where we adapt to the limitations of the cars rather than work through the final hurdles.

I have no idea about art, but as far as creative writing goes, I feel that we're still at the horse-drawn carriage stage of the flying-car timeline. 

 


1 hour ago, Ser Not Appearing said:

I think that this difference raises some pretty important things to consider. Again, maybe you consider them and decide that it's not worthy of new laws or anything, but here's one hypothetical and just the most basic, surface considerations:

Imagine a new artist with a distinct style that is extremely popular. For other human artists to study that style and reproduce or mimic it, they'd have to invest a significant amount of time, and each artist can only be working on one piece at a time. There will only be so many specialized, capable people who could do that with enough accuracy to truly capture the look and feel. There's a limit to the amount of content that could truly rival the original creator's work in terms of being nearly indistinguishable.

Some of these AI tools will eventually enable any and every person in the world to produce such mimicry and they'll be able to do it in seconds with iterations and tailoring to get the work even closer to exact. Every single person could do this without limits. As many times as they want. With extreme efficiency. There's every possibility that the original artist has no chance to continue earning, especially as these tools become more and more precise with the way in which they can mimic.

It's a very real and practical difference, with rather significant implications in terms of potential outcomes.

 

Mmn hmn. 

The scariest part about the technology for me, and there's plenty to be intimidated by, is going to be the spoofing. Completely, maybe even perfectly, faked footage, able to be utilized by anyone with more than a thimbleful of imagination, to whatever purpose.

I don't like that at all.   


4 hours ago, Ser Not Appearing said:

I think that this difference raises some pretty important things to consider. Again, maybe you consider them and decide that it's not worthy of new laws or anything, but here's one hypothetical and just the most basic, surface considerations:

 

This already happens in fashion. Fashion houses take a year or even a year and a half for a major collection, only to know that within weeks or even days the first knock-offs taking their carefully curated ideas and creativity will come into existence. The "it" item of fashion of the week may end up cloned by literally dozens of fast-fashion manufacturers.

Why do they get away with it? Because, again, design and style are not copyrightable. And they probably shouldn't be, no matter how quickly someone can copy a design or a style.

 


5 hours ago, Ran said:

This already happens in fashion. Fashion houses take a year or even a year and a half for a major collection, only to know that within weeks or even days the first knock-offs taking their carefully curated ideas and creativity will come into existence. The "it" item of fashion of the week may end up cloned by literally dozens of fast-fashion manufacturers.

Why do they get away with it? Because, again, design and style are not copyrightable. And they probably shouldn't be, no matter how quickly someone can copy a design or a style.

 

 

I'm not into style or fashion so this could be a completely uneducated opinion ... but it strikes me as a very different industry and a very different creative process and outlet. Even just the techniques and expertise required to replicate something are different.

I mean, I could go into my closet, take one of my favorite shirts with the fit I most prefer, do some measurements, experiment with the sewing machine, and get pretty close to it rather quickly - and I was far from a virtuoso in home-ec. I couldn't begin to replicate a painting and get the style correct. There's a lot more raw talent, training, and expertise that goes into it ... for humans.

That's not to say the example is meaningless. It would be hard to find a direct comparison. I have often thought of the industrial revolution in terms of how machines were able to take over the work of many people due to just sheer efficiency, and I think there are some connections to AI taking over art. But there are also some really meaningful distinctions that leave me feeling like such a comparison just doesn't really do it justice.

Ultimately, that's why I think giving specific examples of some of the challenges and impacts is meaningful. AI art completely shifts the balance of how an artist can make a living and whether or not his style truly remains his own. And there are follow-on impacts in terms of whether or not you even get human artists innovating new styles anymore because the process no longer pays.

It's actually a bit more like journalism, in my mind. The internet killed journalism and it's not just that journalists lost their jobs and we have something new or better. In fact, the loss of true journalism is quite detrimental to society overall. It has significant value beyond what you can get online for free but monetizing it is no longer done effectively / is no longer possible ... and so it is, by and large, dead and dying.

I quite think that AI art can do the same thing to art and innovation and that the impacts on society will be similarly significant and negative.


There's probably more journalism being done today than at any point in history. Traditional journalism has had a hard road, but then again the success of the NYT has shown it can be done, while independent journalists are making livings (even quite good ones) on newfangled stuff like Substack. There's always been shitty journalism, looking back to the muckraker days, the days of yellow journalism, etc. If modern internet-based audiences are incentivizing bad journalism, to some degree that's on the audience.

Let's imagine there are laws forbidding AIs to contain any work that is under copyright unless that work is licensed.

Who is going to benefit from this? Well, who has huge libraries of stock images and art that they already own that they can use to their heart's content, and deep pockets to buy licenses for more? It's the likes of Disney and Adobe who'll be able to leverage their art and image assets to populate models to do what they please, while individuals will find themselves locked out of having access to this incredible tool. You'll get to pay Adobe CC a bunch of money monthly to use their AI models to produce what you want, while Disney will do stuff like Lensa-style Marvel avatars for you (provided you pay them a few bucks for the app), but you'll have to depend on corporations to do it for you until someone can get money together to fund a process of creating a public domain database to train models on, one where each and every entry is verified to be copyright-free. And while that's being done, the corporate IP machines will have free run of the industry, further advancing their models, entrenching themselves.

I  personally can't see any legal or regulatory framework that's reasonable. Anything that legislates away what AIs can do is going to hurt people, not help them. 


Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach
With the rise of the popular new chatbot ChatGPT, colleges are restructuring some courses and taking preventive measures.

https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html

Quote

 

...  ChatGPT, Ms. Shackney said, sometimes incorrectly explains ideas and misquotes sources. The University of Pennsylvania also hasn’t instituted any regulations about the tool, so she doesn’t want to rely on it in case the school bans it or considers it to be cheating, she said.

Other students have no such scruples, sharing on forums like Reddit that they have submitted assignments written and solved by ChatGPT — and sometimes done so for fellow students too. On TikTok, the hashtag #chatgpt has more than 578 million views, with people sharing videos of the tool writing papers and solving coding problems.

One video shows a student copying a multiple choice exam and pasting it into the tool with the caption saying: “I don’t know about y’all but ima just have Chat GPT take my finals. Have fun studying.”

 

At the same time, it could be invaluable to authors who have stalled out before finishing.


 

