Star Wars: a story for every fan? (Andor Spoilers)


Ser Scot A Ellison

Two episodes of The Bad Batch today, and they are good ones. Seeing more of the Senate and the Imperial machinations to get rid of the clones was great.

Spoiler

Liked Palpatine scapegoating the middle-management antagonist that was Rampart.

But Crosshair was missed. I hope we get to see the consequences of these episodes from his perspective.


They're doing an interesting job balancing some of that dark, gritty feel of where the Empire is during Andor and eventually ANH... and remembering that this is still a cartoon that children are likely to be watching...


I know I'm not the target audience for this show, but most episodes of The Bad Batch are just meh. Clone Force 99 are too childish, and since it's a kids' show, I accept that. But episode 7 (The Clone Conspiracy) is the exception (haven't watched episode 8 yet). Very engaging episode.

Episode 7 reminds me of the Mando episode of BoBF. Subpar show with superb "cameo" episode. Off to watch Truth and Consequences now.


30 minutes ago, Ran said:

Jesus christ. Vocal AI is becoming aaaaamazingly good.

Really is. Other than one part where it seemed to jump from Guinness to McGregor somehow, it was very smooth.


7 hours ago, DaveSumm said:

Between this and ChatGPT, it's getting worrying for writers. I never thought I was in danger of machines taking over my job.


1 hour ago, Darryk said:

Between this and ChatGPT, it's getting worrying for writers. I never thought I was in danger of machines taking over my job.

ChatGPT is still kind of ... so-so. It can very confidently give you bullshit answers that are clearly and verifiably wrong. And the way these AIs work, there's a degree of "black box" to how they come up with specific outputs; they're too complicated to really be traced from the outside, so basically all they can really do about that is try and train it more and come up with algorithmic guard rails to try and keep it to the truth rather than to truthiness.

Of course, constant work is going into these things, so who knows where it'll end up.

19 minutes ago, Ran said:

ChatGPT is still kind of ... so-so. It can very confidently give you bullshit answers that are clearly and verifiably wrong. And the way these AIs work, there's a degree of "black box" to how they come up with specific outputs; they're too complicated to really be traced from the outside, so basically all they can really do about that is try and train it more and come up with algorithmic guard rails to try and keep it to the truth rather than to truthiness.

Of course, constant work is going into these things, so who knows where it'll end up.

ChatGPT is really more of a language synthesiser than a truth machine, from what I understand. It's very good at understanding speech patterns, but has no idea what it's talking about.


1 hour ago, Heartofice said:

ChatGPT is really more of a language synthesiser than a truth machine, from what I understand. It's very good at understanding speech patterns, but has no idea what it's talking about.

That's a little harsh. I'd say like 70% of the time or more it'll give an acceptable answer. Sometimes it just gets confused and starts spewing nonsense, but not more often than your average human in my experience. 


31 minutes ago, RumHam said:

That's a little harsh. I'd say like 70% of the time or more it'll give an acceptable answer. Sometimes it just gets confused and starts spewing nonsense, but not more often than your average human in my experience. 

I’m not sure why it isn’t just honest when it doesn’t know. It seems to have been programmed to just make up shit if it strays beyond the data it’s been fed.

Anyway … I didn’t mean to start the tangent, I’m just always down for anything that makes fun of the complete shit show of a plot that is the prequels. I like how Obi-Wan covers for Jango, “we knew he had too much pride to just shoot her from long range”…


4 minutes ago, DaveSumm said:

I’m not sure why it isn’t just honest when it doesn’t know. It seems to have been programmed to just make up shit if it strays beyond the data it’s been fed.

Anyway … I didn’t mean to start the tangent, I’m just always down for anything that makes fun of the complete shit show of a plot that is the prequels. I like how Obi-Wan covers for Jango, “we knew he had too much pride to just shoot her from long range”…

Because it gets its information from the internet, where there is someone stating some bullshit as actual fact about everything. It can't do a good job of separating opinion from fact from outright lies. The more people agree on something, the better it'll be on that subject, but it literally can't tell when it shouldn't know about a subject.

It isn't making things up. It's just repeating pieces of things it has already seen.
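[Editor's illustration of the "repeating pieces it has already seen" point: a toy sketch, nothing like a GPT-scale model. It's just a word-level bigram chain, and the training text and output below are invented for illustration, but it shows how stitching together fragments the model has genuinely seen can still produce a fluent sentence that is false:]

```python
import random

# Toy bigram "language model": it learns only which word tends to follow
# which, then stitches fragments of its training text back together.
# (Illustrative only -- real LLMs are vastly more sophisticated.)
corpus = (
    "garcia and antonsson wrote the world of ice and fire END "
    "george martin wrote the official coloring book END"
)
words = corpus.split()

# Record every observed word-to-word transition.
chain = {}
for a, b in zip(words, words[1:]):
    chain.setdefault(a, []).append(b)

out = ["garcia"]
while out[-1] != "END" and len(out) < 15:
    out.append(random.choice(chain[out[-1]]))
print(" ".join(out[:-1]))
# One possible output:
#   garcia and antonsson wrote the official coloring book
# Fluent and locally plausible, but false -- yet every piece of it was
# "already seen"; the model only recombined them.
```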


54 minutes ago, RumHam said:

That's a little harsh. I'd say like 70% of the time or more it'll give an acceptable answer. Sometimes it just gets confused and starts spewing nonsense, but not more often than your average human in my experience. 

What I mean is that it literally does not understand what it’s talking about. It knows where to get the info and almost how to answer the question you asked it, but it’s not like it’s actually comprehending the concepts behind anything it’s talking about.


From ChatGPT:


In addition to their work on westeros.org, Garcia and Antonsson have written several books about the world of Westeros, including "The World of Ice and Fire" and "The Official A Song of Ice and Fire Coloring Book." These books offer a deeper look at the world of Westeros and its characters, and they provide fans with a wealth of new information and insights into Martin's complex and richly imagined universe.

Uh....

I wonder how it even makes that mistake? I feel like maybe our names got associated with the title at some bookseller by mistake and it "learned" that incorrect factoid from that.


7 minutes ago, Ran said:

From ChatGPT:

Uh....

I wonder how it even makes that mistake? I feel like maybe our names got associated with the title at some bookseller by mistake and it "learned" that incorrect factoid from that.

Maybe google your names + the book title and see what comes up? 

Has anyone tried feeding it the novels and having it try to finish Winds of Winter?


13 minutes ago, Ran said:

From ChatGPT:

Uh....

I wonder how it even makes that mistake? I feel like maybe our names got associated with the title at some bookseller by mistake and it "learned" that incorrect factoid from that.

I finished that; it required a lot of red crayon.


6 minutes ago, RumHam said:

Maybe google your names + the book title and see what comes up? 

Nothing obvious, but maybe the fact that TWoIaF gets mentioned on the Coloring Book pages led it to assume that we were involved, even though (on Amazon, Google Books, etc.) the coloring book lists only GRRM as "author".
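[Editor's illustration of that co-occurrence guess: a minimal, purely hypothetical sketch. The "retailer pages" below are invented, and real training pipelines are far more complex, but if a model's notion of "associated" amounts to "appears near", two hops of page co-occurrence are enough to link the names to the coloring book via TWoIaF:]

```python
from collections import Counter
from itertools import combinations

# Hypothetical retailer pages (invented for illustration).
pages = [
    {"Garcia", "Antonsson", "The World of Ice and Fire"},
    {"GRRM", "Coloring Book", "The World of Ice and Fire"},  # TWoIaF mentioned on the coloring book page
]

# Count how often each pair of names/titles shares a page.
cooc = Counter()
for page in pages:
    for a, b in combinations(sorted(page), 2):
        cooc[(a, b)] += 1

def neighbors(x):
    """Everything that ever shared a page with x."""
    return {a if b == x else b for (a, b) in cooc if x in (a, b)}

direct = neighbors("Coloring Book")
two_hop = set().union(*(neighbors(n) for n in direct)) - {"Coloring Book"}
print(two_hop)  # includes 'Garcia' and 'Antonsson' -- an "association"
                # created purely by page co-occurrence, not authorship.
```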


25 minutes ago, Ran said:

I wonder how it even makes that mistake?

I listened to the Sean Carroll AMA podcast where someone asked ChatGPT (to pretend to be Sean) how he met his wife, and it made up a whole story of them meeting as grad students (they weren’t) in Chicago (where neither of them went to university). It also said his favourite pizza was margherita, even pulling a ‘quote’ about why he liked it. All completely made up.

Maybe it was the ‘pretend to be Sean’ part, as those are obviously things Sean should know. But it still feels like it just wings it pretty hard when it doesn’t know something.

(Did we have a thread for this? I couldn’t find it…)


6 minutes ago, DaveSumm said:

I listened to the Sean Carroll AMA podcast where someone asked ChatGPT (to pretend to be Sean) how he met his wife, and it made up a whole story of them meeting as grad students (they weren’t) in Chicago (where neither of them went to university). It also said his favourite pizza was margherita, even pulling a ‘quote’ about why he liked it. All completely made up.

Maybe it was the ‘pretend to be Sean’ part, as those are obviously things Sean should know. But it still feels like it just wings it pretty hard when it doesn’t know something.

(Did we have a thread for this? I couldn’t find it…)

I think it’s because it understands the question and the sort of answers usually given for it; that’s what’s so clever about it, it’s a conversation replicator. I think someone described it as a super clever autocomplete … it’s not Skynet, even if it looks like it.
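[Editor's illustration of the "super clever autocomplete" framing, tied back to the pizza anecdote above: a toy sketch of one decoding step. The candidate words and their scores are invented here; a real model derives them from billions of learned weights. The point is that the model just turns scores over possible next words into probabilities and emits the likeliest, with no truth check anywhere:]

```python
import math

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw scores into a probability distribution."""
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Prompt: "Sean Carroll's favourite pizza is ..."
# Invented scores for candidate next words.
candidates = {"margherita": 2.1, "pepperoni": 1.4, "Hawaiian": 0.3}

probs = softmax(candidates)
best = max(probs, key=probs.get)
print(best, round(probs[best], 2))  # margherita 0.6
# "margherita" wins only because it scores as the likeliest continuation
# of sentences shaped like this. Nothing looks up what Sean actually
# likes, and "I don't know" would just be another candidate that has to
# out-score the confident-sounding ones.
```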


This topic is now closed to further replies.