Corvinus85 Posted February 8 (edited)
Two episodes of The Bad Batch today, and they are good ones. Seeing more of the Senate and the imperial machinations to get rid of the clones was great.
Spoiler: Liked Palpatine scapegoating the middle-management antagonist that was Rampart. But Crosshair was missed. I hope we get to see the consequences of these episodes from his perspective.
Jaxom 1974 Posted February 9
They're doing an interesting job of balancing the dark, gritty feel of where the Empire is during Andor and eventually ANH... while remembering that this is still a cartoon that children are likely to be watching.
Myrddin Posted February 10
I know I'm not the target audience for this show, but most episodes of The Bad Batch are just meh. Clone Force 99 are too childish, and since it's a kids' show, I accept that. But episode 7 ("Clone Conspiracy") is the exception (I haven't watched episode 8 yet). Very engaging episode. It reminds me of the Mando episode of BoBF: a subpar show with a superb "cameo" episode. Off to watch "Truth and Consequences" now.
DaveSumm Posted February 11
Ran Posted February 11
Jesus Christ. Vocal AI is becoming aaaaamazingly good.
Rhom Posted February 11
30 minutes ago, Ran said: Jesus Christ. Vocal AI is becoming aaaaamazingly good.
Really is. Other than one part where it seemed to jump from Guinness to McGregor somehow, it was very smooth.
Darryk Posted February 11 (edited)
Between this and ChatGPT it's getting worrying for writers. I never thought I was in danger of machines taking over my job.
Ran Posted February 11
1 hour ago, Darryk said: Between this and ChatGPT it's getting worrying for writers. I never thought I was in danger of machines taking over my job.
ChatGPT is still kind of ... so-so. It can very confidently give you bullshit answers that are clearly and verifiably wrong. And the way these AIs work, there's a degree of "black box" to how they come up with specific outputs; they're too complicated to really be traced from the outside. So basically all anyone can do about that is train them more and come up with algorithmic guard rails to try to keep them to the truth rather than to truthiness. Of course, constant work is going into these things, so who knows where it'll end up.
Heartofice Posted February 11
19 minutes ago, Ran said: ChatGPT is still kind of ... so-so. It can very confidently give you bullshit answers that are clearly and verifiably wrong. …
ChatGPT is really more of a language synthesiser than a truth machine, from what I understand. It's very good at understanding speech patterns, but has no idea what it's talking about.
RumHam Posted February 11
1 hour ago, Heartofice said: ChatGPT is really more of a language synthesiser than a truth machine, from what I understand. It's very good at understanding speech patterns, but has no idea what it's talking about.
That's a little harsh. I'd say 70% of the time or more it'll give an acceptable answer. Sometimes it just gets confused and starts spewing nonsense, but not more often than your average human, in my experience.
Kalnak the Magnificent Posted February 11
ChatGPT is basically what would happen if this forum was the source of all truth everywhere.
DaveSumm Posted February 11
31 minutes ago, RumHam said: That's a little harsh. I'd say 70% of the time or more it'll give an acceptable answer. …
I'm not sure why it isn't just honest when it doesn't know. It seems to have been programmed to just make up shit when it strays from the data it's been fed. Anyway ... didn't intend the tangent; I'm just always down for anything that makes fun of the complete shit show of a plot that is the prequels. I like how Obi-Wan covers for Jango: "we knew he had too much pride to just shoot her from long range"...
Kalnak the Magnificent Posted February 11
4 minutes ago, DaveSumm said: I'm not sure why it isn't just honest when it doesn't know. …
Because it gets its information from the internet, where someone is stating bullshit as actual fact about everything. It can't do a good job of separating opinion from fact from outright lies. The more people agree on something, the better it'll be on that subject, but it literally can't tell when it shouldn't know about a subject. It isn't making things up; it's just repeating pieces of things it has already seen.
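[Editor's note] The "agreement stands in for truth" point can be sketched in a few lines. This toy is nothing like a real language model's training pipeline — the questions, claims, and sources below are all invented for illustration — but it shows the failure mode: it answers with whatever claim appears most often, and truth never enters into it.

```python
from collections import Counter

# Mock "internet" snippets: several sources asserting a claim about a topic.
# All of this data is invented for illustration; some of it is wrong on purpose.
snippets = [
    ("capital of Australia", "Canberra"),
    ("capital of Australia", "Canberra"),
    ("capital of Australia", "Sydney"),  # a popular misconception
    ("coloring book author", "GRRM"),
    ("coloring book author", "Garcia and Antonsson"),  # a stray mislabel
]

def answer(question):
    """Return the most frequently asserted claim -- no fact-checking anywhere."""
    claims = Counter(claim for topic, claim in snippets if topic == question)
    if not claims:
        # It has no mechanism to notice it *shouldn't* know this.
        return None
    return claims.most_common(1)[0][0]

print(answer("capital of Australia"))  # majority wins: Canberra
```

One mislabeled bookseller page is enough to put a wrong claim into the pool, and if wrong claims ever outnumber right ones, the wrong answer wins with full confidence.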
Heartofice Posted February 11 (edited)
54 minutes ago, RumHam said: That's a little harsh. I'd say 70% of the time or more it'll give an acceptable answer. …
What I mean is that it literally does not understand what it's talking about. It knows where to get the info and roughly how to answer the question you asked it, but it's not actually comprehending the concepts behind anything it's talking about.
Ran Posted February 11
From ChatGPT:
Quote: In addition to their work on westeros.org, Garcia and Antonsson have written several books about the world of Westeros, including "The World of Ice and Fire" and "The Official A Song of Ice and Fire Coloring Book." These books offer a deeper look at the world of Westeros and its characters, and they provide fans with a wealth of new information and insights into Martin's complex and richly imagined universe.
Uh.... I wonder how it even makes that mistake? I feel like maybe our names got associated with the title at some bookseller by mistake and it "learned" that incorrect factoid from there.
RumHam Posted February 11
7 minutes ago, Ran said: Uh.... I wonder how it even makes that mistake? …
Maybe google your names + the book title and see what comes up? Has anyone tried feeding it the novels and having it try to finish Winds of Winter?
Kalnak the Magnificent Posted February 11
13 minutes ago, Ran said: Uh.... I wonder how it even makes that mistake? …
I finished that, it required a lot of red crayon.
Ran Posted February 11
6 minutes ago, RumHam said: Maybe google your names + the book title and see what comes up?
Nothing obvious, but maybe the fact that TWoIaF gets mentioned on the Coloring Book pages led it to assume that we were involved, even though (on Amazon, Google Books, etc.) the coloring book lists only GRRM as "author".
DaveSumm Posted February 11
25 minutes ago, Ran said: I wonder how it even makes that mistake?
I listened to the Sean Carroll AMA podcast where someone asked ChatGPT (pretending to be Sean) how he met his wife, and it made up a whole story of them meeting as grad students (they weren't) in Chicago (where neither of them went to university). It also said his favourite pizza was margherita, even pulling a 'quote' about why he liked it. All completely made up. Maybe it was the 'pretend to be Sean' part, as those are obviously things Sean should know. But it still feels like it just wings it pretty hard when it doesn't know something. (Did we have a thread for this? I couldn't find it...)
Heartofice Posted February 11
6 minutes ago, DaveSumm said: I listened to the Sean Carroll AMA podcast where someone asked ChatGPT (pretending to be Sean) how he met his wife, and it made up a whole story … 
I think it's because it understands the question and the sort of answers usually given for it; that's what's so clever about it, it's a conversation replicator. I think someone described it as a super clever Auto Complete... it's not Skynet, even if it looks like it.
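[Editor's note] The "super clever Auto Complete" description can be sketched with a toy bigram model — a drastic simplification (real LLMs predict tokens with neural networks over vast corpora), but the sample-the-next-word loop is the same shape. The tiny corpus below is made up for illustration.

```python
import random
from collections import defaultdict

# Tiny corpus standing in for the model's training data (invented for this sketch).
corpus = (
    "the jedi use the force and the sith use the dark side "
    "of the force and the force is strong"
).split()

# Bigram table: for each word, every word that followed it in the corpus.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Extend `start` by repeatedly sampling a word seen after the current one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = bigrams.get(words[-1])
        if not options:  # nothing was ever observed after this word
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Run it with different seeds: the output is always fluent-looking and locally plausible, because every adjacent pair of words was seen in training — but nothing anywhere checks whether a generated sentence is true, which is roughly the point being made above, scaled down enormously.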