Artificial Intelligence


fionwe1987

3 hours ago, Kalbear said:

So the short answer is yes. When we have done experiments with various AI models, some of the things they do are pretty remarkable. They will do things like functionally inventing new languages or language uses for talking with each other, and sometimes specific language meanings for talking with specific other AI units.

Interesting shit I was entirely unaware of.

 

3 hours ago, Kalbear said:

That said - all of the things you mentioned are very ape-specific. Cliques, in-groups - those are things we see in all social mammals; they derive from specific moral centers in the brain, are universal to apes and humans, and can be turned off or broken by divergent brain types, damage, or drugs. Being a snarky little asshat surely is not what defines intelligence?

Wasn't talking about intelligence at all, Kal, or about outliers/loners. As for cooperative behavior, it's not restricted to mammals, though therein one might find its highest expression, i.e. herds, packs, etc.

 

Edited to add, because I was thinking about it while in a PvP match: could you explain your thinking on how a moral center is the origin of group behavior?

Edited by JGP

4 hours ago, Jace, Extat said:

So are you saying that the variance in persons amounts simply to differences in training data? Or are the differences in the platforms?

Both. And we have very good experimental data to back that up, both in humans (longitudinal studies, twin studies, cultural comparisons, MRIs of moral brain centers) and in apes and other mammals. 

Which is also what we see with different AI models - both what they are and how they are trained significantly changes their exhibited behavior. 

4 hours ago, Ser Scot A Ellison said:

I’m saying there are certain behaviors I would expect of sentient/conscious beings.  If AI is engaging in these behaviors without our prompting (I do see the catch) I’m willing to re-evaluate whether “AI” as it exists today is conscious. 

And what are those behaviors you're expecting? 

Another one of the most important discoveries in human behavior is that we almost never make a decision and then act on it; rather, we do something first and rationalize it afterwards. I linked one example of this, but it's a very well-understood human phenomenon.

The idea that you just go and 'do' something without input is absurd on its face, too. You are getting massive amounts of input regularly, and are acting on impulses you probably aren't even aware of, or that your brain actively hides from you.

I guess the other question would be - as an outside observer, how would I be able to tell the difference between an AI that has no consciousness but looks like it does, and you? How could I tell that you have consciousness and you weren't just responding? 


1 hour ago, JGP said:

Interesting shit I was entirely unaware of.

Yeah, it's pretty neat!
https://www.independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html

 

1 hour ago, JGP said:

Wasn't talking about intelligence at all, Kal, or about outliers/loners. As for cooperative behavior, it's not restricted to mammals, though therein one might find its highest expression, i.e. herds, packs, etc.

Edited to add, because I was thinking about it while in a PvP match: could you explain your thinking on how a moral center is the origin of group behavior?

So humans (and most apes that we've studied) have specific moral centers of their brain that are associated with core emotions. Note that these are centers, not a center; there are multiple ones. The emotions that all neurotypical humans and apes possess are:

  • joy
  • fear
  • disgust
  • surprise
  • anger
  • sadness

(Yes, this is what Inside Out had as well, which is because they modeled the whole thing on real psychological and sociological research.) These, in turn, correspond to specific universal morals:

  • ingroup
  • authority
  • purity
  • fairness
  • helping

And we can see those emotional centers light up in the brain when those kinds of moral choices are engaged with as well. 

Now, how do these factor into social animals? I'll link my favoritest video of all time:

In order for a social animal to exist, some intrinsic moral behavior has to exist for that society. You want to feel part of the tribe and get joy from doing that, so you don't set out on your own and you want to be around others. You need to respect hierarchies and be worried about consequences of not listening to them. You need to cast out the other parts of the tribe that are dangerous to it when they are diseased. You need to want to help other parts of the tribe when they're hurt (and you in turn need them to help you when you need it). And, as above, you need to fight back when members of the tribe treat you unfairly. 

Now, some of these also have value for individuals - anger is a useful one for dealing with life-and-death situations, purity is useful for dealing with things you shouldn't consume, fear is useful for being cautious. But ingroup and help/harm are pretty much found in social animals only (though pretty much all mammals have them to some extent, even ones that are largely solitary, probably to help with taking care of young).

Aaaand...it turns out that the people we describe as sociopaths? They have virtually none of those specific social-only moral values. They have fear aplenty, they have purity aplenty, and they often have anger - but that ingroup and help/harm stuff? Nope, they don't get it at all. 

Anyway, sorry this was long and might not have been what you were looking for, but my tl;dr point here is that a whole lot of the behaviors around wanting to form ingroups or cliques and have specific words and lingo for your group? That's just monkey tribal stuff and shouldn't be considered particularly relevant for measuring the intelligence of an AI. Probably a good thing; the last thing we want is AIs looking at each other as tribe members and humans as not. 


53 minutes ago, Kalbear said:

Anyway, sorry this was long and might not have been what you were looking for, but my tl;dr point here is that a whole lot of the behaviors around wanting to form ingroups or cliques and have specific words and lingo for your group? That's just monkey tribal stuff and shouldn't be considered particularly relevant for measuring the intelligence of an AI. Probably a good thing; the last thing we want is AIs looking at each other as tribe members and humans as not. 

Nah man, was an enlightening read.

I agree emotional responses and in-group/out-group aren't a good yardstick to plumb intelligence; what I'm disagreeing with is the idea [general] that you might be considered conscious and/or sentient without them [amongst other things].


34 minutes ago, JGP said:

Nah man, was an enlightening read.

I agree emotional responses and in-group/out-group aren't a good yardstick to plumb intelligence; what I'm disagreeing with is the idea [general] that you might be considered conscious and/or sentient without them [amongst other things].

Ah, okay. 

And yeah, I disagree with that. I suspect an octopus would be very poor at ingroup behaviors and would not care at all about cooperation or helping/harm, but I know it scores quite highly at problem solving and other behaviors. Psychopaths are often quite intelligent and also entirely unable to form real, meaningful relationships - or worse, they only ape them to gain trust and value in whatever way serves them best. 


3 hours ago, Kalbear said:

Both. And we have very good experimental data to back that up, both in humans (longitudinal studies, twin studies, cultural comparisons, MRIs of moral brain centers) and in apes and other mammals. 

Which is also what we see with different AI models - both what they are and how they are trained significantly changes their exhibited behavior. 

 

Very interesting. 

What about divergent outcomes? Do we know if the platform can corrupt input, or if input can corrupt the platform? I would suggest that both need to be possible for true intelligence. If the platform is truly a thinking thing, then it should be able to be altered without being aware of it, and to alter itself actively. Does that make any sense? 

 

Where does madness originate? Why does God allow suffering? Where is my good bra? When is HotD coming back???

These are the things I need to know! 

3 hours ago, Kalbear said:

 

 

In order for a social animal to exist, some intrinsic moral behavior has to exist for that society...

You need to respect hierarchies and be worried about consequences of not listening to them. You need to cast out the other parts of the tribe that are dangerous to it when they are diseased...

Wait a minute, are we talking about Jaces or monkeys right now?


https://www.theguardian.com/technology/2024/jan/08/ai-tools-chatgpt-copyrighted-material-openai

Quote

The developer OpenAI has said it would be impossible to create tools like its groundbreaking chatbot ChatGPT without access to copyrighted material, as pressure grows on artificial intelligence firms over the content used to train their products.

No shit, Sherlock.

Quote

“Because copyright today covers virtually every sort of human expression – including blogposts, photographs, forum posts, scraps of software code, and government documents – it would be impossible to train today’s leading AI models without using copyrighted materials,” said OpenAI in its submission, first reported by the Telegraph.

It added that limiting training materials to out-of-copyright books and drawings would produce inadequate AI systems: “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.”

So to meet the needs of today's citizens, and to generate the large valuation and profit that you do, you need the copyrighted output of artists, journalists, scientists, and lawyers. Doesn't that prove that the value you generate depends on their output, and thus that they should get a substantial share of the profit you make?

 


On 1/7/2024 at 12:44 AM, Ser Scot A Ellison said:

Isn’t that going to be taken up by the SCOTUS?

Yes, but if corporations are people, I shudder to think what this SCOTUS will say about intelligences trained by corporations. 


2 hours ago, Kalbear said:

I can't be a millionaire without being able to rob a bank, so I should be legally permitted to do that.

Also, if I'm not allowed to rob banks, I can only produce counterfeit notes, which do not meet the needs of today’s citizens. 

Edited by fionwe1987

2 hours ago, fionwe1987 said:

https://www.theguardian.com/technology/2024/jan/08/ai-tools-chatgpt-copyrighted-material-openai

No shit, Sherlock.

So to meet the needs of today's citizens, and to generate the large valuation and profit that you do, you need the copyrighted output of artists, journalists, scientists, and lawyers. Doesn't that prove that the value you generate depends on their output, and thus that they should get a substantial share of the profit you make?

 

I agree that this is just about the silliest argument I have come across lately.  There are hundreds of thousands of works out of copyright and in the public domain available to use if needed. 


16 minutes ago, maarsen said:

I agree that this is just about the silliest argument I have come across lately.  There are hundreds of thousands of works out of copyright and in the public domain available to use if needed. 

They're not wrong that such LLMs would be less impressive. But the reason for that is that a lot of the impressiveness of LLMs seems to be tied to the size of their training dataset, and to the fact that, as the NYT case is showing and as the IEEE article I linked showed, some of this training data leaks into the responses in a way that is not easily resolved.

But that's all the more reason to share the profits, as far as I'm concerned. Or build LLM architectures that are better able to extract intelligence from public domain works. But that's easier said than done, and it's so much easier to violate copyright and make money now, instead! 
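To make "leaks into the responses" a bit more concrete, here's a minimal sketch of the kind of check involved: look for long verbatim word spans shared between a source text and a model's output. This is my own illustration in plain Python, not anything from the NYT filings or the IEEE article; the article and response strings are placeholders, and no real model or dataset is involved.

# Minimal, hypothetical sketch: what "training data leaking into responses"
# looks like operationally. All strings below are placeholders, not real data.

def shared_ngrams(source: str, output: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return word n-grams that appear verbatim in both texts."""
    def ngrams(text: str) -> set[tuple[str, ...]]:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(source) & ngrams(output)

# Hypothetical usage: a non-trivial number of long shared spans suggests the
# model memorized and regurgitated training text rather than paraphrasing it.
article_text = "..."      # e.g. a paywalled article the model may have trained on
model_response = "..."    # the chatbot's answer to a related prompt
overlap = shared_ngrams(article_text, model_response)
print(f"{len(overlap)} verbatim 8-word spans shared with the source")

Real memorization audits are more sophisticated than this, but the basic idea is the same: verbatim overlap of a length that paraphrasing would not produce.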


Speaking from the science side, many companies have enormous amounts of internal data that the public never sees, and there is no way that data will find its way into some open-source AI. It's still a new field, but you might find that insights from proprietary AI slowly find their way into the patent literature. I am particularly thinking of pharma companies and the new molecules that may be designed.


2 hours ago, fionwe1987 said:

They're not wrong that such LLMs would be less impressive. But the reason for that is that a lot of the impressiveness of LLMs seems to be tied to the size of their training dataset, and to the fact that, as the NYT case is showing and as the IEEE article I linked showed, some of this training data leaks into the responses in a way that is not easily resolved.

But that's all the more reason to share the profits, as far as I'm concerned. Or build LLM architectures that are better able to extract intelligence from public domain works. But that's easier said than done, and it's so much easier to violate copyright and make money now, instead! 

The intelligence in the works of Shakespeare, Blake, Twain, Kipling and such is no less than that in anything copyrighted today. Personally I think it is just laziness. 


2 hours ago, maarsen said:

The intelligence in the works of Shakespeare, Blake, Twain, Kipling and such is no less than that in anything copyrighted today. Personally I think it is just laziness. 

It's not. If you're wanting to teach systems how to do, say, legal briefs or business documents, reading Shakespeare isn't particularly useful. If you're wanting to have it chat to people and ask how their problems are going, doing it in the manner of Kipling might be entertaining but won't actually be very good. 

It's also tough because training isn't the problem. We train ourselves on these works all the time. We learn from all the books we read, the stories we see, and I'm sure that writers and directors learn from what they see and do things differently after that too. The problem is creating original works vs. derivative works, where that line is, and how to protect people from crossing it. And, honestly, how to ensure that creators of things have the ability to restrict their use in any way they see fit - which should include being used to train models. 


9 hours ago, fionwe1987 said:

Yes, but if corporations are people, I shudder to think what this SCOTUS will say about intelligences trained by corporations. 

That’s an oversimplification, and it ignores the fact that “corporations” have always been “people”… hence the term “corporations”.  What the SCOTUS did in Hobby Lobby (which I strongly disagree with) is allow the corporation’s shareholders to express their personal free speech rights… through the corporation.  

