
Artificial Intelligence


fionwe1987

I try not to give the topic much head-space time because, being an artist, the AI art generators really pissed me off, but I'm kind of with TrueMetis on this.

---

On the occasions I'm more charitable, because I'm distracted by actually important things or whatever, when people start talking about sentience/consciousness in relation to AI - not going to lie - shit makes me chuckle.

Ask ChatGPT to describe itself [I don't fuck with it myself] and it'll probably spit out some gobbledegook about what it was designed to do. Ask it to describe its surroundings? Hell, that would likely draw a blank, if not a lie. Intelligent, sure, but can you arguably be conscious without experiencing other individuals or your surroundings in relation to self?

Experiment: Isolate a raw AI in some kind of sensory package. It's got all the algos or whatever nerdspeak, but no prior frames of reference. Give it some eyes. Give it some ears. Not sure where we're at with sniffers yet [or taste/touch] but bequeath it whatever facsimiles we have of those too and let it learn from its environment as if a babe.

Instead of pre-serving it everything we already know, read to it. Then teach it to read. Take it for a walk in a wagon and show it the local environ. What would it make of an excited puppy wet nosing it? Hell, make a bunch of them and let them interact with each other every now and then like a gaggle of children.

Would they ever evolve to the point of I?

I am.

I think.

I feel?

Would they develop different personalities like we do [or more primitive consciousnesses, even]? Might they get cliquey? Would anything actually move them?

---

 

So yeah, until then it's just mimicry and y'all can miss me with the whole sentient, conscious AI sheesh. :p  

          

Edited by JGP

6 hours ago, Kalbear said:

And you are not doing what you are programmed to?

I think Westworld said it best: it wasn't the discovery of incredibly complicated algorithms that enabled simulating people, it was the realization that people are far simpler than we thought.

As to the rest - is an ant showing interest when it goes for sweet food? Is a baby? 

 

Who programmed me?  


6 hours ago, fionwe1987 said:

LLMs often don't do what they're programmed to do. This is because they're programmed to give out stochastic results, and also programmed to "not be racist", for instance. Again, none of this implies they're necessarily sentient or conscious. But they are not "bound" by their programming in the way typical computer programs are.

They are bound by the results of what they find from the data they are programmed to pull. The fact that they cannot perceive “racism” or be taught to perceive “racism” is, again, a strike against actual consciousness/sentience.

Edited by Ser Scot A Ellison

1 hour ago, Ser Scot A Ellison said:

They are bound by the results of what they find from the data they are programmed to pull.

Again, this is absolutely not true. They do not pull data when they function. You can download smaller models to your computer yourself, disconnect the internet, give them access to no files in your computer except the files that the model comes with, and test this out yourself. 
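
For what it's worth, this is roughly what that test looks like. A rough sketch, assuming you've already downloaded a small model through the Hugging Face transformers library (distilgpt2 here is just an example):

```python
import os

# Tell the Hugging Face libraries not to touch the network at all.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import pipeline

# distilgpt2 stands in for any small model already sitting in your local cache;
# with the flags above, nothing gets fetched from the internet.
generator = pipeline("text-generation", model="distilgpt2")

# The reply comes purely from the model's frozen weights. It isn't "pulling"
# data from the web or from other files on this machine.
print(generator("The capital of France is", max_new_tokens=20)[0]["generated_text"])
```

A model that small answers badly, but it answers, with the network cable unplugged.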

1 hour ago, Ser Scot A Ellison said:

The fact that they cannot perceive “racism” or be taught to perceive “racism” is, again, a strike against actual consciousness/sentience.

They can be taught. They just won't do it 100% of the time, or do it accurately all the time. They'll especially miss new and sneaky ways people can be racist that they may not have encountered in their training. But isn't this true of humans, too?
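
If it helps, here's one cheap version of what "taught" can look like, sketched with the Hugging Face zero-shot classification pipeline (the model, the example sentence, and the labels are just mine for illustration). There's no hard-coded racism rule anywhere; the model scores the text against labels it's handed at run time, which is exactly why it misses sometimes:

```python
from transformers import pipeline

# A general-purpose model repurposed as a classifier at run time;
# facebook/bart-large-mnli is the usual default for this pipeline.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "People from that country are all lazy and dishonest.",  # made-up example
    candidate_labels=["racist or bigoted", "neutral"],
)

# Top label and its score: a probabilistic judgment, not a rule firing.
print(result["labels"][0], result["scores"][0])
```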

Again, I really don't think they're sentient or conscious. But the reasons you bring up are not why. 

Edited by fionwe1987

38 minutes ago, fionwe1987 said:

They do not pull data when they function. You can download smaller models to your computer yourself, disconnect the internet, give them access to no files in your computer except the files that the model comes with, and test this out yourself. 

Huh?  How can a learning model system learn when not pulling data?  Let me put it another way.  Is the algorithm synthesizing new information or providing analysis of existing information based upon its mimicry of other analysis it can perceive?  Can these programs ask questions that haven’t been asked before?

Edited by Ser Scot A Ellison

5 minutes ago, Ser Scot A Ellison said:

Huh?  How can a learning model system learn when not pulling data?  

They do not simultaneously learn and function. There's the training period, when they do indeed have access to tons of text data. Then there's refinement of the model with human feedback. By the time you interact with it and ask it questions, the model is locked, and no longer "learning". Nor does it have access to training data, just what it learned from the training data.
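
A toy illustration of that split, in PyTorch. Nothing here resembles a real LLM; the tiny model and made-up data are purely to show the shape of the two phases:

```python
import torch
import torch.nn as nn

# A toy "model" with learnable weights.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Training phase: the weights get nudged to fit the training data.
for _ in range(100):
    x = torch.randn(32, 10)          # stand-in for training data
    y = x.sum(dim=1, keepdim=True)   # stand-in for the "right answers"
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# Inference phase: the model is locked. No gradients, no weight updates,
# and the training data above is never consulted again.
model.eval()
with torch.no_grad():
    print(model(torch.randn(1, 10)))
```

Everything the second half produces comes from whatever got baked into the weights during the first half.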

5 minutes ago, Ser Scot A Ellison said:

Let me put it another way.  Is the algorithm synthesizing new information or providing analysis of existing information based upon its mimicry of other analysis it can perceive?  Can these programs ask questions that haven’t been asked before?

They can certainly synthesize new information. That's kinda how they end up "hallucinating" or making mistakes. There are examples galore of these models confidently assigning companies revenue and profit figures that are nowhere to be found in any real source, for instance. Or making up names and titles, or inventing historical events.

And yes, they can ask questions that haven't been asked before (and were not in their training data), if what you prompt them to do is ask questions. 


AI has also been providing designs for new drug molecules (and not just any random molecules, but ones that can actually be synthesized) which humans have not come up with yet. I have to say, I am puzzled by this train of thought. There is nothing mystical or magical about the ways humans derive these things either, and any time you can articulate a thought process, you should hypothetically be able to simulate it.


58 minutes ago, fionwe1987 said:

And yes, they can ask questions that haven't been asked before (and were not in their training data), if what you prompt them to do is ask questions. 

That’s interesting.  Are the questions random gobbledygook that don’t pertain to the subject or are they genuinely insightful questions that come from curiosity and inspiration?  

ETA:

This is not to say humans don’t ask gobbledygook questions too.

Edited by Ser Scot A Ellison

2 hours ago, Ser Scot A Ellison said:

Who programmed me?  

A massive amount of training data - trillions of images combined with sound, chemical data, physical data and reinforcement training data. 


Just now, Kalbear said:

A massive amount of training data - trillions of images combined with sound, chemical data, physical data and reinforcement training data. 

So… no people programmed me.  Life programmed me and everything else that is alive under your logic.


2 minutes ago, Ser Scot A Ellison said:

So… no people programmed me.  Life programmed me and everything else that is alive under your logic.

By that token most LLMs are alive too - no one programmed them in that way; they were just fed reams of training data.

Just like you.

Now, if you want to ask who developed the framework for how you can be taught, that'd be millions of iterations and mutations across a billion years of change. And that would be different from AIs only in time frames, not behaviors - many AI developments start with a base model and then are allowed to evolve as well.

I honestly don't see the relevance of this path of questioning. Are you trying to say that only living creatures can ever be intelligent by definition? Or are you saying that only certain types of input are acceptable to determine sentience, and if you don't get them that way you are not sentient?


6 hours ago, JGP said:

 

Instead of pre-serving it everything we already know, read to it. Then teach it to read. Take it for a walk in a wagon and show it the local environ. What would it make of an excited puppy wet nosing it? Hell, make a bunch of them and let them interact with each other every now and then like a gaggle of children.

Would they ever evolve to the point of I?

I am.

I think.

I feel?

Would they develop different personalities like we do [or more primitive consciousnesses, even]? Might they get cliquey? Would anything actually move them?

---

 

So yeah, until then it's just mimicry and y'all can miss me with the whole sentient, conscious AI sheesh. :p  

          

So the short answer is yes. When we have done experiments with various AI models, some of the things they do are pretty remarkable. They will do things like functionally invent new languages, or new language uses, for talking with each other, and sometimes specific language meanings for talking with specific other AI units.

That said - all of the things you mentioned are very ape-specific. Cliques, in-groups - those are things we see in all social mammals; they derive from specific moral centers in the brain, are universal to apes and humans, and can be turned off or broken by divergent brain types, damage, or drugs. Being a snarky little asshat surely is not what defines intelligence?


24 minutes ago, Kalbear said:

By that token most LLMs are alive too - no one programmed them in that way; they were just fed reams of training data.

Just like you.

Now, if you want to ask who developed the framework for how you can be taught, that'd be millions of iterations and mutations across a billion years of change. And that would be different from AIs only in time frames, not behaviors - many AI developments start with a base model and then are allowed to evolve as well.

I honestly don't see the relevance of this path of questioning. Are you trying to say that only living creatures can ever be intelligent by definition? Or are you saying that only certain types of input are acceptable to determine sentience, and if you don't get them that way you are not sentient?

I’m saying there are certain behaviors I would expect of sentient/conscious beings.  If AI is engaging in these behaviors without our prompting (I do see the catch) I’m willing to re-evaluate whether “AI” as it exists today is conscious. 


1 hour ago, IheartIheartTesla said:

AI has also been providing designs for new drug molecules (and not just any random molecules, but ones that can actually be synthesized) which humans have not come up with yet. I have to say, I am puzzled by this train of thought. There is nothing mystical or magical about the ways humans derive these things either, and any time you can articulate a thought process, you should hypothetically be able to simulate it.

Yep. And to me, these are all signs of intelligence. Just that, though. 

56 minutes ago, Ser Scot A Ellison said:

That’s interesting.  Are the questions random gobbledygook that don’t pertain to the subject or are they genuinely insightful questions that come from curiosity and inspiration?  

I have no idea where they come from. No one does. But you can definitely get genuinely insightful and interesting questions pertaining to a subject. And also gobbledygook. 

56 minutes ago, Ser Scot A Ellison said:

ETA:

This is not to say humans don’t ask gobbledygook questions too.

Right, which is why none of this says this iteration of AI is human-level intelligent. And definitely, any statements about sentience or consciousness are blather at this point, too. But they're certainly intelligent, in a meaningful way, well past the kind of semantic, rules-based AI we used to have before this past decade.


17 minutes ago, Ser Scot A Ellison said:

I’m saying there are certain behaviors I would expect of sentient/conscious beings.  If AI is engaging in these behaviors without our prompting (I do see the catch) I’m willing to re-evaluate whether “AI” as it exists today is conscious. 

I don't think anyone here is arguing that AI currently is conscious. Even I am merely questioning the value of assigning such a description when the notion is poorly understood.

But several of us are contending with the simplified portrayal of AI as "just a program".

Intelligence is not binary, where you either exhibit full human-like intelligence or you're not intelligent. There's a very large spectrum of behaviors which are considered "intelligent". Regardless of the mechanism of AI, the result is that it is exhibiting some of those behaviors, as fionwe and Kalbear are stating.


34 minutes ago, IFR said:

I don't think anyone here is arguing that AI currently is conscious. Even I am merely questioning the value of assigning such a description when the notion is poorly understood.

But several of us are contending with the simplified portrayal of AI as "just a program".

Intelligence is not binary, where you either exhibit full human-like intelligence or you're not intelligent. There's a very large spectrum of behaviors which are considered "intelligent". Regardless of the mechanism of AI, the result is that it is exhibiting some of those behaviors, as fionwe and Kalbear are stating.

I think what confuses me is the acknowledgement of a lack of consciousness or sentience… but the claim that LLMs are more than simple programs.  Can they change their own algorithms?


11 minutes ago, Ser Scot A Ellison said:

I think what confuses me is the acknowledgement of a lack of consciousness or sentience… but the claim that LLMs are more than simple programs.  Can they change their own algorithms?

Not yet, though we're nipping at the heels of that. Already, there are successful attempts to use one LLM to fine-tune another. And LLMs can write code. We're not that far from LLMs tweaking themselves.

It should be noted that LLMs aren't composed of lines of code in the traditional sense. They're black boxes. I cannot take an LLM's code, read it, and make any kind of sense of what it will do.
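
To put rough numbers on "black box" (again a sketch, using a locally downloaded distilgpt2 via the transformers library as a stand-in for a real LLM): what actually sits on disk is a giant pile of floating-point weights, not logic anyone can read.

```python
from transformers import AutoModelForCausalLM

# distilgpt2 is just a small example checkpoint.
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Tens of millions of numbers for this tiny model; billions for a modern LLM.
print(sum(p.numel() for p in model.parameters()))

# Staring at any slice of them tells you nothing about what the model will say.
print(model.transformer.h[0].attn.c_attn.weight[0, :5])
```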

Certainly, no one can predict exactly how an LLM will behave. ChatGPT, for instance, got "lazy" briefly, a few months ago. This happened without any update to the code itself. People noticed it was giving briefer answers that were less thorough. I noticed this myself.

Some folks noticed this happens right around the start of daylight savings. And there was some data showing that if you fooled the LLM into thinking it was summer, it did better. 

Dunno if that held up. OpenAI doesn't reveal what tweaks they make to their model or how they resolve such issues. But the very fact that LLMs have behavior we have to infer and that we can probe, but not diagnose by reading some code, should tell you this is a different beast than any old computer program.

LLMs do not have constrained behavior, in the sense that they are not bound to produce specific types of text based on coded rules. And indeed, the same prompt can result in wildly different responses at different times.
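
Part of that is simply that generation is usually sampled rather than deterministic, separate from any deeper mystery about the weights. A quick sketch, again with a local distilgpt2 as the example model:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

# With sampling on, the same prompt produces a different completion each run.
for _ in range(3):
    out = generator(
        "The strangest thing about language models is",
        max_new_tokens=15,
        do_sample=True,    # sample from the output distribution
        temperature=0.9,   # higher temperature = more variety
    )
    print(out[0]["generated_text"])
```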

The patterns an LLM sees in its training data are multidimensional to the point of incomprehensibility to us in any currently used language. Which is why controlling them is so damned difficult. And why not wholly unintelligent people sometimes lose their minds and see the ghosts of sentience in them. They do inexplicable things.

 


7 minutes ago, fionwe1987 said:

Not yet, though we're nipping at the heels of that. Already, there are successful attempts to use one LLM to fine-tune another. And LLMs can write code. We're not that far from LLMs tweaking themselves.

It should be noted that LLMs aren't composed of lines of code in the traditional sense. They're black boxes. I cannot take an LLM's code, read it, and make any kind of sense of what it will do.

Certainly, no one can predict exactly how an LLM will behave. ChatGPT, for instance, got "lazy" briefly, a few months ago. This happened without any update to the code itself. People noticed it was giving briefer answers that were less thorough. I noticed this myself.

Some folks noticed this happens right around the start of daylight savings. And there was some data showing that if you fooled the LLM into thinking it was summer, it did better. 

Dunno if that held up. OpenAI doesn't reveal what tweaks they make to their model or how they resolve such issues. But the very fact that LLMs have behavior we have to infer and that we can probe, but not diagnose by reading some code, should tell you this is a different beast than any old computer program.

LLMs do not have constrained behavior, in the sense that they are not bound to produce specific types of text based on coded rules. And indeed, the same prompt can result in wildly different responses at different times.

The patterns an LLM sees in its training data are multidimensional to the point of incomprehensibility to us in any currently used language. Which is why controlling them is so damned difficult. And why not wholly unintelligent people sometimes lose their minds and see the ghosts of sentience in them. They do inexplicable things.

 

That is interesting.  Thank you for the explanation.

