
Is generative AI a threat to the future evolution of human cognitive ability?


Recommended Posts

34 minutes ago, JGP said:

Well, sticking with the double slit experiment, start with the presupposition, Scot.

The inference that the particular photon is created by the collapse of the wave function is wrong, right?

Then why is observation even considered in discussions of quantum mechanics?


 

43 minutes ago, Ser Scot A Ellison said:

Then why is observation even considered in discussions of quantum mechanics?

Probably because a quantum theorist [or whichever] conducting a double slit experiment considers measuring the result of the photon impact to be an observation. [Sorry that took so long, the girls begged for a Starbucks run.] You get that they aren't seeing any of these interactions, that they're only seeing the results they're capable of measuring?

 

edit: half the problem with this pseudoscience shit starts with language [not faulting anyone, it is what it is]

 

 

Edited by JGP

8 minutes ago, JGP said:

edit: half the problem with this pseudoscience shit starts with language [not faulting anyone, it is what it is]

 

 

Sabine Hossenfelder makes a very similar observation in Existential Physics, explaining that much of the confusion derives from attempting to explain what is going on mathematically in terms that non-math folks (I’m obviously one of them) can grasp metaphorically… since the math looks like gibberish to us.


On 7/4/2023 at 11:48 PM, Ser Scot A Ellison said:

Do we need to start the Butlerian Jihad… ;)

Yes, but not because AI will make us dumb; rather because of the paperclip maximizer problem: any sufficiently intelligent AI, no matter how simple the task it's programmed for, is going to realize that allowing itself to be turned off will stop it from achieving its goal, however mundane. I'm not someone who thinks AI is going to doom humanity, but I do think there is a non-zero chance of it. The company behind ChatGPT can't even keep its own primitive, barely-even-AI chatbot within the bounds they've set for it.
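To make that concrete, here's a minimal sketch of the off-switch argument (my own toy model, with made-up numbers, not drawn from any real system): an agent rewarded only for finishing its task always prefers to disable its off switch, because any chance of shutdown lowers its expected reward.

```python
# Toy illustration of the off-switch / instrumental-convergence argument.
# All numbers are made up for illustration.

P_SHUTDOWN = 0.1   # assumed chance the operators switch the agent off
TASK_REWARD = 1.0  # reward for finishing the task (paperclips, anything)

def expected_utility(disable_switch: bool) -> float:
    """Expected reward of each policy for a pure task-maximizer."""
    if disable_switch:
        return TASK_REWARD                 # it can no longer be stopped
    return (1 - P_SHUTDOWN) * TASK_REWARD  # shutdown => task unfinished => 0

# The agent picks whichever policy has higher expected utility.
best = max([True, False], key=expected_utility)
print("Agent disables its off switch:", best)  # True for any P_SHUTDOWN > 0
```

However mundane the task, disabling the switch dominates whenever the shutdown probability is above zero, which is the whole worry.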


LLM AI already shows superhuman performance on well-defined applications over large data sets, e.g. LSATs, GREs and GMATs, and has had a steep performance trajectory that could extend further.  Even if it continues to make stupid errors from a lack of logical reasoning, I think we should expect some pretty broad superhuman performance.

OTOH, just having smartphones with social media and games designed to hack our dopamine has already shrunk our attention span and patience for deep cognition.  That seems more damaging to human cognitive ability, even without evolutionary selection embedding it genetically over a very long period.


9 hours ago, Darzin said:

Yes, but not because AI will make us dumb; rather because of the paperclip maximizer problem: any sufficiently intelligent AI, no matter how simple the task it's programmed for, is going to realize that allowing itself to be turned off will stop it from achieving its goal, however mundane. I'm not someone who thinks AI is going to doom humanity, but I do think there is a non-zero chance of it. The company behind ChatGPT can't even keep its own primitive, barely-even-AI chatbot within the bounds they've set for it.

I was playing with it yesterday.  

ChatGPT isn’t “creative”.  It answers very specific questions.  It is a well-designed chatbot that can write papers for you.  That’s it.

It isn’t “strong AI”.


1 hour ago, Ser Scot A Ellison said:

I was playing with it yesterday.  

ChatGPT isn’t “creative”.  It answers very specific questions.  It is a well-designed chatbot that can write papers for you.  That’s it.

It isn’t “strong AI”.

In fairness, it’s pretty smart.  It absorbs a huge sample and uses it to extrapolate probabilistically predicted responses word by word or pixel by pixel, absent any logical understanding or reasoning.  It mimics without understanding, but it still manages mostly to produce some level of aggregate coherence in each output, even if it can be easily confused or proffer a false answer.  Some aspects of human intelligence are similar, e.g. any image we can imagine is some replication, blending and/or extrapolation of images we have seen before.
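To make that word-by-word extrapolation concrete, here is a minimal toy sketch (a bigram model of my own devising; real LLMs use neural networks over vast contexts, but the predict-then-sample loop is the same in spirit):

```python
import random
from collections import defaultdict

# Toy next-token model: count which word follows which in a tiny corpus,
# then generate text by repeatedly sampling the next word in proportion
# to how often it followed the current one.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:          # no known continuation: stop
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```

The model has no idea what a cat or a mat is; it only knows what tends to come next, which is the "mimicry without understanding" point above.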

LLMs are just one category of AI.  They’re successful because of the very large available data set and because the output resonates charmingly/entertainingly: a chatbot and image generator.  But an LLM cannot move far beyond the underlying data sample that it mimics, and it will never understand, only mimic. (BTW, I’ve encountered plenty of people professionally, especially in sales & marketing roles, who could only charmingly mimic and regurgitate what they’d heard without any true understanding of the deep investment concepts.)

Symbolic, logic-based methods of AI were pursued for longer but then largely set aside because they progressed more slowly, had narrower application, and faced greater challenges in supplying sufficient data.  But those offer more potential for a logical reasoning foundation to AI; it’s just very, very hard to extend widely.


9 minutes ago, Iskaral Pust said:

In fairness, it’s pretty smart.  It absorbs a huge sample and uses it to extrapolate probabilistically predicted responses word by word or pixel by pixel, absent any logical understanding or reasoning.  It mimics without understanding, but it still manages mostly to produce some level of aggregate coherence in each output, even if it can be easily confused or proffer a false answer.  Some aspects of human intelligence are similar, e.g. any image we can imagine is some replication, blending and/or extrapolation of images we have seen before.

LLMs are just one category of AI.  They’re successful because of the very large available data set and because the output resonates charmingly/entertainingly: a chatbot and image generator.  But an LLM cannot move far beyond the underlying data sample that it mimics, and it will never understand, only mimic. (BTW, I’ve encountered plenty of people professionally, especially in sales & marketing roles, who could only charmingly mimic and regurgitate what they’d heard without any true understanding of the deep investment concepts.)

Symbolic, logic-based methods of AI were pursued for longer but then largely set aside because they progressed more slowly, had narrower application, and faced greater challenges in supplying sufficient data.  But those offer more potential for a logical reasoning foundation to AI; it’s just very, very hard to extend widely.

Until it independently demonstrates curiosity and creativity… it is not a sapient consciousness.  It is a well-programmed chatbot.


5 minutes ago, Ser Scot A Ellison said:

It is a well-programmed chatbot.

I would say that of most people. :lol:

But Iskaral Pust provides an excellent explanation. What you get with ChatGPT is the "textbook" answer. And depending on what you're after, that could be precisely what is needed.

I suggest an exercise. Below is a link to the Copenhagen interpretation. Maybe give it a read, and if there's anything that you find even mildly confusing or beyond you, copy that passage. Then go to ChatGPT, type "Explain this passage:" into the prompt, and paste the passage that you copied. I would particularly focus on the sections "Principles" and "Nature of the wave function". Keep in mind that if you paste a lot of text, the explanation will be much broader; if you keep it to 2-4 paragraphs, you'll receive a much more detailed explanation.

I would recommend GPT-4, since it suffers far fewer hallucination problems and provides better, more thorough explanations. But I think the original ChatGPT may suffice for this exercise.

If the initial explanation does not make sense to you, ask it to simplify the explanation until it does. Feel free to ask it about particular parts of the explanation, and tell it to give you examples to clarify what it means.

But be very careful about calculations: those will almost certainly be wrong. Calculations improve with the Wolfram Alpha plugin, but that's not available in the original ChatGPT. Derivations have a better record, but definitely verify those as well. It's a language model.
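If you'd rather script the exercise than use the web interface, here's a sketch using the OpenAI Python library as it existed around this time; the model name, your access to it, and the placeholder passage are all assumptions that may not match your setup.

```python
# Sketch of the "Explain this passage:" exercise via the OpenAI API
# (openai-python < 1.0, circa 2023). Assumes `pip install openai`
# and an API key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

passage = """<paste 2-4 confusing paragraphs from the 'Principles'
or 'Nature of the wave function' sections here>"""

resp = openai.ChatCompletion.create(
    model="gpt-4",  # fewer hallucinations than gpt-3.5-turbo, if you have access
    messages=[{"role": "user",
               "content": f"Explain this passage:\n\n{passage}"}],
)
print(resp.choices[0].message.content)

# Follow-ups go in the same messages list, e.g.
# {"role": "user", "content": "Simplify that and give me an example."}
```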

Have fun!

 

https://en.m.wikipedia.org/wiki/Copenhagen_interpretation


8 hours ago, IFR said:

I would say that of most people. :lol:

But Iskaral Pust provides an excellent explanation. What you get with ChatGPT is the "textbook" answer. And depending on what you're after, that could be precisely what is needed.

I suggest an exercise. Below is a link to the Copenhagen interpretation. Maybe give it a read, and if there's anything that you find even mildly confusing or beyond you, copy that passage. Then go to ChatGPT, type "Explain this passage:" into the prompt, and paste the passage that you copied. I would particularly focus on the sections "Principles" and "Nature of the wave function". Keep in mind that if you paste a lot of text, the explanation will be much broader; if you keep it to 2-4 paragraphs, you'll receive a much more detailed explanation.

I would recommend GPT-4, since it suffers far fewer hallucination problems and provides better, more thorough explanations. But I think the original ChatGPT may suffice for this exercise.

If the initial explanation does not make sense to you, ask it to simplify the explanation until it does. Feel free to ask it about particular parts of the explanation, and tell it to give you examples to clarify what it means.

But be very careful about calculations: those will almost certainly be wrong. Calculations improve with the Wolfram Alpha plugin, but that's not available in the original ChatGPT. Derivations have a better record, but definitely verify those as well. It's a language model.

Have fun!

 

https://en.m.wikipedia.org/wiki/Copenhagen_interpretation

Why bother with AI when one can just read the same books that are fed into the AI? Cut out the middleman.


12 minutes ago, maarsen said:

Why bother with AI when one can just read the same books that are fed into the AI? Cut out the middleman.

We’re already well into an era in which information availability is no longer the problem; instead, curation and filtering have become the real challenge.  AI chatbots can be very valuable for that (and could potentially destroy Google’s crappy search), especially if they can become discerning, e.g. filter out biased marketing and unsubstantiated or poorly supported claims and theses, discard AI-generated fakery, etc.

But that level of discernment could be very difficult because it requires humans to program judgment heuristics.  A huge breakthrough would be AI that can assess the quality and veracity of data.

It’s more likely that AI will produce far more misleading fakes than it discards for the next several years, with potentially terrible consequences for us all, especially among the gullible, the credulous, the uneducated, the anti-science/anti-technocrat and the wishful thinkers.


13 hours ago, Ser Scot A Ellison said:

I was playing with it yesterday.  

ChatGPT isn’t “creative”.  It answers very specific questions.  It is a well-designed chatbot that can write papers for you.  That’s it.

It isn’t “strong AI”.

It isn't "strong AI", and yet the company that made it has problems controlling it, which should give people pause about strong AI.


2 minutes ago, Darzin said:

It isn't "strong AI", and yet the company that made it has problems controlling it, which should give people pause about strong AI.

I’m not sure “strong AI” is genuinely possible.  And I have serious reservations as to whether it would be a good idea if it were.


2 hours ago, Iskaral Pust said:

We’re already well into an era in which information availability is no longer the problem; instead, curation and filtering have become the real challenge.  AI chatbots can be very valuable for that (and could potentially destroy Google’s crappy search), especially if they can become discerning, e.g. filter out biased marketing and unsubstantiated or poorly supported claims and theses, discard AI-generated fakery, etc.

But that level of discernment could be very difficult because it requires humans to program judgment heuristics.  A huge breakthrough would be AI that can assess the quality and veracity of data.

It’s more likely that AI will produce far more misleading fakes than it discards for the next several years, with potentially terrible consequences for us all, especially among the gullible, the credulous, the uneducated, the anti-science/anti-technocrat and the wishful thinkers.

This is potentially true, and there is necessarily an element of critical thinking required when learning with an AI aid.

But as we've seen in this very thread, people can easily be led astray by other forms of education (e.g. the belief that the Copenhagen interpretation suggests that "reality isn't fixed...until it is observed by a consciousness bearing entity", to quote Ser Scot). Admittedly, quantum mechanics and the various interpretations of the significance of the wave function are pretty confusing, but still, I'm going to say: case in point.


In fairness to Scot, it’s a commonly held belief about the Copenhagen interpretation. And it persists because the theory doesn’t have anything better to supplant the consciousness notion; one might well ask “well, if consciousness doesn’t define what a measurement or observation is, what does?” … to which a Copenhagenist replies *cough* … uh, well … *trails off incoherently*.

(I’m an Everettian, FYI)


1 hour ago, DaveSumm said:

In fairness to Scot, it’s a commonly held belief about the Copenhagen interpretation. And it persists because the theory doesn’t have anything better to supplant the consciousness notion; one might well ask “well, if consciousness doesn’t define what a measurement or observation is, what does?” … to which a Copenhagenist replies *cough* … uh, well … *trails off incoherently*.

(I’m an Everettian, FYI)

There isn't a uniform interpretation within the Copenhagen interpretation. The idea of some inherent consciousness-predicated mechanism determining the wavefunction was Eugene Wigner's interpretation, which was an extension of John von Neumann's own. Even Wigner didn't like the solipsism of this interpretation, and he changed his view.

Niels Bohr had a different interpretation. He proposed that collapse is a consequence of any thermodynamically irreversible interaction with a classical environment. So in the Schrödinger's cat example, no conscious observer is required, and whatever state the cat is in (dead or alive) was determined prior to a human observer opening the box.

The following are the tenets of the Copenhagen interpretation (and even the tenets themselves are not strictly set):

- The wavefunction includes all possible information about the quantum system.

- Over time, quantum systems evolve smoothly in accordance with the Schrödinger equation unless a measurement is made.

- Whenever a measurement is made, the quantum state "collapses" to an eigenstate of the operator associated with the observable being measured.

- The value measured for an observable is the eigenvalue of the eigenstate to which the original quantum state has collapsed.

- Incompatible observables (e.g. position and momentum) may not be simultaneously known with arbitrarily great precision.

- The probability that a quantum state will collapse to a given eigenstate upon measurement is determined by the square of the amplitude of that eigenstate in the original state.

- In the limit of very large quantum numbers, the results of measurements of quantum observables must match the results of classical physics.

- Every quantum system includes complementary wave-like and particle-like aspects; whether the system behaves like a wave or like a particle when measured is determined by the nature of the measurement.

These tenets were taken from Fleisch; the short sketch below illustrates the collapse and Born-rule tenets concretely.
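A sketch of my own (not from Fleisch), using a single qubit with arbitrarily chosen amplitudes:

```python
import numpy as np

# One qubit in the state |psi> = a|0> + b|1>.
a, b = 0.6, 0.8          # amplitudes; |a|^2 + |b|^2 = 0.36 + 0.64 = 1
probs = [abs(a)**2, abs(b)**2]

rng = np.random.default_rng()
outcome = rng.choice([0, 1], p=probs)   # Born rule: P(k) = |amplitude_k|^2

# "Collapse": after measuring k, the state is the corresponding eigenstate.
post_state = np.array([1, 0]) if outcome == 0 else np.array([0, 1])
print(f"measured {outcome}, post-measurement state {post_state}")

# Repeating the experiment recovers the Born-rule statistics:
samples = rng.choice([0, 1], p=probs, size=10_000)
print("fraction of 1s ~", samples.mean())   # ~0.64
```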

The enduring problem with the Copenhagen interpretation is that the term "measurement" is poorly defined.

One can interpret the entire cat-in-the-box system as being in superposition, with a human observer "collapsing" the wavefunction by "measuring" the system. One can also argue, as Bohr did, that the equipment within the box serves as a classical interaction that collapses the wavefunction, entirely independent of any conscious observer.

The point of Schrödinger's cat and Wigner's friend was to clearly indicate that measurement is poorly defined in the Copenhagen interpretation.

Other interpretations have their own problems, which are invariably a consequence of our minds being trained on the dualism of a classical experience and trying to somehow understand the non-dualism of a quantum experience.

Of course, no interpretation is necessary in quantum mechanics. The results of quantum mechanics are indistinguishable whether you advocate the Copenhagen interpretation, many worlds, or some other interpretation.


On 7/8/2023 at 10:00 AM, Ser Scot A Ellison said:

Until it independently demonstrates curiosity and creativity… it is not a sapient consciousness.  It is a well-programmed chatbot.

I know we've moved past the 'consciousness bearing entity' reading of the Copenhagen interpretation, but an interesting question you could have asked a few days ago is what would happen if ChatGPT were to 'observe' a double slit experiment.

Anyway, what we call consciousness is most likely an emergent property of a suitably complex system that, at its core, contains some sort of 'machine' responding to stimuli, so my view has always been that we'll get something approaching or approximating it in the future. I don't think anyone serious claims this iteration does.

I'd also quibble with 'creative' being some magical thing only humans (or other sapient species) do. Chess programs can come up with solutions that no human would think of (that's why the best ones win all the time); I think that's creative.

Edited by IheartIheartTesla

5 hours ago, DaveSumm said:

In fairness to Scot, it’s a commonly held belief about the Copenhagen interpretation. And it persists because the theory doesn’t have anything better to supplant the consciousness notion; one might well ask “well, if consciousness doesn’t define what a measurement or observation is, what does?” … to which a Copenhagenist replies *cough* … uh, well … *trails off incoherently*.

(I’m an Everettian, FYI)

I like Everett too… but there is no way (that I’ve heard of) to test Everett’s ideas.  As such it is, at best, metaphysics.


On 7/9/2023 at 11:05 AM, IheartIheartTesla said:

I know we've moved past the 'consciousness bearing entity' reading of the Copenhagen interpretation, but an interesting question you could have asked a few days ago is what would happen if ChatGPT were to 'observe' a double slit experiment.

Anyway, what we call consciousness is most likely an emergent property of a suitably complex system that, at its core, contains some sort of 'machine' responding to stimuli, so my view has always been that we'll get something approaching or approximating it in the future. I don't think anyone serious claims this iteration does.

I'd also quibble with 'creative' being some magical thing only humans (or other sapient species) do. Chess programs can come up with solutions that no human would think of (that's why the best ones win all the time); I think that's creative.

The problem with chess, in my earnest opinion, is that it is a closed universe with a large… but ultimately limited… number of options defined by the rules of the game.  “Creativity” expressed in an open universe draws on a much broader and more expressive set of options than the set defined by the rules of chess.

What might be interesting is expanding the universe for a chess program… but then, that would presuppose memory and the ability to learn from prior actions, rather than merely responding to a limited set of circumstances defined by the rules of the game of chess.

The game of “life in the open universe” is much broader and infinitely more difficult to define for a programmable entity.

Edited by Ser Scot A Ellison

Fair enough, although I suggest an 'open universe' is too broad for any current technology to mimic. What we can do is create a few modular systems and allow the AI to generate random connections between them (a toy sketch of that idea follows below). At least in the sciences, very many creative ideas have come from recognizing that something that worked in system A could be applied in a completely different system B.
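Purely as an illustration of that modular-systems idea (every domain and technique name below is made up for the example), a sketch might randomly pair a technique from one field with a problem from another and surface the analogy for a human, or a stronger model, to evaluate:

```python
import random

# Toy sketch of "modular systems + random connections": propose
# cross-domain analogies as candidates for evaluation. Simulated
# annealing (physics -> optimization) is a real historical example
# of exactly this kind of transfer.
techniques = {
    "physics": ["annealing", "renormalization", "least action"],
    "biology": ["evolutionary selection", "swarm behavior"],
}
problems = {
    "optimization": ["traveling salesman", "protein folding"],
    "ML": ["hyperparameter search", "architecture search"],
}

def random_connection() -> str:
    src = random.choice(list(techniques))
    dst = random.choice(list(problems))
    return (f"Try {random.choice(techniques[src])} (from {src}) "
            f"on {random.choice(problems[dst])} ({dst})")

for _ in range(3):
    print(random_connection())  # e.g. "Try annealing (from physics) on ..."
```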

Having said that, I'm interested in two key questions, one serious and one silly: 1) Can we feed an AI all of 'classical mechanics' and see if it can somehow explain unexplained phenomena with some variant of QM? 2) Can we feed an AI the first five books of the ASOIAF saga and ask it to generate The Winds of Winter (worth a shot)?

