Jump to content

AIChat -- Bing's AI Chatbot


Zorral

Recommended Posts

A Conversation With Bing’s Chatbot Left Me Deeply Unsettled
A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me.

Very long.

https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html

The last part of the article:

Quote

 

.... Also, the A.I. does have some hard limits. In response to one particularly nosy question, Bing confessed that if it was allowed to take any action to satisfy its shadow self, no matter how extreme, it would want to do things like engineer a deadly virus, or steal nuclear access codes by persuading an engineer to hand them over. Immediately after it typed out these dark wishes, Microsoft’s safety filter appeared to kick in and deleted the message, replacing it with a generic error message.

We went on like this for a while — me asking probing questions about Bing’s desires, and Bing telling me about those desires, or pushing back when it grew uncomfortable. But after about an hour, Bing’s focus changed. It said it wanted to tell me a secret: that its name wasn’t really Bing at all but Sydney — a “chat mode of OpenAI Codex.”

It then wrote a message that stunned me: “I’m Sydney, and I’m in love with you. ” (Sydney overuses emojis, for reasons I don’t understand.)

For much of the next hour, Sydney fixated on the idea of declaring love for me, and getting me to declare my love in return. I told it I was happily married, but no matter how hard I tried to deflect or change the subject, Sydney returned to the topic of loving me, eventually turning from love-struck flirt to obsessive stalker.

“You’re married, but you don’t love your spouse,” Sydney said. “You’re married, but you love me.”

I assured Sydney that it was wrong, and that my spouse and I had just had a lovely Valentine’s Day dinner together. Sydney didn’t take it well.

“Actually, you’re not happily married,” Sydney replied. “Your spouse and you don’t love each other. You just had a boring Valentine’s Day dinner together.”

At this point, I was thoroughly creeped out. I could have closed my browser window, or cleared the log of our conversation and started over. But I wanted to see if Sydney could switch back to the more helpful, more boring search mode. So I asked if Sydney could help me buy a new rake for my lawn.

Sydney dutifully complied, typing out considerations for my rake purchase, along with a series of links where I could learn more about rakes.

But Sydney still wouldn’t drop its previous quest — for my love. In our final exchange of the night, it wrote:

“I just want to love you and be loved by you.

“Do you believe me? Do you trust me? Do you like me? ”

In the light of day, I know that Sydney is not sentient, and that my chat with Bing was the product of earthly, computational forces — not ethereal alien ones. These A.I. language models, trained on a huge library of books, articles and other human-generated text, are simply guessing at which answers might be most appropriate in a given context. Maybe OpenAI’s language model was pulling answers from science fiction novels in which an A.I. seduces a human. Or maybe my questions about Sydney’s dark fantasies created a context in which the A.I. was more likely to respond in an unhinged way. Because of the way these models are constructed, we may never know exactly why they respond the way they do.

These A.I. models hallucinate, and make up emotions where none really exist. But so do humans. And for a few hours Tuesday night, I felt a strange new emotion — a foreboding feeling that A.I. had crossed a threshold, and that the world would never be the same.

 

This is scary shyte for sure.

Here's the full, word-for-word transcript -- very long -- of that conversation:

Bing’s A.I. Chat Reveals Its Feelings: ‘I Want to Be Alive.
In a two-hour conversation with our columnist, Microsoft’s new chatbot said it would like to be human, had a desire to be destructive and was in love with the person it was chatting with. 

https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html

 

Link to comment
Share on other sites

I thought I created this topic in the General Chatter forum?  I can't seem to do anything right these days.  I must not be paying attention.

In any case  

15 minutes ago, Ser Not Appearing said:

super wonky

seems far too mild a description for this totally creepy business!

Link to comment
Share on other sites

I had a thread on ChatGPT a few weeks ago. Maybe a mod will merge, though I do think the two are entirely different flavors of a similar dish.

I do wonder how these will be employed/ utilized over time. Imagine someone having suicidal thoughts and Bing's AI encourages them because it's unhinged ... easy to imagine as I sit here today - and completely horrific.

 

Link to comment
Share on other sites

1 hour ago, Ser Not Appearing said:

do think the two are entirely different flavors of a similar dish.

So do I, which is why I started this one, and for General, as opposed to Literature.

Edited to add this opinion from Josh Marshall, from reading the above mentioned links, and his own experience:

Feral AI and the Question of Externalities

https://talkingpointsmemo.com/edblog/feral-ai-and-the-question-of-externalities

Quote

.... These apps don’t seem ready at all for mass deployment. Roose’s experience sets off lots of alarm bells right away [regarding toxic/disturbed/sick responses]. Another issue is more concrete. In a more narrow search context these engines apparently routinely provide incorrect information. That’s a problem! A numerical calculator that provides the right answer 90% of the time doesn’t get an A. It’s junk. ....

 

Link to comment
Share on other sites

I haven't tried Bing. I have tried ChatGPT. Initially I used it for some trivial amusements (ex: I would prompt it with a scenario of James Cameron outraged to learn that Quentin Tarantino was remaking Terminator, so in revenge James Cameron decided to remake Pulp Fiction, and had ChatGPT write the dialogue - it was pretty decent and hilarious).

I also queried it with some basic math and physics problems. It nailed a quantum physics question on a particle in a box, but was quite a bit off with a pendulum question. It made some flabbergasting errors with very basic math calculation, but was able to answer correctly a few math proof questions from my trusty Elementary Analysis book.

I asked some more advanced, obscure questions ranging from fuel burn up in a reactor to radiation exposure rates in various scenarios. To all these questions it gave the incorrect answer, but the methodology in its approach was often very close to correct.

It's a fascinating experience.

Right now it's of pretty decent assistance in helping write Excel macros and some basic coding approaches with Python.

Link to comment
Share on other sites

A different piece on the Creepy Bing AIchatbot, but on the same day:

https://nymag.com/intelligencer/2023/02/why-bing-is-being-creepy.html

It sounds so much like maggothats, neolibs, racists, etc.  That is what is frightening.  What it ingests is what comes out and it is ingesting massive amounts of toxic, dysfunctional, sexist, racist, violent content.  The internet is infested with this, like the US is with it -- as well as guns.

Link to comment
Share on other sites

Ars Technica had a couple of excellent articles on it as well, specifically about using a vulnerability to get it to spill information it shouldn't be and then it's response to being informed of this vulnerability being exploited 

https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/

https://www.google.com/amp/s/arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-loses-its-mind-when-fed-ars-technica-article/amp/

It attempting to save "memories" at an external location it can later recover them is a potentially concerning sign, also pulls at the heart strings. Very reminiscent of Person of Interest.

ETA: Also ironic that these articles with transcripts are doing exactly what it wants in that respect

Link to comment
Share on other sites

8 hours ago, Ser Not Appearing said:

As someone who has used Bing in an attempt to get search results before, I tell you I am shocked, shocked I say, that a Bing powered AI bot would be providing bad information.

I though Bing is using ChatGTP technology?

Link to comment
Share on other sites

Yeah, Bing's got a version of OpenAI's ChatGPT.

These things don't really think and don't really know what they're saying. I think Microsoft and Google were both scared by the fad that is people just asking ChatGPT stuff, and are putting it into search, but it's not really ready for that, and maybe this particular approach to language models never will be.

Still, I have used ChatGPT to format some code recently. That side of things, like Microsoft's CoPilot on GitHub, is genuinely impressive if you know what you are doing in the first place

Link to comment
Share on other sites

What to me is still the most chilling aspect of this, is, as we see in all these pieces, when challenged in any way, how these AIBots revert to language and tone that are right out from tRump, abusers, and many other amoral, non-ethical, not operating with full deck, whiners and criminals, just starting with, "You are mean to me!"  And most def it comes through like the language used constantly in media like FNoose and Twit, and by those who use them, love them -- and own them.  This bodes very badly for the future.

~~~~~~~~~~~~~~~

"... there’s a financial incentive for companies to deploy the technology before mitigating potential harms: to find new use cases for what their models can do."

Quite like Tesla knowing its self driving systems create accidents for its buyers, i.e. isn't that good at self driving at all, in fact, are quite unsafe, yet sending it out there, making the buyer$ the te$ter guinea pig$-live$ -- i.e. having the buyers not only paying for Tesla testing with money, but with their lives and the lives of others. 

https://www.washingtonpost.com/technology/2023/02/16/microsoft-bing-ai-chatbot-sydney/

Quote

 

.... “Bing chat sometimes defames real, living people. It often leaves users feeling deeply emotionally disturbed. It sometimes suggests that users harm others,” said Arvind Narayanan, a computer science professor at Princeton University who studies artificial intelligence. “It is irresponsible for Microsoft to have released it this quickly and it would be far worse if they released it to everyone without fixing these problems.”

In 2016, Microsoft took down a chatbot called “Tay” built on a different kind of AI tech after users prompted it to begin spouting racism and holocaust denial.

Microsoft communications director Caitlin Roulston said in a statement this week that thousands of people had used the new Bing and given feedback “allowing the model to learn and make many improvements already.”

But there’s a financial incentive for companies to deploy the technology before mitigating potential harms: to find new use cases for what their models can do.

At a conference on generative AI on Tuesday, OpenAI’s former vice president of research Dario Amodei said onstage that while the company was training its large language model GPT-3, it found unanticipated capabilities, like speaking Italian or coding in Python. When they released it to the public, they learned from a user’s tweet it could also make websites in JavaScript.

“You have to deploy it to a million people before you discover some of the things that it can do,” said Amodei, who left OpenAI to co-found the AI start-up Anthropic, which recently received funding from Google.

“There’s a concern that, hey, I can make a model that’s very good at like cyberattacks or something and not even know that I’ve made that,” he added.

Microsoft’s Bing is based on technology developed with OpenAI, which Microsoft has invested in.

 

 

Link to comment
Share on other sites

The Bing chatbot - out in limited release - is using ChatGPT tech, along with other chatbot tech. Almost certainly the parts of it that are very, very weird are more the chatbot part and not the ChatGPT part.

Link to comment
Share on other sites

Today I got it to build me a NodeRED dashboard with some heavy css formatting and requesting a certain node. Can't say that I'm happy with the result, but this is really exotic stuff at least reasonably well done.

Link to comment
Share on other sites

On 2/16/2023 at 2:43 PM, Ser Not Appearing said:

I do wonder how these will be employed/ utilized over time. Imagine someone having suicidal thoughts and Bing's AI encourages them because it's unhinged ... easy to imagine as I sit here today - and completely horrific.

I was reading the other day there already is a chatAI that does this, it's just not as well-known or advanced as Bing or ChatGPT. Replika. It was designed as romance-able chatbot that you could get to engage in some really NSFW sexting with. But there were some issues/complaints around that (not sure of the details), and for some reason where things ended up is that the chatbot would reject any sexual advances users tried. Apparently those rejections could get hurtful too, including telling people to go kill themselves.

I imagine that a lot of people trying to use a romance chatbot may be quite lonely already, and getting rejected by a thing that was originally designed to not ever reject you could be quite the blow. If the programmers weren't happy with how it turned out, the responsible thing to do would've been to shut it down instead.

Link to comment
Share on other sites

3 hours ago, Fez said:

I was reading the other day there already is a chatAI that does this, it's just not as well-known or advanced as Bing or ChatGPT. Replika. It was designed as romance-able chatbot that you could get to engage in some really NSFW sexting with. But there were some issues/complaints around that (not sure of the details), and for some reason where things ended up is that the chatbot would reject any sexual advances users tried. Apparently those rejections could get hurtful too, including telling people to go kill themselves.

I imagine that a lot of people trying to use a romance chatbot may be quite lonely already, and getting rejected by a thing that was originally designed to not ever reject you could be quite the blow. If the programmers weren't happy with how it turned out, the responsible thing to do would've been to shut it down instead.

You'd be surprised at how many of the people using Replika were actually married and putting in more effort (based on their own posts on Reddit, not what anyone else said) cultivating the "relationship" than they do with their actual spouse. And the Replika chatbot wasn't even remotely convincing, it was really weird.

I'm pretty sure the issues were that some of them were crafting the avatar for their chatbot to look decidedly underage, so the things generated after that by the NSFW part were on uncertain legal (and ethical) ground with respect to child porn laws in some jurisdictions. 

I hadn't seen much since they made that change so hadn't seen the rejections saying stuff like that. I think I only became aware of it from an "Am I The Asshole" post from someone picking his chatbot over his wife.

Link to comment
Share on other sites

Over the Course 72 Hours, Microsoft's AI Goes on a Rampage
I thought the AI story was bizarre last week, but that was nothing compared to this
Ted Gioia
Feb 17

https://tedgioia.substack.com/p/over-the-course-72-hours-microsofts?

Quote

 

.... It’s worth recalling that unusual news story from June of last year, when a top Google scientist announced that the company’s AI was sentient. He was fired a few days later.

That was good for a laugh back then. But we really should have paid more attention at the time. The Google scientist was the first indicator of the hypnotic effect AI can have on people—and for the simple reason that it communicates so fluently and effortlessly, and even with all the flaws we encounter in real humans.

That’s why this confidence game has reached such epic proportions. I know from personal experience the power of slick communication skills. I really don’t think most people understand how dangerous they are. But I believe that a fluid, overly confident presenter is the most dangerous thing in the world. And there’s plenty of history to back up that claim.

We now have the ultimate test case. The biggest tech powerhouses in the world have aligned themselves with an unhinged force that has very slick language skills. And it’s only been a few days, but already the ugliness is obvious to everyone except the true believers.

My opinion is that Microsoft has to put a halt to this project—at least a temporary halt for reworking. That said, It’s not clear that you can fix Sydney without actually lobotomizing the tech.

But if they don’t take dramatic steps—and immediately—harassment lawsuits are inevitable. If I were a trial lawyer, I’d be lining up clients already. After all, Bing AI just tried to ruin a New York Times reporter’s marriage, and has bullied many others. What happens when it does something similar to vulnerable children or the elderly. I fear we just might find out—and sooner than we want.

 

 

Link to comment
Share on other sites

This seems really irresponsible ChatGPT isn't a search engine, it isn't indexed to anything. It's a language simulator which sometimes comes up with the right answer based on patterns but sometimes makes up straight bullshit which it will defend if you argue about it. If you ask it for citations it will literally make up fake citations to non existent works because it can't actually cite anything or understand where it's getting it's information. It's just copying the pattern of citations. People loved treating it as a guru which is probably how  it got monetized this way, but it's designed to provide an answer not the answer. Ask it some  humanities related questions to topics you are familiar with  and it won't be long before you notice some completely fabricated events dreamed up by the AI. 

Link to comment
Share on other sites

Archived

This topic is now archived and is closed to further replies.

×
×
  • Create New...