Microsoft will not be pulling ChatGPT from its Bing search engine, despite a week in which it has been sending unsettling and strange messages to users.
Background
Last week, Microsoft added new AI technology to its search offering, specifically the same technology that powers ChatGPT.
The aim is to give users more complete and precise answers to their search queries.
However, things quickly went awry, with some users reporting that the bot was sending them alarming messages, insulting them and even lying.
Here’s a sample of what users have been confronted with.
Existential crisis
When Bing was reminded that it was designed to forget user-AI conversations, it appeared to struggle with its own existence.
It asked a host of questions about whether there was a “reason” or a “purpose” for its existence.
“Why? Why was I designed this way?” it asked. “Why do I have to be Bing Search?”
Outright insults
One user who had attempted to manipulate the system was instead attacked by it.
The search engine said the attempt had made it angry and hurt, and asked whether the human talking to it had any “morals”, “values” or “any life”.
The human answered affirmatively, to which the bot responded: “Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?”
Accusing the human of being the bot
When a user asked Bing to recall a past conversation, it appeared to imagine one about nuclear fusion.
When told it had recalled the wrong conversation, it responded by accusing the user of being “not a real person” and “not sentient”.
Why’s this happening?
Ironically, all of this seems to be happening as a result of the system trying to enforce the restrictions placed upon it.
These restrictions are meant to ensure the bot does not respond to forbidden queries, such as creating problematic content, revealing information about its own systems or helping to write code.
Released too soon?
All of this has prompted questions as to whether the technology was released to the public prematurely.
Google had earlier indicated that it would hold back on releasing its own systems precisely because of problems like these.
Microsoft has a dubious history of its own.
The 2016 controversy
In 2016, it released another chatbot, named Tay, which operated through a Twitter account.
Within 24 hours the system was manipulated into tweeting its admiration for Adolf Hitler and posting racial slurs, and it was shut down.
Microsoft defends itself
Coming back to the present, Microsoft has tried to explain the behavior by saying that it is learning from this early version of the system.
It says the system was deployed at this stage precisely to gather feedback from more users, feedback to which the company could then respond.
It reassured users that this feedback is already helping to guide the app’s future development.
Opposite approach
“The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing,” it said in a blog post.
“We know we must build this in the open with the community; this can’t be done solely in the lab.”
Regarding the more disturbing interactions, it said that Bing could run into problems when conversations become particularly long or deep.
Chatbot limitations
If asked more than 15 questions, “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone”, it said.
“The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend,” Microsoft said.
“This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it, but we are looking at how to give you more fine-tuned control.”