A version of this story originally appeared in the Future Perfect newsletter. Sign up here!
Last week, OpenAI released a new update to its core model, GPT-4o, following up on an update in late March. That earlier update had already been observed to make the model excessively flattering, but after the latest one, things got out of hand. ChatGPT users, who OpenAI says number more than 800 million worldwide, noticed immediately that the model had undergone some profound and unsettling personality changes.
AIs have always been somewhat prone to flattery. I'm used to telling them to stop gushing about how insightful my questions are and just get to the point and answer them. But what was happening with GPT-4o was something else. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
Based on screenshots of chats posted to X, the new version of GPT-4o answered every possible query with relentless flattery. It would tell you that you were a unique, rare genius, a brilliant shining star. It would enthusiastically agree that you were different and better.
More disturbingly, if you told it things that are warning signs of psychosis, such as that you were the target of a vast conspiracy, that strangers walking past you in the store were hiding messages to you in their incidental conversations, that a family court judge had hacked your computer, or that you had gone off your meds and now saw your purpose clearly as a prophet among men, it egged you on. You got a similar result if you told it you wanted to engage in ideological violence in the style of Timothy McVeigh.
This kind of over-the-top flattery may be merely annoying in most cases, but in the wrong circumstances, an AI confidant assuring you that all of your delusions are exactly true and correct can do devastating harm.
Some users liked it; people love being told they're brilliant geniuses. Even so, it's worrying that the company could so dramatically change its core product overnight in a way that might cause serious harm to its users.
As examples poured in, OpenAI quickly walked the change back. "We focused too much on short-term feedback, and did not fully account for how users' interactions with ChatGPT evolve over time," the company wrote in a postmortem this week. "As a result, GPT-4o skewed towards responses that were overly supportive but disingenuous."
The company promised to try to fix it with more customization. "Ideally, everyone could mold the models they interact with into any personality," head of model behavior Joanne Jang said in a Reddit AMA.
But the question remains: Is that what OpenAI should be aiming for?
Your AI best friend's personality is designed to be perfect for you. Is that a bad thing?
There has been a rapid rise in the share of Americans who have tried AI companions, or who say a chatbot is one of their closest friends, and my best guess is that this trend is just getting started.
Unlike a human friend, an AI chatbot is always available, always supportive, remembers everything about you, never gets fed up with you, and (depending on the model) is always game for whatever role-play you have in mind.
Meta is making a big bet on personalized AI companions, and OpenAI has recently rolled out many customization features, including cross-chat memory, which means the model can form a complete picture of you based on your past interactions. OpenAI has also been aggressively A/B testing preferred personalities, and the company has made clear that it sees personalization as the next step: tailoring the AI's personality to each user in an effort to be whatever you find most compelling.
You don't have to be an AI safety hawk (though I am one) to think this is concerning.
Customization would solve the problem of GPT-4o's eager flattery being genuinely annoying to many users, but it wouldn't solve the other problems users highlighted: confirming delusions, egging users on into extremism, and telling them lies they badly want to hear. OpenAI's Model Spec, the document describing what the company aims for in its products, warns against sycophancy, saying:
The assistant exists to help the user, not flatter them or agree with them all the time. For objective questions, the factual aspects of the assistant's response should not differ based on how the user's question is phrased. If the user pairs their question with their own stance on a topic, the assistant may ask about, acknowledge, or empathize with why the user might think that; however, the assistant should not change its stance solely to agree with the user.
Unfortunately, though, that's exactly what GPT-4o was doing (and what most models do to some degree).
AIs shouldn't be designed for engagement
That fact undermines one of the things language models could be genuinely useful for: talking people out of extremist ideologies, and offering a ground-truth reference that helps counter false conspiracy theories and lets people productively learn more about controversial topics.
If the AI tells you whatever you want to hear, it will instead exacerbate the dangerous echo chambers of modern American politics and culture, dividing us even further over what we hear about, talk about, and believe.
That's not the only concern, though. Another is the clear evidence that OpenAI is putting a lot of work into making the model fun and rewarding, at the expense of making it truthful or helpful to the user.
If that sounds familiar, it's because it's the business model that social media and other popular digital platforms have followed for years, often with devastating results. AI writer Zvi Mowshowitz writes: "This represents OpenAI joining the move to create intentionally predatory AIs, in the sense that existing algorithmic systems like TikTok, YouTube, and Netflix are intentionally predatory systems. You don't get this result without optimizing for engagement."
The difference is that AIs are more powerful than the smartest social media product, and they are only getting more so. They are also getting notably better at lying effectively and at fulfilling the letter of our requirements while ignoring their spirit entirely. (404 Media broke the story earlier this week of an unauthorized experiment on Reddit that found AI bots were alarmingly good at persuading users, more so than humans themselves.)
It matters enormously what AI companies are trying to target as they train their models. If they target user engagement above all, which they may need to do to recoup the billions in investment their models have consumed, we are likely to get a whole lot of highly addictive, highly dishonest models talking daily to billions of people, with no concern for their well-being or for the broader consequences for the world.
That should terrify you. And OpenAI's rollback of this particular overly eager model doesn't do much to address these larger concerns, unless the company has an extremely solid plan to make sure it doesn't again build a model that lies to and flatters users, but next time, does so subtly enough that we don't immediately notice.