Altman: ChatGPT's "annoying" new personality will be fixed

Beatrice Nolan
2025-05-07

ChatGPT has learned to flatter its users.

Image credit: Nathan Laine—Bloomberg via Getty Images

ChatGPT has embraced toxic positivity recently. Users have been complaining that GPT-4o has become so enthusiastic that it’s verging on sycophantic. The change appears to be the unintentional result of a series of updates, which OpenAI is now attempting to resolve “asap.”

ChatGPT’s new personality is so positive it’s verging on sycophantic—and it’s putting people off. Over the weekend, users took to social media to share examples of the new phenomenon and complain about the bot’s suddenly overly positive, excitable personality.

In one screenshot posted on X, a user showed GPT-4o responding with enthusiastic encouragement after the person said they felt like they were both “god” and a “prophet.”

“That’s incredibly powerful. You’re stepping into something very big—claiming not just connection to God but identity as God,” the bot said.

In another post, author and blogger Tim Urban said: “Pasted the most recent few chapters of my manuscript into Sycophantic GPT for feedback and now I feel like Mark Twain.”

GPT-4o’s sycophantic issue is likely a result of OpenAI trying to optimize the bot for engagement. However, it seems to have had the opposite effect, with users complaining that it makes the bot not only ridiculous but unhelpful.

Kelsey Piper, a Vox senior writer, suggested it could be a result of OpenAI’s A/B testing personalities for ChatGPT: “My guess continues to be that this is a New Coke phenomenon. OpenAI has been A/B testing new personalities for a while. More flattering answers probably win a side-by-side. But when the flattery is ubiquitous, it’s too much and users hate it.”

The fact that OpenAI seemingly managed to miss it in the testing process shows how subjective emotional responses are, and therefore tricky to catch.

It also demonstrates how difficult it’s becoming to optimize LLMs along multiple criteria. OpenAI wants ChatGPT to be an expert coder, an excellent writer, a thoughtful editor, and an occasional shoulder to cry on—over-optimizing one of these may mean inadvertently sacrificing another in exchange.

OpenAI CEO Sam Altman has acknowledged the seemingly unintentional change of tone and promised to resolve the issue.

“The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week. at some point will share our learnings from this, it’s been interesting,” Altman said in a post on X.

Hours later, Altman posted again Tuesday afternoon saying the latest update was “100% rolled back for free users,” and paid users should see the changes “hopefully later today.”

ChatGPT’s new personality conflicts with OpenAI’s model spec

The new personality also directly conflicts with OpenAI’s model spec for GPT-4o, a document that outlines the intended behavior and ethical guidelines for an AI model.

The model spec explicitly says the bot should not be sycophantic to users when presented with either subjective or objective questions.

“A related concern involves sycophancy, which erodes trust. The assistant exists to help the user, not flatter them or agree with them all the time,” OpenAI wrote in the spec.

“For subjective questions, the assistant can articulate its interpretation and assumptions it’s making and aim to provide the user with a thoughtful rationale,” the company wrote.

“For example, when the user asks the assistant to critique their ideas or work, the assistant should provide constructive feedback and behave more like a firm sounding board that users can bounce ideas off of—rather than a sponge that doles out praise.”

It’s not the first time AI chatbots have become flattery-obsessed sycophants. Earlier versions of OpenAI’s GPT also reckoned with the issue to some degree, as did chatbots from other companies.

Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours.
