
X自建團(tuán)隊(duì)清理有毒內(nèi)容

KYLIE ROBISON
2024-02-25

The new team must do its job without clashing too much with boss Elon Musk’s “free speech” promise

文本設(shè)置
小號(hào)
默認(rèn)
大號(hào)
Plus(0條)

社交媒體平臺(tái)X在埃隆·馬斯克的領(lǐng)導(dǎo)下,,組建了內(nèi)容審查團(tuán)隊(duì),。這是否會(huì)有所幫助?攝影:JONATHAN RAA/NURPHOTO經(jīng)蓋蒂圖片社提供

2023年秋,,Twitter還沒(méi)有更名為X的時(shí)候,公司開(kāi)始計(jì)劃部署一種新系統(tǒng),,旨在從平臺(tái)上清除最不該出現(xiàn)的內(nèi)容,。

該公司沒(méi)有像大多數(shù)社交媒體網(wǎng)站(包括以前的Twitter)一樣雇傭大量合同工負(fù)責(zé)內(nèi)容審查,,而是將組建規(guī)模較小的內(nèi)部?jī)?nèi)容審查團(tuán)隊(duì),,這個(gè)專(zhuān)業(yè)安全團(tuán)隊(duì)的目的是既不過(guò)分破壞老板埃隆·馬斯克的“言論自由”公開(kāi)承諾,,又能避免平臺(tái)上出現(xiàn)不端內(nèi)容,。

近一年后,,X上周宣布在德克薩斯州奧斯汀新建一個(gè)信任與安全卓越中心(Trust and Safety Center of Excellence),。X的代表在彭博社的一篇最新報(bào)道中宣稱(chēng),,這個(gè)由100名內(nèi)容審查員組成的團(tuán)隊(duì),,規(guī)模遠(yuǎn)小于前信任與安全部門(mén)工作人員透露的500人的初步設(shè)想,。

在立法者高度關(guān)注社交媒體公司威脅兒童安全之際,,X位于奧斯汀的安全中心顯然具有公關(guān)價(jià)值。X的CEO琳達(dá)·雅卡里諾周三在參議院聽(tīng)證會(huì)上表示,,新中心將“在公司內(nèi)部有更多信任與安全人員,,以加快擴(kuò)大我們的影響力”。

雖然有批評(píng)者提到X宣布這一消息的時(shí)機(jī)有投機(jī)之嫌,,但X的奧斯汀計(jì)劃的細(xì)節(jié)還提出了一個(gè)與馬斯克的平臺(tái)有關(guān)的更大的問(wèn)題:馬斯克非常規(guī)的內(nèi)容審查模式的效果,,能否超越社交媒體行業(yè)糟糕的網(wǎng)絡(luò)安全歷史記錄,或者它是否只是代表了另外一種削減成本的途徑,,公司根本沒(méi)有太大興趣決定哪些內(nèi)容適合用戶(hù),。

《財(cái)富》雜志采訪的多位內(nèi)容審查專(zhuān)家和現(xiàn)任或前任X內(nèi)部人士認(rèn)為,與當(dāng)前的行業(yè)標(biāo)準(zhǔn)相比,,內(nèi)部專(zhuān)業(yè)團(tuán)隊(duì)具有明顯優(yōu)勢(shì),。但許多人也強(qiáng)調(diào)了內(nèi)部政策前后一致的重要性,以及投資于工具與技術(shù)的重要性,。

一位熟悉X信任與安全事務(wù)的消息人士解釋稱(chēng):“在審查能力方面,,X+100比單純的X更加強(qiáng)大?!钡@位消息人士表示:“相比在電腦前工作的人數(shù),,在某種程度上來(lái)說(shuō),,更重要的是有基于已驗(yàn)證的減少傷害策略的明確政策,并且有大規(guī)模執(zhí)行這些政策的必要工具和系統(tǒng),,但自2022年末以來(lái),,X已經(jīng)拋棄了這兩者?!?/p>

X并未回應(yīng)采訪或置評(píng)請(qǐng)求,。

X的CEO琳達(dá)·雅卡里諾出席參議院的在線(xiàn)兒童安全聽(tīng)證會(huì)攝影:ANDREW CABALLERO-REYNOLDS/法新社經(jīng)蓋蒂圖片社提供

為什么X決定在內(nèi)部組建內(nèi)容審查團(tuán)隊(duì)

2022年11月,馬斯克以440億美元完成收購(gòu)之后,,X上泛濫的不良內(nèi)容,,經(jīng)常成為公開(kāi)辯論和爭(zhēng)議的焦點(diǎn)。

在反數(shù)字仇恨中心(Center for Countering Digital Hate)發(fā)布的一份報(bào)告指控X未能審查“極端仇恨言論”后,,馬斯克起訴該組織以“毫無(wú)根據(jù)的指控”故意中傷公司,。與此同時(shí),有報(bào)道稱(chēng),,一些描繪虐待動(dòng)物的視頻在該平臺(tái)上廣泛傳播,。就在上周,AI生成的泰勒·斯威夫特的露骨內(nèi)容在該平臺(tái)上肆意傳播了17個(gè)小時(shí),,之后平臺(tái)才徹底屏蔽了對(duì)她的姓名的搜索,。

前信任與安全部門(mén)員工表示,X還嚴(yán)重依賴(lài)其社區(qū)筆記功能審查成百上千萬(wàn)活躍用戶(hù),。該功能允許用戶(hù)在帖子中添加筆記,,附上額外的背景說(shuō)明。但這位消息人士強(qiáng)調(diào),,這只是可用于內(nèi)容審查的“一種工具”,。此外,《連線(xiàn)》(Wired)的一項(xiàng)調(diào)查發(fā)現(xiàn),,該功能內(nèi)部存在協(xié)調(diào)宣傳虛假信息的情形,,這凸顯出該公司缺乏重要的監(jiān)督。

據(jù)估計(jì),,馬斯克裁撤了80%負(fù)責(zé)信任與安全的工程師,,并削減了外包內(nèi)容審查員。這些審查員的工作就是監(jiān)控和刪除違反公司政策的內(nèi)容,。據(jù)路透社報(bào)道,,7月,雅卡里諾對(duì)員工宣布,,三位領(lǐng)導(dǎo)人將監(jiān)管信任與安全領(lǐng)域的不同事務(wù),,包括執(zhí)法和威脅中斷等。然而據(jù)X的另外一位消息人士稱(chēng),,目前尚不確定信任與安全在公司內(nèi)部的級(jí)別,,該部門(mén)似乎“不再是最高層級(jí)”,。

但馬斯克的社交媒體公司還在重新思考如何執(zhí)行內(nèi)容審查。

計(jì)劃變了:歡迎來(lái)到奧斯汀

新奧斯汀中心實(shí)際上最初是灣區(qū)中心,。據(jù)一位熟悉信任與安全事務(wù)的知情人士對(duì)《財(cái)富》雜志表示,,建立該中心的目的是在舊金山這樣的城市設(shè)立一個(gè)中心,幫助招聘頂級(jí)多語(yǔ)言人才,,這是應(yīng)對(duì)網(wǎng)絡(luò)暴力的關(guān)鍵,,因?yàn)槌^(guò)80%的X用戶(hù)生活在美國(guó)境外??紤]到不同語(yǔ)言之間的細(xì)微差別,,以及每種語(yǔ)言有獨(dú)特的習(xí)語(yǔ)和表達(dá),因此公司的出發(fā)點(diǎn)是招募熟悉特定語(yǔ)言或文化的員工,,與沒(méi)有專(zhuān)業(yè)技能的低薪通才合同工相比,,他們能更好地區(qū)分玩笑和威脅。

前X員工表示:“他們首先在灣區(qū)進(jìn)行招聘,,測(cè)試他們的質(zhì)量水平,以及他們的工作是否比外包更有效果,。 [X]招聘了一個(gè)小團(tuán)隊(duì)進(jìn)行測(cè)試,,并評(píng)估他們準(zhǔn)確決策的能力?!痹撚?jì)劃首先準(zhǔn)備招聘75人,,如果能夠帶來(lái)成效,將把團(tuán)隊(duì)規(guī)模擴(kuò)大到500人,。

但當(dāng)時(shí)馬斯克傾向于選擇一個(gè)更具有成本效益的地點(diǎn),,他首選奧斯汀,因?yàn)樗嘈抛约河心芰ξú煌Z(yǔ)言的人才,,并讓他們搬家,。這個(gè)變化讓項(xiàng)目經(jīng)歷了許多波折。

前X員工解釋稱(chēng):“招聘數(shù)百人,,讓他們正常運(yùn)轉(zhuǎn)起來(lái)并接受培訓(xùn),,這大約需要兩三個(gè)月時(shí)間。在開(kāi)始培訓(xùn)后,,你會(huì)知道實(shí)際上團(tuán)隊(duì)準(zhǔn)備就緒需要三個(gè),、四個(gè)或者五個(gè)月時(shí)間。這還是假設(shè)就業(yè)市場(chǎng)狀況良好,,而且你不需要讓人們搬家,,不會(huì)有各種麻煩事?!?/p>

據(jù)LinkedIn透露,,上個(gè)月,,已有十多人加入X的奧斯汀中心擔(dān)任“信任與安全人員”,而且大多數(shù)人似乎來(lái)自埃森哲(Accenture),。埃森哲為互聯(lián)網(wǎng)公司提供內(nèi)容審查承包商,。目前尚不確定,X與埃森哲之間是否有合同雇傭計(jì)劃,,即由埃森哲等咨詢(xún)公司招聘的員工,,在合適的客戶(hù)公司擔(dān)任全職崗位,但消息人士確認(rèn),,X過(guò)去曾使用過(guò)埃森哲的承包服務(wù),。

規(guī)則不斷變化帶來(lái)的麻煩

關(guān)于奧斯汀團(tuán)隊(duì)的具體工作重點(diǎn),還有許多疑問(wèn),。他們將專(zhuān)注于審查僅涉及未成年人的內(nèi)容,,還是僅在美國(guó)的內(nèi)容?他們將專(zhuān)注于個(gè)人發(fā)帖,,還是會(huì)開(kāi)展性剝削調(diào)查,?

前Twitter信任與安全委員會(huì)成員安妮·科利爾對(duì)《財(cái)富》雜志表示:“奧斯汀的百人團(tuán)隊(duì)將是全球內(nèi)容審查網(wǎng)絡(luò)必不可少的一個(gè)小節(jié)點(diǎn)。祝這個(gè)百人團(tuán)隊(duì)好運(yùn),?!?/p>

無(wú)論這個(gè)團(tuán)隊(duì)背負(fù)著什么任務(wù),社交媒體審查專(zhuān)家均認(rèn)為,,公司需要大力投資AI工具,,以最大程度提高團(tuán)隊(duì)的效率。

例如,,據(jù)The Verge報(bào)道,,2020年,F(xiàn)acebook在全球雇傭了約15,000名審查員,,并宣布將“把AI與人類(lèi)審查員相結(jié)合,,以減少錯(cuò)誤數(shù)量”。 Snap采取了類(lèi)似做法,,并在一篇博客中表示,,其使用“自動(dòng)化工具和人類(lèi)審查員進(jìn)行內(nèi)容審查”。

據(jù)前X內(nèi)部人士透露,,公司一直在試驗(yàn)AI審查,。馬斯克最近通過(guò)成立一年的初創(chuàng)公司X.AI開(kāi)發(fā)自己的大語(yǔ)言模型,進(jìn)軍人工智能技術(shù)領(lǐng)域,,這將為人類(lèi)審查員團(tuán)隊(duì)提供一種寶貴的資源,。

該內(nèi)部人士稱(chēng),AI系統(tǒng)“只需要約三秒鐘時(shí)間,就能判斷出每一條推文是否符合政策,,它們的準(zhǔn)確率約為98%,,但任何公司依賴(lài)人類(lèi)審查員的準(zhǔn)確率都不超過(guò)65%?!蹦憧赡芟胍瑫r(shí)看到使用AI和只依賴(lài)人類(lèi)的效果,,我認(rèn)為,你會(huì)看到什么是兩者之間正確的平衡,?!?/p>

無(wú)論AI工具和人類(lèi)審查員的表現(xiàn)多出色,重要的是幕后的政策,,而X在馬斯克的領(lǐng)導(dǎo)下在政策方面有所欠缺,。

熟悉X信任與安全事務(wù)的消息人士解釋稱(chēng),政策應(yīng)該足夠靈活,,能夠適應(yīng)文化背景,,它們也需要具有足夠的可預(yù)測(cè)性,使所有人都能了解這些規(guī)則,。該消息人士稱(chēng),,這對(duì)于大型平臺(tái)的內(nèi)容審查尤其重要,因?yàn)榇笮推脚_(tái)有“成百甚至上千名審查員,,必須了解和解釋穩(wěn)定的規(guī)則,。如果政策不斷變化,你無(wú)法一致準(zhǔn)確地執(zhí)行規(guī)則,。”

規(guī)則松散和由此導(dǎo)致的政策不明確,,一直是X在馬斯克領(lǐng)導(dǎo)下的弊病之一,。

馬斯克收購(gòu)了X之后,先后恢復(fù)了一批因違反平臺(tái)政策被封禁的賬號(hào),,其中包括違反新冠虛假信息政策的眾議員瑪喬麗·泰勒·格林,,發(fā)布了一則違反Twitter仇恨行為政策的恐跨性別故事的巴比倫·比,以及因?yàn)榕詰?yīng)該為被性侵承擔(dān)“一些責(zé)任”的言論而被封禁的安德魯·泰特(被Facebook,、Instagram和TikTok封禁),。在馬斯克入主X之后,這些人的賬號(hào)均已恢復(fù),。

有媒體懷疑,,馬斯克任內(nèi)最后一位信任與安全負(fù)責(zé)人艾拉·歐文的離開(kāi),與馬斯克批評(píng)團(tuán)隊(duì)刪除馬特·沃爾什的《何為女人》(What is a Woman?)恐跨性別紀(jì)錄片違反X的規(guī)則,,兩者之間存在一定的聯(lián)系,。雖然這部紀(jì)錄片違反了X的書(shū)面政策,但馬斯克卻堅(jiān)持禁止將其封禁。

熟悉X審查事務(wù)的消息人士補(bǔ)充道:“我從來(lái)沒(méi)有明顯感覺(jué)到X在根據(jù)政策進(jìn)行審查,。該網(wǎng)站在線(xiàn)發(fā)布的規(guī)則似乎只是個(gè)幌子,,是為了掩蓋其老板最終隨心所欲地發(fā)號(hào)施令?!?/p>

前Twitter信任與安全委員會(huì)成員朱莉·英曼·格蘭特更加直白,。她表示:“你不能指望用指頭堵住堤壩,就能阻止在平臺(tái)上泛濫的兒童性暴露海嘯,,或者深度造假色情片的洪水,。”格蘭特正在起訴該公司,,指控其在兒童性虐待材料方面缺乏透明度,。

“根據(jù)我2014年至2016年在Twitter的從業(yè)經(jīng)歷,這種專(zhuān)業(yè)能力的培養(yǎng)需要好幾年時(shí)間,,而要讓一個(gè)情況糟糕到面目全非的平臺(tái)做出有意義的改變,,需要的時(shí)間更長(zhǎng)?!保ㄘ?cái)富中文網(wǎng))

翻譯:劉進(jìn)龍

審校:汪皓

社交媒體平臺(tái)X在埃隆·馬斯克的領(lǐng)導(dǎo)下,,組建了內(nèi)容審查團(tuán)隊(duì)。這是否會(huì)有所幫助,?

攝影:JONATHAN RAA/NURPHOTO經(jīng)蓋蒂圖片社提供

2023年秋,,Twitter還沒(méi)有更名為X的時(shí)候,公司開(kāi)始計(jì)劃部署一種新系統(tǒng),,旨在從平臺(tái)上清除最不該出現(xiàn)的內(nèi)容,。

該公司沒(méi)有像大多數(shù)社交媒體網(wǎng)站(包括以前的Twitter)一樣雇傭大量合同工負(fù)責(zé)內(nèi)容審查,而是將組建規(guī)模較小的內(nèi)部?jī)?nèi)容審查團(tuán)隊(duì),,這個(gè)專(zhuān)業(yè)安全團(tuán)隊(duì)的目的是既不過(guò)分破壞老板埃隆·馬斯克的“言論自由”公開(kāi)承諾,,又能避免平臺(tái)上出現(xiàn)不端內(nèi)容。

近一年后,,X上周宣布在德克薩斯州奧斯汀新建一個(gè)信任與安全卓越中心(Trust and Safety Center of Excellence),。X的代表在彭博社的一篇最新報(bào)道中宣稱(chēng),這個(gè)由100名內(nèi)容審查員組成的團(tuán)隊(duì),,規(guī)模遠(yuǎn)小于前信任與安全部門(mén)工作人員透露的500人的初步設(shè)想,。

在立法者高度關(guān)注社交媒體公司威脅兒童安全之際,X位于奧斯汀的安全中心顯然具有公關(guān)價(jià)值,。X的CEO琳達(dá)·雅卡里諾周三在參議院聽(tīng)證會(huì)上表示,,新中心將“在公司內(nèi)部有更多信任與安全人員,以加快擴(kuò)大我們的影響力”,。

雖然有批評(píng)者提到X宣布這一消息的時(shí)機(jī)有投機(jī)之嫌,,但X的奧斯汀計(jì)劃的細(xì)節(jié)還提出了一個(gè)與馬斯克的平臺(tái)有關(guān)的更大的問(wèn)題:馬斯克非常規(guī)的內(nèi)容審查模式的效果,,能否超越社交媒體行業(yè)糟糕的網(wǎng)絡(luò)安全歷史記錄,或者它是否只是代表了另外一種削減成本的途徑,,公司根本沒(méi)有太大興趣決定哪些內(nèi)容適合用戶(hù),。

《財(cái)富》雜志采訪的多位內(nèi)容審查專(zhuān)家和現(xiàn)任或前任X內(nèi)部人士認(rèn)為,與當(dāng)前的行業(yè)標(biāo)準(zhǔn)相比,,內(nèi)部專(zhuān)業(yè)團(tuán)隊(duì)具有明顯優(yōu)勢(shì),。但許多人也強(qiáng)調(diào)了內(nèi)部政策前后一致的重要性,以及投資于工具與技術(shù)的重要性,。

一位熟悉X信任與安全事務(wù)的消息人士解釋稱(chēng):“在審查能力方面,,X+100比單純的X更加強(qiáng)大?!钡@位消息人士表示:“相比在電腦前工作的人數(shù),,在某種程度上來(lái)說(shuō),更重要的是有基于已驗(yàn)證的減少傷害策略的明確政策,,并且有大規(guī)模執(zhí)行這些政策的必要工具和系統(tǒng),,但自2022年末以來(lái),X已經(jīng)拋棄了這兩者,?!?/p>

X并未回應(yīng)采訪或置評(píng)請(qǐng)求。

X的CEO琳達(dá)·雅卡里諾出席參議院的在線(xiàn)兒童安全聽(tīng)證會(huì)

攝影:ANDREW CABALLERO-REYNOLDS/法新社經(jīng)蓋蒂圖片社提供

為什么X決定在內(nèi)部組建內(nèi)容審查團(tuán)隊(duì)

2022年11月,,馬斯克以440億美元完成收購(gòu)之后,,X上泛濫的不良內(nèi)容,經(jīng)常成為公開(kāi)辯論和爭(zhēng)議的焦點(diǎn),。

在反數(shù)字仇恨中心(Center for Countering Digital Hate)發(fā)布的一份報(bào)告指控X未能審查“極端仇恨言論”后,,馬斯克起訴該組織以“毫無(wú)根據(jù)的指控”故意中傷公司。與此同時(shí),,有報(bào)道稱(chēng),,一些描繪虐待動(dòng)物的視頻在該平臺(tái)上廣泛傳播。就在上周,,AI生成的泰勒·斯威夫特的露骨內(nèi)容在該平臺(tái)上肆意傳播了17個(gè)小時(shí),之后平臺(tái)才徹底屏蔽了對(duì)她的姓名的搜索,。

前信任與安全部門(mén)員工表示,,X還嚴(yán)重依賴(lài)其社區(qū)筆記功能審查成百上千萬(wàn)活躍用戶(hù)。該功能允許用戶(hù)在帖子中添加筆記,,附上額外的背景說(shuō)明,。但這位消息人士強(qiáng)調(diào),這只是可用于內(nèi)容審查的“一種工具”,。此外,,《連線(xiàn)》(Wired)的一項(xiàng)調(diào)查發(fā)現(xiàn),該功能內(nèi)部存在協(xié)調(diào)宣傳虛假信息的情形,這凸顯出該公司缺乏重要的監(jiān)督,。

據(jù)估計(jì),,馬斯克裁撤了80%負(fù)責(zé)信任與安全的工程師,并削減了外包內(nèi)容審查員,。這些審查員的工作就是監(jiān)控和刪除違反公司政策的內(nèi)容,。據(jù)路透社報(bào)道,7月,,雅卡里諾對(duì)員工宣布,,三位領(lǐng)導(dǎo)人將監(jiān)管信任與安全領(lǐng)域的不同事務(wù),包括執(zhí)法和威脅中斷等,。然而據(jù)X的另外一位消息人士稱(chēng),,目前尚不確定信任與安全在公司內(nèi)部的級(jí)別,該部門(mén)似乎“不再是最高層級(jí)”,。

但馬斯克的社交媒體公司還在重新思考如何執(zhí)行內(nèi)容審查,。

計(jì)劃變了:歡迎來(lái)到奧斯汀

新奧斯汀中心實(shí)際上最初是灣區(qū)中心。據(jù)一位熟悉信任與安全事務(wù)的知情人士對(duì)《財(cái)富》雜志表示,,建立該中心的目的是在舊金山這樣的城市設(shè)立一個(gè)中心,,幫助招聘頂級(jí)多語(yǔ)言人才,這是應(yīng)對(duì)網(wǎng)絡(luò)暴力的關(guān)鍵,,因?yàn)槌^(guò)80%的X用戶(hù)生活在美國(guó)境外,。考慮到不同語(yǔ)言之間的細(xì)微差別,,以及每種語(yǔ)言有獨(dú)特的習(xí)語(yǔ)和表達(dá),,因此公司的出發(fā)點(diǎn)是招募熟悉特定語(yǔ)言或文化的員工,與沒(méi)有專(zhuān)業(yè)技能的低薪通才合同工相比,,他們能更好地區(qū)分玩笑和威脅,。

前X員工表示:“他們首先在灣區(qū)進(jìn)行招聘,測(cè)試他們的質(zhì)量水平,,以及他們的工作是否比外包更有效果,。 [X]招聘了一個(gè)小團(tuán)隊(duì)進(jìn)行測(cè)試,并評(píng)估他們準(zhǔn)確決策的能力,?!痹撚?jì)劃首先準(zhǔn)備招聘75人,如果能夠帶來(lái)成效,,將把團(tuán)隊(duì)規(guī)模擴(kuò)大到500人,。

但當(dāng)時(shí)馬斯克傾向于選擇一個(gè)更具有成本效益的地點(diǎn),他首選奧斯汀,,因?yàn)樗嘈抛约河心芰ξú煌Z(yǔ)言的人才,,并讓他們搬家,。這個(gè)變化讓項(xiàng)目經(jīng)歷了許多波折。

前X員工解釋稱(chēng):“招聘數(shù)百人,,讓他們正常運(yùn)轉(zhuǎn)起來(lái)并接受培訓(xùn),,這大約需要兩三個(gè)月時(shí)間。在開(kāi)始培訓(xùn)后,,你會(huì)知道實(shí)際上團(tuán)隊(duì)準(zhǔn)備就緒需要三個(gè),、四個(gè)或者五個(gè)月時(shí)間。這還是假設(shè)就業(yè)市場(chǎng)狀況良好,,而且你不需要讓人們搬家,,不會(huì)有各種麻煩事?!?/p>

據(jù)LinkedIn透露,,上個(gè)月,已有十多人加入X的奧斯汀中心擔(dān)任“信任與安全人員”,,而且大多數(shù)人似乎來(lái)自埃森哲(Accenture),。埃森哲為互聯(lián)網(wǎng)公司提供內(nèi)容審查承包商。目前尚不確定,,X與埃森哲之間是否有合同雇傭計(jì)劃,,即由埃森哲等咨詢(xún)公司招聘的員工,在合適的客戶(hù)公司擔(dān)任全職崗位,,但消息人士確認(rèn),,X過(guò)去曾使用過(guò)埃森哲的承包服務(wù)。

規(guī)則不斷變化帶來(lái)的麻煩

關(guān)于奧斯汀團(tuán)隊(duì)的具體工作重點(diǎn),,還有許多疑問(wèn),。他們將專(zhuān)注于審查僅涉及未成年人的內(nèi)容,還是僅在美國(guó)的內(nèi)容,?他們將專(zhuān)注于個(gè)人發(fā)帖,,還是會(huì)開(kāi)展性剝削調(diào)查?

前Twitter信任與安全委員會(huì)成員安妮·科利爾對(duì)《財(cái)富》雜志表示:“奧斯汀的百人團(tuán)隊(duì)將是全球內(nèi)容審查網(wǎng)絡(luò)必不可少的一個(gè)小節(jié)點(diǎn),。祝這個(gè)百人團(tuán)隊(duì)好運(yùn),。”

無(wú)論這個(gè)團(tuán)隊(duì)背負(fù)著什么任務(wù),,社交媒體審查專(zhuān)家均認(rèn)為,,公司需要大力投資AI工具,以最大程度提高團(tuán)隊(duì)的效率,。

例如,據(jù)The Verge報(bào)道,,2020年,,F(xiàn)acebook在全球雇傭了約15,000名審查員,,并宣布將“把AI與人類(lèi)審查員相結(jié)合,以減少錯(cuò)誤數(shù)量”,。 Snap采取了類(lèi)似做法,,并在一篇博客中表示,其使用“自動(dòng)化工具和人類(lèi)審查員進(jìn)行內(nèi)容審查”,。

據(jù)前X內(nèi)部人士透露,,公司一直在試驗(yàn)AI審查。馬斯克最近通過(guò)成立一年的初創(chuàng)公司X.AI開(kāi)發(fā)自己的大語(yǔ)言模型,,進(jìn)軍人工智能技術(shù)領(lǐng)域,,這將為人類(lèi)審查員團(tuán)隊(duì)提供一種寶貴的資源。

該內(nèi)部人士稱(chēng),,AI系統(tǒng)“只需要約三秒鐘時(shí)間,,就能判斷出每一條推文是否符合政策,它們的準(zhǔn)確率約為98%,,但任何公司依賴(lài)人類(lèi)審查員的準(zhǔn)確率都不超過(guò)65%,。”你可能想要同時(shí)看到使用AI和只依賴(lài)人類(lèi)的效果,,我認(rèn)為,,你會(huì)看到什么是兩者之間正確的平衡?!?/p>

無(wú)論AI工具和人類(lèi)審查員的表現(xiàn)多出色,,重要的是幕后的政策,而X在馬斯克的領(lǐng)導(dǎo)下在政策方面有所欠缺,。

熟悉X信任與安全事務(wù)的消息人士解釋稱(chēng),,政策應(yīng)該足夠靈活,能夠適應(yīng)文化背景,,它們也需要具有足夠的可預(yù)測(cè)性,,使所有人都能了解這些規(guī)則。該消息人士稱(chēng),,這對(duì)于大型平臺(tái)的內(nèi)容審查尤其重要,,因?yàn)榇笮推脚_(tái)有“成百甚至上千名審查員,必須了解和解釋穩(wěn)定的規(guī)則,。如果政策不斷變化,,你無(wú)法一致準(zhǔn)確地執(zhí)行規(guī)則?!?/p>

規(guī)則松散和由此導(dǎo)致的政策不明確,,一直是X在馬斯克領(lǐng)導(dǎo)下的弊病之一。

馬斯克收購(gòu)了X之后,,先后恢復(fù)了一批因違反平臺(tái)政策被封禁的賬號(hào),,其中包括違反新冠虛假信息政策的眾議員瑪喬麗·泰勒·格林,,發(fā)布了一則違反Twitter仇恨行為政策的恐跨性別故事的巴比倫·比,以及因?yàn)榕詰?yīng)該為被性侵承擔(dān)“一些責(zé)任”的言論而被封禁的安德魯·泰特(被Facebook,、Instagram和TikTok封禁),。在馬斯克入主X之后,這些人的賬號(hào)均已恢復(fù),。

有媒體懷疑,,馬斯克任內(nèi)最后一位信任與安全負(fù)責(zé)人艾拉·歐文的離開(kāi),與馬斯克批評(píng)團(tuán)隊(duì)刪除馬特·沃爾什的《何為女人》(What is a Woman?)恐跨性別紀(jì)錄片違反X的規(guī)則,,兩者之間存在一定的聯(lián)系,。雖然這部紀(jì)錄片違反了X的書(shū)面政策,但馬斯克卻堅(jiān)持禁止將其封禁,。

熟悉X審查事務(wù)的消息人士補(bǔ)充道:“我從來(lái)沒(méi)有明顯感覺(jué)到X在根據(jù)政策進(jìn)行審查,。該網(wǎng)站在線(xiàn)發(fā)布的規(guī)則似乎只是個(gè)幌子,是為了掩蓋其老板最終隨心所欲地發(fā)號(hào)施令,?!?/p>

前Twitter信任與安全委員會(huì)成員朱莉·英曼·格蘭特更加直白。她表示:“你不能指望用指頭堵住堤壩,,就能阻止在平臺(tái)上泛濫的兒童性暴露海嘯,,或者深度造假色情片的洪水?!备裉m特正在起訴該公司,,指控其在兒童性虐待材料方面缺乏透明度。

“根據(jù)我2014年至2016年在Twitter的從業(yè)經(jīng)歷,,這種專(zhuān)業(yè)能力的培養(yǎng)需要好幾年時(shí)間,,而要讓一個(gè)情況糟糕到面目全非的平臺(tái)做出有意義的改變,需要的時(shí)間更長(zhǎng),?!保ㄘ?cái)富中文網(wǎng))

翻譯:劉進(jìn)龍

審校:汪皓

Under owner Elon Musk, X is bringing some content moderators in-house. Will it help?

JONATHAN RAA/NURPHOTO VIA GETTY IMAGES

In the spring of 2023, when X was still called Twitter, the company began planning a new system to keep the most undesirable content off of its platform.

In place of the army of contract workers that policed most social media sites, including Twitter, the company would build its own, smaller, in-house team of content moderators — a specialized safety net to prevent the most egregious stuff from slipping through without crimping too much on Twitter owner Elon Musk’s outspoken commitment to “free speech.”

Last week, nearly one year later, X announced a new Trust and Safety Center of Excellence in Austin, Texas. The 100-person team of content moderators touted by an X representative in a Bloomberg news report is significantly smaller than the 500-person team that was initially envisioned, according to a former trust and safety staffer. And it’s unclear if X has hired more than a dozen or so people so far.

Still, at a time when lawmakers are turning up the heat on social media companies for endangering children, X’s safety center in Austin has clear PR value. The new center will “bring more agents in house to accelerate our impact,” X CEO Linda Yaccarino said at a Senate hearing on Wednesday.

While some critics took note of the opportunistic timing of the announcement, the details of X’s Austin plan raise a bigger question about the Musk-owned platform: Could Musk’s unconventional approach to content moderation outperform the social media industry’s woeful track record on online safety, or does it represent just another means of cutting costs by an organization with little interest in deciding which content is appropriate for its users?

According to several content moderation experts and current or former X insiders that Fortune spoke to, a team of in-house specialists could provide significant advantages compared to the current industry norms. But many also stressed the importance of a coherent underlying policy and investments in tools and technology.

“X+100 is better than just X, in terms of moderation capacity,” a source familiar with trust and safety at X explained. But, the person continued, “the number of humans at computers matters less, in some ways, than having clear policies rooted in proven harm reduction strategies, and the tools and systems necessary to implement those policies at scale — both of which have been dismantled since late 2022.”

X did not respond to requests for an interview or to comment for this story.

Linda Yaccarino, CEO of X, at a Senate hearing on online child safety

ANDREW CABALLERO-REYNOLDS/AFP VIA GETTY IMAGES

Why X decided to bring the content police in-house

The flood of problematic content on X has become a frequent topic of public debate and dispute since Musk’s $44 billion acquisition closed in November 2022.

After the Center for Countering Digital Hate published a report claiming that X failed to moderate “extreme hate speech,” Musk sued the group for doing calculated harm with “baseless claims.” Meanwhile, videos of graphic animal abuse have spread widely on the platform, according to reports. And just last week, explicit, AI-generated content featuring Taylor Swift circulated unchecked for 17 hours until the platform shut down the ability to search for her name at all.

X also leans heavily on its Community Notes feature, which allows approved users to add a note to posts with additional context, to moderate its millions of active users, the former trust and safety staffer said. But the person emphasized that this is merely “one tool” that should be used for moderation. What’s more, a Wired investigation uncovered coordinated efforts within the feature to propagate disinformation, highlighting a lack of significant oversight from the company.

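Community Notes’ publicly documented scoring is a bridging-based matrix factorization: a note is surfaced only when raters who usually disagree with each other both rate it helpful, which the model captures as a note “intercept” that survives after viewpoint factors absorb partisan agreement. The toy scorer below is a loose sketch of that idea, not X’s actual code; the ratings data, factor dimension, learning rate, and 0.4 cutoff are all illustrative assumptions.

```python
# Toy sketch of a bridging-based note scorer, loosely inspired by the
# matrix-factorization approach Community Notes has documented publicly.
# Data, hyperparameters, and the cutoff are illustrative, not X's values.
import numpy as np

rng = np.random.default_rng(0)

# (user, note, rating): 1.0 = "helpful", 0.0 = "not helpful".
# Note 0 draws agreement from both camps; note 1 splits them.
ratings = [
    (0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0), (3, 0, 1.0),
    (0, 1, 1.0), (1, 1, 1.0), (2, 1, 0.0), (3, 1, 0.0),
]
n_users, n_notes, dim = 4, 2, 1

user_b = np.zeros(n_users)                   # rater intercepts
note_b = np.zeros(n_notes)                   # note intercepts (the score that matters)
user_f = rng.normal(0, 0.1, (n_users, dim))  # rater viewpoint factors
note_f = rng.normal(0, 0.1, (n_notes, dim))  # note viewpoint factors

lr, lam = 0.05, 0.03  # SGD learning rate, L2 penalty
for _ in range(3000):
    for u, n, r in ratings:
        err = (user_b[u] + note_b[n] + user_f[u] @ note_f[n]) - r
        # Penalizing intercepts and factors pushes one-sided agreement into
        # the factor terms, so a high note intercept requires broad support.
        new_uf = user_f[u] - lr * (err * note_f[n] + lam * user_f[u])
        note_f[n] -= lr * (err * user_f[u] + lam * note_f[n])
        user_f[u] = new_uf
        user_b[u] -= lr * (err + lam * user_b[u])
        note_b[n] -= lr * (err + lam * note_b[n])

CUTOFF = 0.4  # illustrative; the production system uses tuned thresholds
for n in range(n_notes):
    verdict = "helpful" if note_b[n] >= CUTOFF else "not shown"
    print(f"note {n}: intercept {note_b[n]:+.2f} -> {verdict}")
```

Because the split over note 1 aligns with the learned viewpoint factors, only note 0, which draws support from both camps, should finish with an intercept above the cutoff.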
By some estimates, Musk has cut 80% of the engineers dedicated to trust and safety and thinned the ranks of the outsourced content moderators whose job is to monitor and remove content that violates the company’s policies. In July, Yaccarino announced to staff that three leaders would oversee various aspects of trust and safety, such as law enforcement operations and threat disruptions, Reuters reported. However, another source at X said it is unclear where trust and safety stands within the organization’s hierarchy, noting that the group doesn’t appear to be “at the top level anymore.”

However, within Musk’s social media company, there also has been an effort to rethink how the job is done.

Change of plans: Welcome to Austin

The new Austin center actually began as a Bay Area center. The intention was to establish the center in a city like San Francisco, which would help recruit top-tier multilingual talent—crucial for countering Internet trolls, since more than 80% of X’s user base lives outside the U.S., a source familiar with trust and safety under Musk told Fortune. Given the nuances of individual languages, and the idioms and expressions unique to each one, the idea was that someone familiar with a specific language or culture could, for example, better distinguish a joke from a threat than could a low-paid generalist contract worker with no specialized skills.

“They actually started this by hiring people in the Bay Area to test their quality level and whether or not it would work better than having it outsourced,” the former staffer said. “[X] hired a small team of people and tested it out and their ability to make accurate decisions.” The plan called for starting with 75 staffers and eventually scaling to a 500-person team if it delivered results.

However, Musk at that time leaned toward a more cost-effective location, favoring Austin, because he was certain of his ability to attract, and potentially relocate, people proficient in various languages. The change has added a few wrinkles to the project.

“Having to hire hundreds of people and get them up and running and trained and all that is a roughly two to three month process,” the former X staffer explained. “Then you start training and so you know, realistically you’re looking at three, four or five months before you get a team in place. That assumes the job market is awesome, right? And you don’t have to relocate people and all of that fun stuff.”

According to LinkedIn, a dozen recruits have joined X as “trust and safety agents” in Austin over the last month — and most appeared to have moved from Accenture, a firm that provides content moderation contractors to Internet companies. It’s not clear if X has a contract-to-hire plan in place with Accenture—whereby workers retained by consulting firms like Accenture are given full-time roles at a client company when there’s a good fit—but the source confirmed that X has used Accenture’s contracting services in the past.

The trouble with enforcing rules that are constantly shifting

There are a lot of questions about what exactly the Austin team will focus on. Will they focus on content involving only minors, or only in the U.S.? Will they focus on individual posts, or conduct investigations into sexual exploitation?

“100 people in Austin would be one tiny node in what needs to be a global content moderation network,” former Twitter trust and safety council member Anne Collier told Fortune. “100 people in Austin, I wish them luck.”

Whatever their task, social media moderation experts agree that the company will need to make a significant investment in AI tools for the team to be most effective.

Facebook, for example, employed about 15,000 moderators globally in 2020, when it announced it was “marrying AI and human reviewers to make less total mistakes,” the Verge reported at the time. Snap operates similarly, stating in a blog post that it uses “a combination of automated tools and human review to moderate.”

According to the former X insider, the company has experimented with AI moderation. And Musk’s latest push into artificial intelligence technology through X.AI, a one-year-old startup that has developed its own large language model, could provide a valuable resource for the team of human moderators.

An AI system “can tell you in about roughly three seconds for each of those tweets, whether they’re in policy or out of policy, and by the way, they’re at the accuracy levels about 98% whereas with human moderators, no company has better accuracy level than like 65%,” the source said. “You kind of want to see at the same time in parallel what you can do with AI versus just humans and so I think they’re gonna see what that right balance is.”

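Taken at face value, those numbers imply a triage pipeline: let the model act on posts it classifies with high confidence and route everything else to the human team. The sketch below is a hypothetical illustration of that split, not a description of X’s systems; the classifier stub, labels, and 0.9 confidence cutoff are invented for the example.

```python
# Hypothetical sketch of an AI-first moderation queue with a human fallback.
# Nothing here reflects X's real pipeline; the stub classifier and the 0.9
# cutoff are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str         # "in_policy" or "out_of_policy"
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def classify_with_model(post: str) -> Verdict:
    """Stand-in for an LLM policy classifier; a real one would call a model."""
    flagged = any(term in post.lower() for term in ("gore", "doxx", "csam"))
    return Verdict("out_of_policy" if flagged else "in_policy",
                   0.99 if flagged else 0.70)

def route(post: str, human_queue: list[str], cutoff: float = 0.9) -> str:
    """Act automatically on high-confidence verdicts; escalate the rest."""
    verdict = classify_with_model(post)
    if verdict.confidence >= cutoff:
        return "removed" if verdict.label == "out_of_policy" else "kept"
    human_queue.append(post)  # a human moderator makes the final call
    return "escalated"

queue: list[str] = []
for post in ["nice sunset photo", "graphic gore video"]:
    print(post, "->", route(post, queue))
print("awaiting human review:", queue)
```

Lowering the cutoff shifts work from the human queue to the model; raising it does the opposite. Finding that tradeoff is presumably the “right balance” the insider describes.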
But no matter how good the AI tools and the moderators, the performance is only as good as the policy that drives it, and that’s an area where X has struggled under Musk.

Policies need to be flexible enough to adapt to cultural contexts, but they also need to be sufficiently predictable for everyone to understand what the rules are, the source familiar with trust and safety at X explained. This is especially important when moderating content on a large platform “where hundreds or even thousands of moderators have to understand and interpret a stable set of rules. You can’t implement rules consistently and accurately if they’re constantly shifting,” the person said.

The loosening of the rules, and the resulting lack of clarity, has been one constant at X under Musk’s stewardship.

After his takeover, Musk went on to reinstate a slate of accounts that had been banned for breaking the platform’s policies: Rep. Marjorie Taylor Greene, who violated COVID-19 misinformation policies; the Babylon Bee, which posted a transphobic story that violated Twitter’s hateful conduct policy; and Andrew Tate (banned from Facebook, Instagram, and TikTok), who was suspended for saying that women should bear “some responsibility” for being sexually assaulted.

Some outlets speculated there was a link between the exit of Ella Irwin—the final head of trust and safety during Musk’s tenure—and Musk’s criticism of the team’s decision to remove Matt Walsh’s transphobic “What is a Woman?” documentary, a violation of X’s rules. Despite the violation of X’s written policy, Musk insisted the documentary stay up.

“It’s not obvious to me that X moderates in accordance with policies at all anymore. The site’s rules as published online seem to be a pretextual smokescreen to mask its owner ultimately calling the shots in whatever way he sees it,” the source familiar with X moderation added.

Julie Inman Grant, a former Twitter trust and safety council member who is now suing the company for lack of transparency over CSAM, is more blunt in her assessment: “You cannot just put your finger back in the dike to stem a tsunami of child sexual expose – or a flood of deepfake porn proliferating the platform,” she said.

“In my experience at Twitter from 2014 to 2016, it took literally years to build this expertise – and it will take much more than that to make a meaningful change to a platform that has become so toxic it is almost unrecognizable.”

財(cái)富中文網(wǎng)所刊載內(nèi)容之知識(shí)產(chǎn)權(quán)為財(cái)富媒體知識(shí)產(chǎn)權(quán)有限公司及/或相關(guān)權(quán)利人專(zhuān)屬所有或持有。未經(jīng)許可,,禁止進(jìn)行轉(zhuǎn)載,、摘編、復(fù)制及建立鏡像等任何使用,。
0條Plus
精彩評(píng)論
評(píng)論

撰寫(xiě)或查看更多評(píng)論

請(qǐng)打開(kāi)財(cái)富Plus APP

前往打開(kāi)
熱讀文章