The line to enter the 985-seat basement auditorium at University College London where OpenAI cofounder and CEO Sam Altman is about to speak stretches out the door, snakes up several flights of stairs, carries on into the street, and then meanders most of the way down a city block. It inches forward, past a half-dozen young men holding signs calling for OpenAI to abandon efforts to develop artificial general intelligence—or A.I. systems that are as capable as humans at most cognitive tasks. One protester, speaking into a megaphone, accuses Altman of having a Messiah complex and risking the destruction of humanity for the sake of his ego.
Messiah might be taking it a bit far. But inside the hall, Altman received a rock star reception. After his talk, he was mobbed by admirers, asking him to pose for selfies and soliciting advice on the best way for a startup to build a “moat.” “Is this normal?” one incredulous reporter asks an OpenAI press handler as we stand in the tight scrum around Altman. “It’s been like this pretty much everywhere we’ve been on this trip,” the spokesperson says.
Altman is currently on an OpenAI “world tour”—visiting cities from Rio and Lagos to Berlin and Tokyo—to talk to entrepreneurs, developers, and students about OpenAI’s technology and the potential impact of A.I. more broadly. Altman has done this kind of world trip before. But this year, after the viral popularity of A.I.-powered chatbot ChatGPT, which has become the fastest growing consumer software product in history, it has the feeling of a victory lap. Altman is also meeting with key government leaders. Following his UCL appearance, he was off to meet U.K. Prime Minister Rishi Sunak for dinner, and he will be meeting with European Union officials in Brussels.
What did we learn from Altman’s talk? Among other things, that he credits Elon Musk with convincing him of the importance of deep tech investing, that he thinks advanced A.I. will reduce global inequality, that he equates educators’ fears of OpenAI’s ChatGPT with earlier generations’ hand-wringing over the calculator, and that he has no interest in living on Mars.
Altman, who has called on government to regulate A.I. in testimony before the U.S. Senate and recently coauthored a blog post calling for the creation of an organization like the International Atomic Energy Agency to police the development of advanced A.I. systems globally, said that regulators should strike a balance between America’s traditional laissez-faire approach to regulating new technologies and Europe’s more proactive stance. He said that he wants to see the open source development of A.I. thrive. “There’s this call to stop the open source movement that I think would be a real shame,” he said. But he warned that “if someone does crack the code and builds a superintelligence, however you want to define that, probably some global rules on that are appropriate.”
“We should treat this at least as seriously as we treat nuclear material, for the biggest scale systems that could give birth to superintelligence,” Altman said.
The OpenAI CEO also warned about the ease of churning out massive amounts of misinformation thanks to technology like his own company’s ChatGPT bot and DALL-E text-to-image tool. More worrisome to Altman than generative A.I. being used to scale up existing disinformation campaigns is the tech’s potential to create individually tailored and targeted disinformation. OpenAI and others developing proprietary A.I. models could build better guardrails against such activity, he noted—but he said the effort could be undermined by open source development, which allows users to modify software and remove guardrails. And while regulation “could help some,” Altman said that people will need to become much more critical consumers of information, comparing it to the period when Adobe Photoshop was first released and people were concerned about digitally edited photographs. “The same thing will happen with these new technologies,” he said. “But the sooner we can educate people about it, because the emotional resonance is going to be so much higher, I think the better.”
Altman posited a more optimistic vision of A.I. than he has sometimes suggested in the past. While some have postulated that generative A.I. systems will make global inequality worse by depressing wages for average workers or causing mass unemployment, Altman said he thought the opposite would be true. He noted that A.I., by enhancing economic growth and productivity globally, ought to lift people out of poverty and create new opportunities. “I’m excited that this technology can, like, bring the missing productivity gains of the last few decades back, and more than catch up,” he said. He noted his basic thesis, that the two “limiting reagents” of the world are the cost of intelligence and the cost of energy. If those two become dramatically less expensive, he said, it ought to help poorer people more than rich people. “This technology will lift all of the world up,” he said.
He also said he thought there were versions of A.I. superintelligence, a future technology that some, including Altman in the past, have said could pose severe dangers to all of humanity, that can be controlled. “The way I used to think about heading towards superintelligence is that we were going to build this one, extremely capable system,” he said, noting that such a system would be inherently very dangerous. “I think we now see a path where we very much build these tools that get more and more powerful, and there are billions of copies, trillions of copies being used in the world, helping individual people be way more effective, capable of doing way more; the amount of output that one person can have can dramatically increase. And where the superintelligence emerges is not just the capability of our biggest single neural network but all of the new science we are discovering, all of the new things we’re creating.”
In response to a question about what he learned from various mentors, Altman cited Elon Musk. “Certainly learning from Elon about what is just, like, possible to do and that you don’t need to accept that, like, hard R&D and hard technology is not something you ignore, that’s been super valuable,” he said.
He also fielded a question about whether he thought A.I. could help human settlement of Mars. “Look, I have no desire to go live on Mars, it sounds horrible,” he said. “But I’m happy other people do.” He said robots should be sent to Mars first to help terraform the planet to make it more hospitable for human habitation.
Outside the auditorium, the protesters kept up their chants against the OpenAI CEO. But they also paused to chat thoughtfully with curious attendees who stopped by to ask them about their protest.
“What we’re trying to do is raise awareness that A.I. does pose these threats and risks to humanity right now in terms of jobs and the economy, bias, misinformation, societal polarization, and ossification, but also slightly longer term, but not really long term, more existential threats,” said Alistair Stewart, a 27-year-old graduate student in political science and ethics at UCL who helped organize the protests.
Stewart cited a recent survey of A.I. experts that found 48% of them thought there was a 10% or greater chance of human extinction or other grave threats from advanced A.I. systems. He said that he and others protesting Altman’s appearance were calling for a pause in the development of A.I. systems more powerful than OpenAI’s GPT-4 large language model until researchers had “solved alignment”—a phrase that basically means figuring out a way to prevent a future superintelligent A.I. system from taking actions that would cause harm to human civilization.
That call for a pause echoes the one made by thousands of signatories of an open letter, including Musk and a number of well-known A.I. researchers and entrepreneurs, that was published by the Future of Life Institute in late March.
Stewart said his group wanted to raise public awareness of the threat posed by A.I. so that they could pressure politicians to take action and regulate the technology. Earlier this week, protesters from a group calling itself Pause AI also began picketing the London offices of Google DeepMind, another advanced A.I. research lab. Stewart said his group was not affiliated with Pause AI, although the two groups shared many of the same goals and objectives.