
Elon Musk has repeatedly referred to AI as a “civilizational risk.” Geoffrey Hinton, one of the founding fathers of AI research, changed his tune recently, calling AI an “existential threat.” And then there’s Mustafa Suleyman, cofounder of DeepMind, a firm formerly backed by Musk that has been on the scene for over a decade, and coauthor of the newly released “The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma.” One of the most prominent and longest-tenured experts in the field, he thinks such far-reaching concerns aren’t as pressing as others make them out to be, and in fact, the challenge from here on out is pretty straightforward.
The risks posed by AI have been front and center in public debates throughout 2023 since the technology vaulted into the public consciousness, becoming the subject of fascination in the press. “I just think that the existential-risk stuff has been a completely bonkers distraction,” Mustafa told MIT Technology Review last week. “There’s like 101 more practical issues that we should all be talking about, from privacy to bias to facial recognition to online moderation.”
The most pressing issue, in particular, should be regulation, he says. Suleyman is bullish on governments across the world being able to effectively regulate AI. “I think everybody is having a complete panic that we’re not going to be able to regulate this,” Suleyman said. “It’s just nonsense. We’re totally going to be able to regulate it. We’ll apply the same frameworks that have been successful previously.”
His conviction is in part born of the successful regulation of past technologies that were once considered cutting edge, such as aviation and the internet. He argues: Without proper safety protocols for commercial flights, passengers would never have trusted airlines, which would have hurt business. On the internet, consumers can visit a myriad of sites, but activities like selling drugs or promoting terrorism are banned—although not eliminated entirely.
On the other hand, as the Review’s Will Douglas Heaven noted to Suleyman, some observers argue that current internet regulations are flawed and don’t sufficiently hold big tech companies accountable. In particular, Section 230 of the Communications Decency Act, one of the cornerstones of current internet legislation, offers platforms safe harbor for content posted by third-party users. It’s the foundation on which some of the biggest social media companies are built, shielding them from any liability for what gets shared on their websites. In February, the Supreme Court heard two cases that could alter the legislative landscape of the internet.
To bring AI regulation to fruition, Suleyman wants a combination of broad, international regulation to create new oversight institutions and smaller, more granular policies at the “micro level.” A first step that all aspiring AI regulators and developers can take is to limit “recursive self-improvement,” or AI’s ability to improve itself. Limiting this specific capability of artificial intelligence would be a critical first step to ensure that none of its future developments happen entirely without human oversight.
“You wouldn’t want to let your little AI go off and update its own code without you having oversight,” Suleyman said. “Maybe that should even be a licensed activity—you know, just like for handling anthrax or nuclear materials.”
Without governing some of the minutiae of AI, including at times the “actual code” used, legislators will have a hard time ensuring their laws are enforceable. “It’s about setting boundaries, limits that an AI can’t cross,” Suleyman says.
To make sure that happens, governments should be able to get “direct access” to AI developers to ensure they don’t cross whatever boundaries are eventually established. Some of those boundaries should be clearly marked, such as prohibiting chatbots from answering certain questions, or privacy protections for personal data.
Governments worldwide are working on AI regulations
During a speech at the UN Tuesday, President Joe Biden sounded a similar tune, calling for world leaders to work together to mitigate AI’s “enormous peril” while making sure it is still used “for good.”
And domestically, Senate majority leader Chuck Schumer (D-N.Y.) has urged lawmakers to move swiftly in regulating AI, given the rapid pace of change in the technology’s development. Last week, Schumer invited executives from the biggest tech companies, including Tesla CEO Elon Musk, Microsoft CEO Satya Nadella, and Alphabet CEO Sundar Pichai, to Washington for a meeting to discuss prospective AI regulation. Some lawmakers were skeptical of the decision to invite executives from Silicon Valley to discuss the policies that would seek to regulate their companies.
One of the earliest governmental bodies to regulate AI was the European Union, which in June passed draft legislation requiring developers to share what data is used to train their models and severely restricting the use of facial recognition software—something Suleyman also said should be limited. A Time report found that OpenAI, which makes ChatGPT, lobbied EU officials to weaken some portions of their proposed legislation.
China has also been one of the earliest movers on sweeping AI legislation. In July, the Cyberspace Administration of China released interim measures for governing AI, including explicit requirements to adhere to existing copyright laws and establishing which types of developments would need government approval.
Suleyman for his part is convinced governments have a critical role to play in the future of AI regulations. “I love the nation-state,” he said. “I believe in the power of regulation. And what I’m calling for is action on the part of the nation-state to sort its shit out. Given what’s at stake, now is the time to get moving.”