

Tech heavyweights debate open-source AI: "Would you open source the Manhattan Project?"

Paolo Confino
2024-03-07

Marc Andreessen is a staunch supporter of open-source AI, which he sees as a safeguard against a handful of Big Tech firms and government agencies controlling access to the most cutting-edge AI research.



Translator: Liu Jinlong

Proofreader: Wang Hao


Vinod Khosla and Marc Andreessen, both founders turned investors, spent part of their weekends debating each other on whether the pursuit of artificial general intelligence—the idea that a machine could become as smart as a human—should be open-source.

The debate kicked off with a post from Khosla praising OpenAI and Sam Altman, the company’s CEO.

“We have known @sama since the early days of @OpenAI and fully support him and the company,” Khosla wrote. “These lawsuits are a massive distraction from the goals of getting to AGI and its benefits.”

Andreessen responded to Khosla’s message by accusing him of “l(fā)obbying to ban open source” research in AI.

Andreessen seemed to take issue with Khosla’s support for OpenAI because the firm has walked away from its previous open-source ethos. Since the advent of AI, Andreessen has come out as a big supporter of open-source AI, advocating it as a means to safeguard against a select few Big Tech firms and government agencies controlling access to the most cutting-edge AI research.

Both in this debate and in the past, Andreessen has been dismissive of the concerns raised by some of AI’s biggest critics. Andreessen has previously chalked up these worries to fears of disruption and uncertainty rather than the technology being malicious in and of itself—a point he reiterated in his exchange on X.

“Every significant new technology that advances human well-being is greeted by a ginned-up moral panic,” Andreessen posted on X. “This is just the latest.”

Khosla, on the other hand, tends to look at AI through a geopolitical and national-security lens rather than through a strictly entrepreneurial one.

In responding to Andreessen's claims that he isn't in favor of open source, Khosla said the stakes were too high.

“Would you open source the Manhattan Project?” Khosla replied to Andreessen. “This one is more serious for national security. We are in a tech economic war with China and AI that is a must win. This is exactly what patriotism is about, not slogans.”

The back-and-forth discussion between Khosla and Andreessen saw the two opine on Sam Altman, OpenAI’s lawsuits, and Elon Musk, who chimed in himself at one point. The debate also explored whether anyone should be allowed to pursue any form of AI research, or if its most advanced versions should be delegated to the government. So while it may have seemed like just some online sniping between a group of extraordinarily successful Silicon Valley entrepreneurs, it contained a microcosm of the ongoing and critical debate around open-source AI.

Ultimately, neither camp wants to thoroughly ban open- or closed-source research. But part of the debate around limiting open-source research hinges on concerns it is being co-opted as a bad-faith argument to ensure regulatory capture for the biggest companies already making headway on AI—a point that legendary AI researcher and Meta’s former chief AI scientist Yann LeCun made when he entered the fray on X.

“No one is asking for closed-source AI to be banned,” LeCun wrote. “But some people are heavily lobbying governments around the world to ban (or limit) open source AI. Some of those people invoke military and economic security. Others invoke the fantasy of existential risk.”

Elsewhere in Silicon Valley, famed angel investor Ron Conway asked leading AI companies to pledge to “building AI that improves lives and unlocks a better future for humanity.” So far he has enlisted the likes of Meta, Google, Microsoft, and OpenAI as signatories to the letter.

Andreessen, sticking with Khosla’s Manhattan Project analogy, raised concerns about OpenAI’s safety protocols. He believes without the same level of security that surrounded the Manhattan Project—such as a “rigorous security vetting and clearance process,” “constant internal surveillance,” and “hardened physical facilities” with “24×7 armed guards”—OpenAI’s most advanced research could be stolen by the U.S.’s geopolitical rivals.

OpenAI did not immediately respond to a request for comment.

Andreessen, though, appears to have been doing more of a thought exercise than arguing a point, writing in response to his own post, “Of course every part of this is absurd.”

Elon Musk enters the debate to criticize OpenAI’s security

At this point, OpenAI cofounder Elon Musk chimed in.

“It would certainly be easy for a state actor to steal their IP,” Musk replied to Andreessen’s post about security at OpenAI.

Khosla, too, made mention of Musk, calling his decision to sue OpenAI “sour grapes.” Last week, Musk filed a lawsuit against OpenAI, alleging it breached the startup’s founding agreement. According to Musk, OpenAI’s close relationship with Microsoft and its decision to stop making its work open-source violated the organization’s mission. OpenAI took a similar tack to Khosla, accusing Musk of having “regrets about not being involved with the company today,” according to a memo obtained by Bloomberg.

Musk responded by saying Khosla “doesn’t know what he is talking about,” regarding his departure from OpenAI in 2019.

Khosla’s venture capital firm Khosla Ventures is a longtime backer of OpenAI. In 2019, Khosla Ventures invested $50 million into OpenAI. As such, he didn’t take kindly to Musk’s lawsuit. “Like they say if you can’t innovate, litigate and that’s what we have here,” Khosla wrote on X, tagging both Musk and OpenAI.

With Musk now involved, the debate continued. Khosla remained adamant AI was more important than the invention of the nuclear bomb and therefore couldn’t afford to be entirely open-source—though he did agree with Musk and Andreessen that its top firms should have more rigorous security measures, even relying on the government for assistance.

“Agree national cyber help and protection should be given and required for all [state of the art] AI,” Khosla wrote. “AI is not just cyber defense but also about winning economically and politically globally. The future of the world’s values and political system depends on it.”

Despite his reservations about making all of AI research open-source, Khosla said he did not want development to halt. “[State of the art] AI should not be slowed because enemy nation states are orders of magnitude bigger danger in my view,” Khosla said in response to Andreessen.

But Khosla and Andreessen did find some common ground on the question of AI alignment, which refers to the set of ideologies, principles, and ethics that will inform the models on which AI technologies are developed.
