As businesses begin to automate low-level service work, companies may start creating fake tasks to test employee suitability for senior positions, says Kai-Fu Lee, the CEO of Sinovation Ventures and former president of Google China.
“We may need to have a world in which people have ‘the pretense of working,’ but actually they’re being evaluated for upward mobility,” Lee said at a virtual event hosted by Collective[i], a company that applies A.I. to sales and CRM systems.
Work at higher levels of a company, which requires deeper and more creative thinking, is harder to automate and must be completed by humans. But if entry-level work is fully automated, companies don't have a reason to hire and groom young talent. So, Lee says, companies will need to find a new way to hire entry-level employees and build a path for promotion.
It was one of several predictions Lee made about the possible social effects of widespread adoption of A.I. systems. Some were drawn from his upcoming book, AI 2041: Ten Visions for Our Future—a collection of 10 short stories, written in partnership with science fiction author Chen Qiufan, that illustrate ways that A.I. might change individuals and organizations. “Almost a book version of Black Mirror in a more constructive format,” joked Lee, a well-known expert in the field of A.I. and machine learning and author of the 2018 book AI Superpowers: China, Silicon Valley, and the New World Order.
Talk of A.I. and its role in social behavior often centers on the tendency of algorithms to reflect and exacerbate existing social biases. For example, a contest by Twitter to root out bias in its algorithms found that its image-cropping model prioritized thinner white women over people of other demographics. Data-driven models risk reinforcing social inequality, especially as more individuals, companies, and governments rely on them to make consequential decisions. As Lee noted, when a “company has too much power and data, [even if] it’s optimizing an objective function that’s ostensibly with the user interest [in mind], it could still do things that could be very bad for the society.”
Despite the potential for A.I. to do harm, Lee has faith in developers and A.I. technicians to self-regulate. He supported the development of metrics to help companies judge the performance of their A.I. systems, in a manner similar to the measurements used to determine a firm's performance against environmental, social, and corporate governance (ESG) indicators. "You just need to provide solid ways for these types of A.I. ethics to become regularly measured things and become actionable," he said.
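Lee did not name any particular metric. Purely as an illustration of what a "regularly measured" A.I.-ethics number could look like in practice, the minimal sketch below computes a demographic parity gap (the spread in positive-prediction rates across groups) and flags a model for review when the gap crosses a threshold. The choice of metric, the group labels, and the 0.1 threshold are assumptions made for this example, not anything drawn from Lee's remarks or his book.

```python
# Illustrative sketch only: one possible "regularly measured" A.I.-ethics metric.
# Demographic parity gap = difference in positive-prediction rates across groups.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rates across groups,
    along with the per-group rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = favorable outcome) and group labels.
gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(rates)       # {'a': 0.75, 'b': 0.25}
print(gap > 0.1)   # True -> escalate for review, much like an ESG exception report
```

Run on a recurring schedule against a held-out audit set, a number like this could be tracked and reported alongside other business metrics, which is the kind of routine, actionable measurement Lee describes.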
Yet he noted that more work needs to be done to train programmers, including the creation of tools to help "detect potential issues with bias." More broadly, he suggested that A.I. engineers adopt something "similar to the Hippocratic oath in medical training," referring to the set of professional ethics that doctors adhere to during their dealings with patients, most commonly summarized as "Do no harm."
“People working on A.I. need to realize the massive responsibilities they have on people’s lives when they program," Lee said. "It’s not just a matter of making more money for the Internet company that they work for.”