
What does artificial intelligence need? Empathy and regulation

McKenna Moore, December 18, 2018

As the technology becomes more widespread, regulation and empathy should be at the forefront.

The AI revolution is upon us.

Machine learning, one of the key artificial intelligence technologies, has already been deployed at more companies than you might expect. As it gains even greater adoption, regulation and empathy should be at the forefront.

Rana el Kaliouby, co-founder and CEO of emotional AI company Affectiva, said at Fortune’s Most Powerful Women Next Gen 2018 summit in Laguna Niguel, Calif., last Wednesday that EQ is just as important in technology as IQ. Because of the frequency with which people interact with technology and its growing impact on our lives, it’s important that empathy be built into it, she said.

One way to do that, el Kaliouby said, is to have diverse teams work on the technology. As an example of the problem, she said that middle-aged white men usually create and train face recognition AI using images of people who look like themselves, which means the technology often doesn’t work as well, if at all, on women of color.
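One way to make that failure mode visible is to report a model’s accuracy per demographic subgroup rather than as a single overall number. The sketch below is a minimal illustration of that kind of audit, not anything from Affectiva or the panel; the subgroup labels and evaluation records are hypothetical placeholders standing in for real labeled test data and real model output.

```python
# Minimal sketch of a subgroup accuracy audit for a face recognition model.
# The records below are hypothetical; a real audit would use thousands of
# labeled examples per subgroup and the model's actual predictions.
from collections import defaultdict

evaluation_records = [
    # (subgroup label, did the model identify this face correctly?)
    ("lighter-skinned male", True),
    ("lighter-skinned male", True),
    ("darker-skinned female", False),
    ("darker-skinned female", True),
]

totals = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
for subgroup, correct in evaluation_records:
    totals[subgroup][0] += int(correct)
    totals[subgroup][1] += 1

for subgroup, (correct, total) in totals.items():
    print(f"{subgroup}: {correct / total:.0%} accuracy on {total} samples")
```

A gap between subgroups in a report like this is the quantitative version of the problem el Kaliouby describes: the model works well only for the people it was trained to see.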

“It goes back to the teams designing these algorithms, and if your team isn’t diverse they aren’t going to be thinking about how this will work on a woman wearing hijab,” she said. “You solve for the problems you know.”

Navrina Singh, principal product lead of Microsoft AI, said that a perfect example of building technology with empathy in mind came to her during a project with an e-commerce site that trying to make it easier for customers in India to buy it products. Due to the low literacy rate in the country, the company built speech-to-text functionality for users who couldn’t read. Beforehand, the company made a concerted effort to train its AI in dialects and cultures from all around India, because the intent and meaning of speech varies based on background. Deciphering intent is one of the greatest challenges and opportunities in AI right now, Inhi Cho Suh, general manager of customer engagement at IBM Watson, said.
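A rough sense of why that dialect coverage matters: an intent classifier only recognizes phrasings it has seen during training. The sketch below is a hypothetical illustration using scikit-learn, not the e-commerce site’s or Microsoft’s actual system; the utterances and intent labels are invented, and the transcriptions are assumed to come from a separate speech-to-text step.

```python
# Minimal sketch: mapping transcribed utterances to purchase intents.
# Training data deliberately mixes regional/colloquial phrasings so the
# same intent is recognized however it is worded.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    ("I want to buy a sari", "purchase"),
    ("add this to my basket", "purchase"),
    ("put it in the cart, yaar", "purchase"),        # colloquial phrasing
    ("where is my parcel", "order_status"),
    ("has my order been dispatched", "order_status"),
    ("track my delivery please", "order_status"),
]

texts, intents = zip(*training_utterances)
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, intents)

# Classify a new, differently worded request.
print(classifier.predict(["please add it to the cart"]))
```

If the training set only contained phrasings from one region or register, requests worded differently would be misclassified, which is exactly the gap Singh’s team tried to close by collecting data from across India.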

Regulation is another big topic in machine learning at the moment. With bots and other related technology becoming more sophisticated, laws are necessary to check that power, the panelists agreed. Suh said that technology and regulation should be used to prevent misuse, while el Kaliouby stressed the need for mandatory ethics training for college computer science and engineering majors.

Singh shared the acronym F.A.T.E., which stands for fairness, accountability, transparency and ethics, to sum up the key ideas to keep in mind when creating and regulating this technology. Although there is a lot of bad news about technology, like the Cambridge Analytica scandal, in which a British political firm accessed personal data on up to 87 million Facebook users, we must not let fear guide the debate, said Vidhya Ramalingham, founder of counter-terrorism technology company Moonshot CVE.
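One way to turn the “fairness” in F.A.T.E. from a slogan into something auditable is to compute an explicit disparity metric for an automated decision. The sketch below shows one common choice, the gap in positive-decision rates between two groups (demographic parity); the decision lists and the example threshold are hypothetical and were not named by the panelists.

```python
# Minimal sketch of a demographic parity check on an automated decision.
# The decision lists are hypothetical; 1 = approved by the model.
decisions_group_a = [1, 1, 0, 1, 0, 1, 1, 0]
decisions_group_b = [1, 0, 0, 0, 1, 0, 0, 0]

rate_a = sum(decisions_group_a) / len(decisions_group_a)
rate_b = sum(decisions_group_b) / len(decisions_group_b)
parity_gap = abs(rate_a - rate_b)

print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}, gap = {parity_gap:.2f}")
# An internal audit or a regulator might require the gap to stay below a
# chosen threshold (e.g. 0.1) before the system is deployed.
```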

“Policy should not be written out of fear; it should be written in an educated and informed manner,” she said.
