
AI Is Easily Fooled: It Can't Tell a Turtle From a Rifle

Jonathan Vanian | November 13, 2017

When artificial intelligence works as intended, computers can quickly identify a cat in a photo. But when it goes wrong, they can mistake a picture of a turtle for a rifle.

麻省理工學(xué)院(MIT)計(jì)算機(jī)科學(xué)和人工智能實(shí)驗(yàn)室的研究人員找到了辦法來(lái)欺騙自動(dòng)識(shí)別圖片中物體的谷歌(Google)軟件。他們創(chuàng)造了一個(gè)算法,,略微改動(dòng)了海龜?shù)膱D片,,就可以讓谷歌的識(shí)圖軟件將它視作一把來(lái)復(fù)槍。特別值得一提的是,,麻省理工學(xué)院的團(tuán)隊(duì)3D打印了這只海龜后,,谷歌的軟件依舊認(rèn)為它是一把武器,而不是一只爬行動(dòng)物,。

這樣的混淆,,意味著罪犯最終也可能利用到識(shí)圖軟件的缺陷。隨著這類軟件越來(lái)越滲透到人們的日常生活之中,,情況會(huì)更為凸顯,。由于科技公司和他們的客戶日益依賴人工智能來(lái)處理重要工作,他們必須考慮這個(gè)問(wèn)題。

例如,,機(jī)場(chǎng)掃描設(shè)備可能有一天會(huì)采用識(shí)別技術(shù),,自動(dòng)探測(cè)旅客行李中的武器。不過(guò)罪犯可能會(huì)試圖改造炸彈等危險(xiǎn)品,,欺騙探測(cè)器而讓它們無(wú)法被檢測(cè)到,。

麻省理工學(xué)院的研究者、計(jì)算機(jī)科學(xué)博士生,、實(shí)驗(yàn)的共同領(lǐng)導(dǎo)者阿尼什?阿塔耶解釋道,,麻省理工學(xué)院的研究人員對(duì)海龜圖像所做的一切改變,都是人眼無(wú)法識(shí)別的,。

在起初的海龜圖片測(cè)試后,,研究人員把這只爬行動(dòng)物重制成了一個(gè)物體,看看修改后的形象能否繼續(xù)欺騙谷歌的計(jì)算機(jī),。隨后,,他們對(duì)3D打印的海龜進(jìn)行了攝影和錄像,并將數(shù)據(jù)輸入谷歌的識(shí)圖軟件,。

果然,,谷歌的軟件認(rèn)為這些海龜就是來(lái)復(fù)槍。

麻省理工學(xué)院上周發(fā)表了一篇關(guān)于本實(shí)驗(yàn)的學(xué)術(shù)論文,。這篇論文以之前幾次測(cè)試人工智能的研究為基礎(chǔ),作者已經(jīng)將其提交,,供即將舉辦的人工智能會(huì)議作進(jìn)一步審閱,。

能夠自動(dòng)識(shí)別圖中物體的計(jì)算機(jī),都依賴于神經(jīng)網(wǎng)絡(luò),,這種軟件會(huì)大致模仿人類大腦學(xué)習(xí)的方式,。如果研究人員給神經(jīng)網(wǎng)絡(luò)提供了足夠的貓類圖片,它們就能識(shí)別這些圖片的模式,,最終在沒(méi)有人類幫助的情況下認(rèn)出圖片中的貓類,。

不過(guò),如果這些神經(jīng)網(wǎng)絡(luò)學(xué)習(xí)的圖片照明效果不好或是物體被遮擋,,有時(shí)就會(huì)犯錯(cuò),。阿塔耶解釋道,神經(jīng)網(wǎng)絡(luò)的工作方式仍然有些難以理解,,研究人員還不清楚它們?yōu)槭裁纯梢曰驘o(wú)法準(zhǔn)確識(shí)別某物,。

麻省理工學(xué)院團(tuán)隊(duì)的算法創(chuàng)造了所謂的對(duì)抗樣本,它們本質(zhì)上是計(jì)算機(jī)修改的圖片,,專用于迷惑識(shí)別物體的軟件,。阿塔耶表示,盡管海龜?shù)膱D像在人類眼里是一只爬行動(dòng)物,,但算法改變了圖片,,讓它與來(lái)復(fù)槍的圖像共享了某些未知的特征,。這種算法還會(huì)考慮照明效果不加或色彩變換的情況,從而會(huì)導(dǎo)致谷歌的軟件識(shí)別失敗,。在3D打印之后,,谷歌軟件仍然識(shí)別錯(cuò)誤,表明算法產(chǎn)生的對(duì)抗特性在物質(zhì)世界依舊存在,。

阿塔耶表示,,盡管論文重點(diǎn)討論了谷歌的人工智能軟件,但微軟(Microsoft)和牛津大學(xué)(University of Oxford)開發(fā)的類似識(shí)圖軟件也會(huì)出錯(cuò),。他推測(cè),,由Facebook和亞馬遜(Amazon)等公司開發(fā)的其他大多數(shù)識(shí)圖軟件也很可能失誤,因?yàn)樗鼈兊臋C(jī)制大體相同,。

阿塔耶解釋道,,除了機(jī)場(chǎng)掃描儀之外,依賴深度學(xué)習(xí)技術(shù)識(shí)別特定圖像的家庭安全系統(tǒng)也可能被欺騙,。

想象一下,,假如越來(lái)越多的攝像頭只在注意到物體運(yùn)動(dòng)時(shí)才開始錄像。那么為了避免被過(guò)路汽車之類的無(wú)害行為干擾,,攝像頭可能會(huì)接受訓(xùn)練,,忽視那些汽車。而利用這一點(diǎn),,罪犯就可以穿著專門設(shè)計(jì)的T恤,,讓計(jì)算機(jī)誤以為它們只是看到了卡車,而不是人,。果真如此的話,,竊賊就能輕易通過(guò)安全系統(tǒng)。

阿塔耶承認(rèn),,這些當(dāng)然都只是推測(cè),。不過(guò)考慮到黑客事件頻繁出現(xiàn),這樣的情形值得深思,。阿塔耶表示,,他希望測(cè)試自己的想法,并最終制造出有能力“迷惑安全攝像頭”的“對(duì)抗T恤”,。

谷歌和Facebook等其他公司意識(shí)到,,黑客正在試圖欺騙他們的系統(tǒng)。多年來(lái),,谷歌都在研究阿塔耶和他的麻省理工學(xué)院團(tuán)隊(duì)制造的這類威脅,。谷歌的一位發(fā)言人拒絕就麻省理工學(xué)院的項(xiàng)目發(fā)表評(píng)論,不過(guò)他指出,谷歌最近的兩篇論文體現(xiàn)了公司在應(yīng)對(duì)對(duì)抗技術(shù)上的工作,。

阿塔耶表示:“有許多聰明人都在努力工作,,讓(類似谷歌軟件這樣的)分類器更加完善?!?(財(cái)富中文網(wǎng))

譯者:嚴(yán)匡正

Researchers from MIT’s computer science and artificial intelligence laboratory have discovered how to trick Google’s (GOOG, +0.66%) software that automatically recognizes objects in images. They created an algorithm that subtly modified a photo of a turtle so that Google’s image-recognition software thought it was a rifle. What’s especially noteworthy is that when the MIT team created a 3D printout of the turtle, Google’s software still thought it was a weapon rather than a reptile.

The confusion highlights how criminals could eventually exploit image-detecting software, especially as it becomes more ubiquitous in everyday life. Technology companies and their clients will have to consider the problem as they increasingly rely on artificial intelligence to handle vital jobs.

For example, airport scanning equipment could one day be built with technology that automatically identifies weapons in passenger luggage. But criminals could try to fool the detectors by modifying dangerous items like bombs so they are undetectable.

All the changes the MIT researchers made to the turtle image were imperceptible to the human eye, explained Anish Athalye, an MIT researcher and PhD candidate in computer science who co-led the experiment.

After the original turtle image test, the researchers reproduced the reptile as a physical object to see if the modified image would still trick Google’s computers. The researchers then took photos and video of the 3D-printed turtle, and fed that data into Google’s image-recognition software.

Sure enough, Google’s software thought the turtles were rifles.

MIT publicized an academic paper about the experiment last week. The authors are submitting the paper, which builds on previous studies testing artificial intelligence, for further review at an upcoming AI conference.

Computers designed to automatically spot objects in images are based on neural networks, software that loosely imitates how the human brain learns. If researchers feed enough images of cats into these neural networks, they learn to recognize patterns in those images so they can eventually spot felines in photos without human help.
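The training loop described above can be sketched in miniature. The toy below is not Google's system or the MIT setup; it is a self-contained illustration (in Python with NumPy, with all names hypothetical) of how a single-unit classifier learns the pattern that separates two classes of synthetic "images" by gradient descent, then recognizes unseen examples without human help:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 8-pixel vectors. Class 0 has high values on the
# left half, class 1 on the right half, plus a little noise.
def make_batch(n):
    labels = rng.integers(0, 2, size=n)
    base = np.zeros((n, 8))
    base[labels == 0, :4] = 1.0
    base[labels == 1, 4:] = 1.0
    return base + 0.1 * rng.normal(size=(n, 8)), labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One linear unit trained with gradient descent: the weights are
# nudged until the training patterns are classified correctly.
w, b = np.zeros(8), 0.0
for _ in range(500):
    x, y = make_batch(64)
    p = sigmoid(x @ w + b)            # predicted probability of class 1
    grad_w = x.T @ (p - y) / len(y)   # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# After training, fresh examples of the same patterns are recognized.
x_test, y_test = make_batch(200)
accuracy = np.mean((sigmoid(x_test @ w + b) > 0.5) == y_test)
print(round(float(accuracy), 2))
```

Real vision networks stack many such units into deep layers, but the principle is the same: repeated exposure to labeled examples shapes the weights toward whatever statistical regularities separate the classes.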

But these neural networks can sometimes stumble if they are fed certain types of pictures with bad lighting and obstructed objects. The way these neural networks work is still somewhat mysterious, Athalye explained, and researchers still don’t know why they may or may not accurately recognize something.

The MIT team’s algorithm created what are known as adversarial examples, essentially computer-manipulated images crafted to fool software that recognizes objects. While the turtle image may resemble a reptile to humans, the algorithm morphed it so that it shares unknown characteristics with an image of a rifle. The algorithm also took into account conditions like poor lighting or miscoloration that could have caused Google’s image-recognition software to misfire, Athalye said. The fact that Google’s software still mislabeled the turtle after it was 3D printed shows that the adversarial qualities embedded by the algorithm are retained in the physical world.
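The core idea behind such attacks can be sketched with a deliberately simplified stand-in. The code below is not the MIT algorithm or a real vision network; it is an illustrative sketch (all names hypothetical) of a gradient-sign perturbation averaged over random brightness changes, echoing the paper's point that the perturbation must survive varying viewing conditions:

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed linear "classifier" standing in for an image-recognition
# network: score > 0 means "rifle", score <= 0 means "turtle".
w = rng.normal(size=16)

def predict(x):
    return "rifle" if x @ w > 0 else "turtle"

# An input the classifier firmly reads as a turtle.
x = -np.sign(w) * 0.5
assert predict(x) == "turtle"

# Gradient-sign attack, averaged over random brightness rescalings
# so the perturbation holds up under lighting changes: nudge every
# pixel a small step in the direction that raises the "rifle" score.
eps = 0.6
grads = []
for _ in range(32):
    brightness = rng.uniform(0.5, 1.5)   # simulated lighting change
    # gradient of the rifle score for the image (brightness * x)
    grads.append(brightness * w)
avg_grad = np.mean(grads, axis=0)
x_adv = x + eps * np.sign(avg_grad)

# The perturbed input is now misread as a rifle, and stays misread
# under any positive brightness rescaling.
print(predict(x_adv))  # prints "rifle"
```

A real attack works the same way in spirit, but computes gradients through a deep network and averages over many sampled rotations, colors, and lighting conditions, which is why the 3D-printed object keeps fooling the classifier from different angles.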

Although the research paper focuses on Google’s AI software, Athalye said that similar image-recognition tools from Microsoft (MSFT, +0.46%) and the University of Oxford also stumbled. Most other image-recognition software from companies like Facebook (FB, -0.40%) and Amazon (AMZN, +0.86%) would also likely blunder, he speculates, because of their similarities.

In addition to airport scanners, home security systems that rely on deep learning to recognize certain images may also be vulnerable to being fooled, Athalye explained.

Consider cameras that are increasingly set up to only record when they notice movement. To avoid being tripped by innocuous activity like cars driving by, cameras could be trained to ignore automobiles. To take advantage, however, criminals could wear t-shirts that have been specially designed to fool computers into thinking they see trucks instead of people. If so, burglars could easily bypass the security system.

Of course, this is all speculation, Athalye concedes. But, considering the frequency of hacking, it’s something worth considering. Athalye said he wants to test his idea and eventually make “adversarial t-shirts” that have the ability to “mess up a security camera.”

Google and other companies like Facebook are aware that hackers are trying to figure out ways to spoof their systems. For years, Google has been studying the kind of threats that Athalye and his MIT team produced. A Google spokesperson declined to comment on the MIT project, but pointed to two recent Google research papers that highlight the company’s work on combating the adversarial techniques.

“There are a lot of smart people working hard to make classifiers [like Google’s software] more robust,” Athalye said.
