AI is making fake fingerprints more rampant, and you should watch out

Jonathan Vanian, December 3, 2018
Fake digital fingerprints developed with artificial intelligence can fool the fingerprint scanners on smartphones.


Fake digital fingerprints created by artificial intelligence can fool fingerprint scanners on smartphones, according to new research, raising the risk of hackers using the vulnerability to steal from victims’ online bank accounts.

A recent paper by New York University and Michigan State University researchers detailed how deep learning technologies could be used to weaken biometric security systems. The research, supported by a United States National Science Foundation grant, won a best paper award at a conference on biometrics and cybersecurity in October.

Smartphone makers like Apple and Samsung typically use biometric technology in their phones so that people can use fingerprints to easily unlock their devices instead of entering a passcode. Hoping to add some of that convenience, major banks like Wells Fargo are increasingly letting customers access their checking accounts using their fingerprints.

But while fingerprint scanners may be convenient, researchers have found that the software that runs these systems can be fooled. The discovery is important because it underscores how criminals can potentially use cutting-edge AI technologies to do an end run around conventional cybersecurity.

The latest paper about the problem builds on previous research published last year by some of the same NYU and Michigan State researchers. The authors of that paper discovered that they could fool some fingerprint security systems by using either digitally modified or partial images of real fingerprints. These so-called MasterPrints could trick biometric security systems that only rely on verifying certain portions of a fingerprint image rather than the entire print.

One irony is that humans who inspected MasterPrints could likely tell immediately that they were fake, because they contained only partial fingerprints. Software, it turns out, could not.

In the new paper, the researchers used neural networks, software that learns patterns from training data, to create convincing-looking digital fingerprints that performed even better than the images used in the earlier study. Not only did the fake fingerprints look real, they contained hidden properties undetectable by the human eye that could confuse some fingerprint scanners.

Left: an example of a real fingerprint. Right: a fake fingerprint image generated by artificial intelligence.


Translator: Charlie

Proofreader: Xia Lin

Julian Togelius, one of the paper’s authors and an NYU associate computer science professor, said the team created the fake fingerprints, dubbed DeepMasterPrints, using a variant of neural network technology called “generative adversarial networks (GANs),” which he said “have taken the AI world by storm for the last two years.”

Researchers have used GANs to create convincing-looking but fabricated photos and videos known as “deep fakes,” which some lawmakers worry could be used to create fake videos and propaganda that the general public would think was true. For example, several researchers have described how they could use AI techniques to create fabricated videos of former President Barack Obama giving speeches that never took place, among other things.

AI-altered photos are also fooling computers, as MIT researchers showed last year when they created an image of a turtle that confused Google’s image-recognition software. The technology mistook the turtle for a rifle because it identified hidden elements embedded in the image that shared certain properties with an image of a gun, all of which were unnoticeable by the human eye.

With GANs, researchers typically use a combination of two neural networks that work together to create realistic images embedded with mysterious properties that can fool image-recognition software. Using thousands of publicly available fingerprint images, the researchers trained one neural network to recognize real fingerprint images, and trained the other to create its own fake fingerprints.

They then fed the second neural network’s fake fingerprint images into the first neural network to test how effective they were, explained Philip Bontrager, an NYU Ph.D. candidate in computer science who also worked on the paper. Over time, the second neural network learned to generate realistic-looking fingerprint images that could trick the other neural network.
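The two-network loop described above can be illustrated with a deliberately tiny, self-contained toy, not the paper’s actual setup: here the “real fingerprints” are just scalars drawn near 4.0, the discriminator is a one-variable logistic classifier, and the generator is a linear map. All numbers are illustrative. The alternation, though, is the same as in a full GAN: update the discriminator on a real/fake pair, then update the generator against the discriminator’s verdict.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # numerically clamped logistic function
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

# Stand-in for "real fingerprints": scalars drawn near 4.0.
def real_sample():
    return random.gauss(4.0, 0.5)

# Discriminator D(x) = sigmoid(w*x + b); generator G(z) = a*z + c.
w, b = 0.1, 0.0      # discriminator parameters
a, c = 1.0, 0.0      # generator parameters
lr = 0.02
history = []         # track the generator's offset over training

for step in range(5000):
    z = random.gauss(0.0, 1.0)
    x_real, x_fake = real_sample(), a * z + c

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, y in ((x_real, 1.0), (x_fake, 0.0)):
        p = sigmoid(w * x + b)
        w -= lr * (p - y) * x
        b -= lr * (p - y)

    # Generator step: adjust G so the discriminator calls the fake real.
    p = sigmoid(w * x_fake + b)
    grad_fake = -(1.0 - p) * w   # d(-log D(fake)) / d(fake)
    a -= lr * grad_fake * z
    c -= lr * grad_fake
    history.append(c)
```

With these illustrative settings, the generator’s output should drift from its starting mean of 0 toward the real data’s mean near 4.0, the same dynamic that, at much larger scale, lets a GAN learn to produce convincing fingerprint images.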

The researchers then fed the fake fingerprint images into fingerprint-scanning software sold by tech companies like Innovatrics and Neurotechnology to see if they could be fooled. Each time a fake fingerprint image tricked one of the commercial systems, the researchers were able to improve their technology to produce more convincing fakes.

The neural network responsible for creating the bogus images embeds a random set of computer code, which Bontrager referred to as “noisy data,” that can fool fingerprint image recognition software. Although the researchers were able to calibrate this “noisy data” to trip up the fingerprint software using what’s known as an evolutionary algorithm, it’s unclear what this code does to the image, since humans are unable to see its impact.
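The evolutionary calibration described here can be sketched as a simple keep-the-best mutation loop. Everything below is a hypothetical stand-in rather than the paper’s method: `match_score` plays the role of a black-box commercial matcher, the candidate vectors play the role of the inputs being evolved, and the loop is a basic (1+λ) scheme rather than the more sophisticated evolutionary algorithm the researchers used.

```python
import random

random.seed(1)
DIM = 8

# Hypothetical enrolled template; in the real attack this role is played
# by whatever prints the commercial matcher was enrolled with.
template = [random.uniform(-1.0, 1.0) for _ in range(DIM)]

def match_score(candidate):
    # Black-box stand-in for a matcher: higher means a closer match.
    d2 = sum((c - t) ** 2 for c, t in zip(candidate, template))
    return 1.0 / (1.0 + d2)

def evolve(generations=200, children=20, sigma=0.2):
    # Keep-the-best (1+lambda) evolution over candidate vectors:
    # mutate the current best, keep any child that scores higher.
    best = [0.0] * DIM
    best_score = match_score(best)
    for _ in range(generations):
        for _ in range(children):
            cand = [x + random.gauss(0.0, sigma) for x in best]
            score = match_score(cand)
            if score > best_score:
                best, best_score = cand, score
    return best, best_score

spoof, spoof_score = evolve()
```

The point of the sketch is the feedback loop: the matcher’s score is the only signal, and random mutations that raise it are kept, so the candidate gradually climbs toward whatever the matcher accepts, with no need to understand why it accepts it.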

To be sure, criminals face a number of obstacles to cracking fingerprint scanners. For one, many fingerprint systems rely on additional security checks, like heat sensors that detect whether a real human finger is present, Bontrager explained.

But these newly developed DeepMasterPrints show that AI technology can be used for nefarious purposes, which means that cybersecurity firms, banks, smartphone makers and other companies using biometric technology must constantly improve their systems to keep up with rapid advances in AI.

Togelius said that prior to the paper, researchers didn’t consider the possibility of AI-created fake images to be a “serious threat to biometric systems.” Since its publication, he said, unspecified “large companies” have contacted him to learn more about the possible security threats of fake fingerprints.

Dr. Justas Kranauskas, a research and development manager for Neurotechnology, the maker of fingerprint sensor software, told Fortune in an email that the recent research paper about fooling fingerprint readers “touched” on an important point. But he pointed out that his company uses other kinds of security, which the researchers did not incorporate into their study, that would ensure what he called a “very low false acceptance risk in real applications.”

Kranauskas also said that Neurotechnology recommends that its corporate customers set their fingerprint-scanning software at a higher security level than the levels the researchers used in their paper.

Bontrager, the researcher, noted, however, that the higher the fingerprint security level, the less convenient it is for users, because companies typically want some leeway so that customers don’t have to repeatedly press their fingers on scanners to get accurate reads.
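That convenience/security tension is the standard tradeoff between the false-accept rate (FAR) and the false-reject rate (FRR) as a match-score threshold moves. The sketch below uses made-up score distributions, not real matcher data: genuine attempts score high, impostor or spoof attempts score lower, and the two distributions overlap.

```python
import random

random.seed(2)

# Hypothetical matcher scores: genuine attempts cluster high, impostor
# (or spoof) attempts cluster lower, with some overlap between them.
genuine  = [random.gauss(0.75, 0.10) for _ in range(1000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(1000)]

def rates(threshold):
    # FAR: impostors accepted; FRR: genuine users rejected.
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

far_lo, frr_lo = rates(0.50)   # convenient setting, less secure
far_hi, frr_hi = rates(0.70)   # secure setting, less convenient
```

Raising the threshold from 0.50 to 0.70 cuts false accepts, including spoofs that score like impostors, but rejects more genuine attempts, which is exactly the inconvenience Bontrager describes: users have to press their fingers repeatedly to get a clean read.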

“So obviously, if you choose a high security setting, [spoofing attacks] are less successful,” Bontrager said. “But then it is less convenient,” he added.
