
Are AI-Generated Faces More Trustworthy?

英語世界 (English World), 2023, Issue 1

By Emily Willingham; translated by Chen Xianyu (文/埃米莉·威林厄姆 譯/陳先宇)

When TikTok videos emerged in 2021 that seemed to show “Tom Cruise” making a coin disappear and enjoying a lollipop, the account name was the only obvious clue that this wasn’t the real deal. The creator of the “deeptomcruise” account on the social media platform was using “deepfake” technology to show a machine-generated version of the famous actor performing magic tricks and having a solo dance-off.

2 One tell for a deepfake used to be the “uncanny valley” effect, an unsettling feeling triggered by the hollow look in a synthetic person’s eyes. But increasingly convincing images are pulling viewers out of the valley and into the world of deception promulgated by deepfakes.

3 The startling realism has implications for malevolent uses of the technology: its potential weaponization in disinformation campaigns for political or other gain, the creation of false porn for blackmail, and any number of intricate manipulations for novel forms of abuse and fraud. Developing countermeasures to identify deepfakes has turned into an “arms race” between security sleuths on one side and cybercriminals and cyberwarfare operatives on the other.

4 A new study published in the Proceedings of the National Academy of Sciences of the United States of America provides a measure of how far the technology has progressed. The results suggest that real humans can easily fall for machine-generated faces—and even interpret them as more trustworthy than the genuine article. “We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” says study co-author Hany Farid, a professor at the University of California, Berkeley. The result raises concerns that “these faces could be highly effective when used for nefarious purposes.”

5 “We have indeed entered the world of dangerous deepfakes,” says Piotr Didyk, an associate professor at the University of Italian Switzerland in Lugano, who was not involved in the paper. The tools used to generate the study’s still images are already generally accessible. And although creating equally sophisticated video is more challenging, tools for it will probably soon be within general reach, Didyk contends.

6 The synthetic faces for this study were developed in back-and-forth interactions between two neural networks, examples of a type known as generative adversarial networks. One of the networks, called a generator, produced an evolving series of synthetic faces like a student working progressively through rough drafts. The other network, known as a discriminator, trained on real images and then graded the generated output by comparing it with data on actual faces.

7 The generator began the exercise with random pixels. With feedback from the discriminator, it gradually produced increasingly realistic humanlike faces. Ultimately, the discriminator was unable to distinguish a real face from a fake one.
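
The adversarial loop described above can be sketched in miniature. The toy below is an illustration under simplifying assumptions, not the study’s model: a one-parameter linear generator tries to mimic scalar samples from a known Gaussian, a logistic discriminator grades real versus fake, and the two are updated in alternation with hand-derived gradients. Real face synthesizers use deep convolutional networks, but the generator/discriminator feedback pattern is the same.

```python
import numpy as np

# Toy GAN: generator g(z) = w*z + b tries to mimic samples from N(4, 1.25);
# discriminator D(x) = sigmoid(a*x + c) tries to tell real from fake.
# All names (w, b, a, c) are this sketch's own, not from the paper.
rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

w, b = 1.0, 0.0   # generator parameters
a, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.25, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) ("non-saturating" objective),
    # i.e. the discriminator's feedback steers the generator's output.
    d_fake = sigmoid(a * fake + c)
    w += lr * np.mean((1 - d_fake) * a * z)
    b += lr * np.mean((1 - d_fake) * a)

fakes = w * rng.normal(0.0, 1.0, 10000) + b
print(f"mean of generated samples: {fakes.mean():.2f} (real mean is 4.0)")
```

With such a weak (monotone) discriminator the generator recovers the real mean but not the full spread; matching the whole distribution, as with faces, requires far richer networks on both sides.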

8 The networks trained on an array of real images representing Black, East Asian, South Asian and white faces of both men and women, in contrast with the more common use of white men’s faces in earlier research.

9 After compiling 400 real faces matched to 400 synthetic versions, the researchers asked 315 people to distinguish real from fake among a selection of 128 of the images. Another group of 219 participants got some training and feedback about how to spot fakes as they tried to distinguish the faces. Finally, a third group of 223 participants each rated a selection of 128 of the images for trustworthiness on a scale of one (very untrustworthy) to seven (very trustworthy).

10 The first group did not do better than a coin toss at telling real faces from fake ones, with an average accuracy of 48.2 percent. The second group failed to show dramatic improvement, receiving only about 59 percent, even with feedback about those participants’ choices. The group rating trustworthiness gave the synthetic faces a slightly higher average rating of 4.82, compared with 4.48 for real people.
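
The reported figures can be laid side by side in a short script. The numbers below are taken directly from the paragraph above; the 50 percent baseline is the accuracy expected from pure guessing.

```python
# Results as reported in the study (figures quoted in the text above).
CHANCE = 50.0  # percent accuracy expected from a coin toss

accuracy = {
    "untrained group": 48.2,  # percent correct, real vs. synthetic
    "trained group": 59.0,    # percent correct, with training and feedback
}
trust = {"synthetic": 4.82, "real": 4.48}  # mean rating on a 1-7 scale

for group, acc in accuracy.items():
    print(f"{group}: {acc:.1f}% ({acc - CHANCE:+.1f} points vs. chance)")

gap = trust["synthetic"] - trust["real"]
print(f"trustworthiness gap (synthetic - real): {gap:+.2f}")
```

The untrained group actually lands slightly below chance, and even training buys only about nine points over it, while the synthetic faces come out 0.34 points more trustworthy on the seven-point scale.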

11 The researchers were not expecting these results. “We initially thought that the synthetic faces would be less trustworthy than the real faces,” says study co-author Sophie Nightingale.

12 The uncanny valley idea is not completely retired. Study participants did overwhelmingly identify some of the fakes as fake. “We’re not saying that every single image generated is indistinguishable from a real face, but a significant number of them are,” Nightingale says.

13 The finding adds to concerns about the accessibility of technology that makes it possible for just about anyone to create deceptive still images. “Anyone can create synthetic content without specialized knowledge of Photoshop or CGI,” Nightingale says. Another concern is that such findings will create the impression that deepfakes will become completely undetectable, says Wael Abd-Almageed, founding director of the Visual Intelligence and Multimedia Analytics Laboratory at the University of Southern California, who was not involved in the study. He worries scientists might give up on trying to develop countermeasures to deepfakes, although he views keeping their detection on pace with their increasing realism as “simply yet another forensics problem.”

14 “The conversation that’s not happening enough in this research community is how to start proactively to improve these detection tools,” says Sam Gregory, director of programs strategy and innovation at WITNESS, a human rights organization that in part focuses on ways to distinguish deepfakes. Making tools for detection is important because people tend to overestimate their ability to spot fakes, he says, and “the public always has to understand when they’re being used maliciously.”

15 Gregory, who was not involved in the study, points out that its authors directly address these issues. They highlight three possible solutions, including creating durable watermarks for these generated images, “like embedding fingerprints so you can see that it came from a generative process,” he says.
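
A heavily simplified illustration of that watermarking idea: the sketch below hides an identifying tag in the least significant bits of pixel values, so the mark is invisible to the eye but recoverable by a checker. This is a toy assumption of the author’s, not the scheme proposed in the paper; durable provenance watermarks are designed to survive compression and editing, which plain LSB embedding does not.

```python
# Toy provenance watermark: hide a tag's bits in pixel LSBs.
# Illustrates the "embedded fingerprint" idea only; real generative-image
# watermarks are far more robust than least-significant-bit stuffing.

def embed(pixels, tag):
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least significant bit
    return out

def extract(pixels, n_bytes):
    data = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[j * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

pixels = [137, 52, 203, 8, 91, 44] * 20  # stand-in for grayscale pixel data
tagged = embed(pixels, b"GEN")           # mark the image as generated
print(extract(tagged, 3))                # prints b'GEN'
```

Each pixel changes by at most one intensity level, so the tagged image looks identical, yet any tool that knows the convention can read back the generated-content marker.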

16 The authors of the study end with a stark conclusion after emphasizing that deceptive uses of deepfakes will continue to pose a threat: “We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits,” they write. “If so, then we discourage the development of technology simply because it is possible.” ■
