By Tim Wu


Paid posters already swarm the internet by the thousands, and now robots can be enlisted for the same work: impersonating real people to post five-star ratings and fake reviews and to disrupt online discourse. Exploiting the reach of social media, bad actors deploy "robot armies" that spread false information through fabricated accounts, seriously threatening democratic voting, elections and the normal conduct of commerce. Faced with a threat in which human and machine are hard to tell apart, how should we respond?
When science fiction writers first imagined robot invasions, the idea was that bots would become smart and powerful enough to take over the world by force, whether on their own or as directed by some evildoer. In reality, something only slightly less scary is happening. Robots are getting better, every day, at impersonating1 humans. When directed by opportunists, malefactors and sometimes even nation-states,2 they pose a particular threat to democratic societies, which are premised on being open to the people.
Robots posing as people have become a menace3. For popular Broadway shows (need we say Hamilton4?), it is actually bots, not humans, who do much and maybe most of the ticket buying. Shows sell out immediately, and the middlemen (quite literally, evil robot masters) reap millions in ill-gotten gains.
Philip Howard, who runs the Computational Propaganda Research Project at Oxford, studied the deployment of propaganda bots during voting on Brexit5, and the recent American and French presidential elections. Twitter is particularly distorted by its millions of robot accounts; during the French election, it was principally Twitter robots who were trying to make #MacronLeaks into a scandal. Facebook has admitted it was essentially hacked during the American election in November last year. In Michigan, Mr. Howard notes, "junk news was shared just as widely as professional news in the days leading up to the election."
Robots are also being used to attack the democratic features of the administrative state. This spring, the Federal Communications Commission put its proposed revocation of net neutrality up for public comment.6 In previous years such proceedings attracted millions of (human) commentators. This time, someone with an agenda but no actual public support unleashed robots who impersonated (via stolen identities) hundreds of thousands of people,7 flooding the system with fake comments against federal net neutrality rules.
To be sure, today's impersonation-bots are different from the robots imagined in science fiction: They aren't sentient8, don't carry weapons and don't have physical bodies. Instead, fake humans just have whatever is necessary to make them seem human enough to "pass": a name, perhaps a virtual appearance, a credit-card number and, if necessary, a profession, birthday and home address. They are brought to life by programs or scripts that give one person the power to imitate thousands.

The problem is almost certain to get worse, spreading to even more areas of life as bots are trained to become better at mimicking humans. Given the degree to which product reviews have been swamped by robots (which tend to hand out five stars with abandon), commercial sabotage in the form of negative bot reviews is not hard to predict.9 In coming years, campaign finance limits will be (and maybe already are) evaded10 by robot armies posing as “small” donors. And actual voting is another obvious target—perhaps the ultimate target.
So far, we've been content to leave the problem to the tech industry, where the focus has been on building defenses, usually in the form of Captchas ("completely automated public Turing test to tell computers and humans apart"), those annoying "type this" tests to prove you are not a robot. But leaving it all to industry is not a long-term solution. For one thing, the defenses don't actually deter impersonation bots, but perversely reward whoever can beat them.11 And perhaps the greatest problem for a democracy is that companies like Facebook and Twitter lack a serious financial incentive to do anything about matters of public concern, like the millions of fake users who are corrupting the democratic process. Twitter estimates at least 27 million probably fake accounts; researchers suggest the real number is closer to 48 million, yet the company does little about the problem.
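The Captcha defense described above is, at bottom, a challenge-response protocol: the server issues a test that is cheap for a human to pass and (ideally) hard for a script. Below is a minimal, text-only sketch of that idea; real Captchas render the challenge as a distorted image, and all function names here are hypothetical, not any actual Captcha API.

```python
import random
import string

def make_challenge(length=5):
    """Generate a random string the visitor must retype (toy-scale illustration)."""
    return "".join(random.choices(string.ascii_uppercase, k=length))

def verify(challenge, response):
    """A human who can read the (in practice, visually distorted) text passes."""
    return response.strip().upper() == challenge

challenge = make_challenge()
# A real Captcha would render `challenge` as a distorted image; the point
# of the distortion is that optical recognition is harder for a script.
print(verify(challenge, challenge.lower()))  # prints True: a correct answer passes
```

The sketch also shows why such defenses "perversely reward whoever can beat them": the test is a fixed, automatable gate, so any script that learns to read the distortion passes exactly as a human would.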
The problem is a public as well as private one, and impersonation robots should be considered what the law calls “hostis humani generis”: enemies of mankind, like pirates and other outlaws.12 That would allow for a better offensive strategy: bringing the power of the state to bear on13 the people deploying the robot armies to attack commerce or democracy.
The ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters or potentially help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing14 private parties to hunt down bad robots. A simple legal remedy would be a "Blade Runner" law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, "I am a robot." When dealing with a fake human, it would be nice to know.
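The "Blade Runner" rule proposed above, under which automated processes must declare "I am a robot," can be pictured as a simple disclosure handshake. This is a hedged sketch only: the header name and client code below are hypothetical, not an existing standard or law.

```python
# Sketch of a disclosure rule: automated clients declare themselves,
# and services can check the declaration. All names are hypothetical.

BOT_DISCLOSURE = "I am a robot"

def build_headers(is_bot):
    """Return request headers; automated callers carry an explicit disclosure."""
    headers = {"User-Agent": "example-client/1.0"}
    if is_bot:
        headers["X-Automated-Agent"] = BOT_DISCLOSURE  # hypothetical header
    return headers

def is_declared_bot(headers):
    """Service-side check: did the caller disclose that it is automated?"""
    return headers.get("X-Automated-Agent") == BOT_DISCLOSURE

print(is_declared_bot(build_headers(is_bot=True)))   # prints True
print(is_declared_bot(build_headers(is_bot=False)))  # prints False
```

The legal force of such a law would lie not in the check itself, which any bot could skip, but in making the absence of the declaration illegal, so that enforcement targets the deployer rather than the program.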
Using robots to fake support, steal tickets or crash democracy really is the kind of evil that science fiction writers were warning about. The use of robots takes advantage of the fact that political campaigns, elections and even open markets make humanistic assumptions, trusting that there is wisdom or at least legitimacy15 in crowds and value in public debate. But when support and opinion can be manufactured, bad or unpopular arguments can win not by logic but by a novel,16 dangerous form of force—the ultimate threat to every democracy.
1. impersonate: to pretend to be (another person).
2. opportunist: someone who exploits circumstances for personal gain; malefactor: a wrongdoer, criminal; nation-state: a form of state and ideology in which the state is not merely a single political and geographical entity but also a unified cultural and ethnic community.
3. menace: a threat; something dangerous.
4. Hamilton: a Broadway musical based on the life of American founding father Alexander Hamilton; a runaway hit that broke Broadway box-office records.
5. Brexit: Britain's exit from the European Union (Britain + exit).
6. revocation: the repeal or withdrawal (of a law, etc.); net neutrality: the principle that internet service providers and governments should treat all data on the internet equally, without discriminating or charging differently by user, content, website, platform, application, type of access device or mode of communication.
7. agenda: a hidden plan or motive; unleash: to release, set loose.
8. sentient: able to perceive or feel things.
9. swamp: to flood, overwhelm; with abandon: without restraint; sabotage: deliberate destruction or obstruction.
10. evade: to escape or avoid.
11. deter: to discourage, prevent; perversely: contrary to what was intended.
12. hostis humani generis: (Latin) an enemy of mankind; outlaw: a criminal, one outside the protection of the law.
13. bear on: to exert pressure on.
14. deputize: to authorize (someone) to act as a deputy or agent.
15. legitimacy: lawfulness, validity.
16. manufacture: to fabricate, invent; novel: new, unusual.