文/茱莉亞·博斯曼 譯/周臻
By Julia Bossmann
Optimizing logistics, detecting fraud, composing art, conducting research, providing translations:intelligent machine systems are transforming our lives for the better. As these systems become more capable,our world becomes more efficient and consequently richer.
[2] Tech giants such as Alphabet1Alphabet公司(Alphabet Inc.)是一家設在美國加州的控股公司。公司前身為谷歌。公司重整后,谷歌成為其最大子公司。,Amazon, Facebook, IBM and Microsoft—as well as individuals like Stephen Hawking and Elon Musk2埃隆·馬斯克為SpaceX 的CEO 和首席設計師,以聯合創辦了特斯拉汽車和PayPal而聞名。—believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence. In many ways,this is just as much a new frontier for ethics and risk assessment as it is for emerging technology. So which issues and conversations keep AI experts up at night?
優化物流、檢測欺詐、創作藝術、開展研究、提供翻譯:智能機器系統正在改善我們的生活。隨著這些系統變得更能干,我們的世界變得更高效,進而更富有。
[2]諸如 Alphabet、亞馬遜、臉書、IBM和微軟這樣的科技巨頭,以及諸如史蒂芬·霍金和埃隆·馬斯克這樣的人士相信,現在正是討論人工智能無限前景的好時機。從許多方面來看,這既是新興技術,也是倫理和風險評估的一個新的前沿。那么是哪些問題和討論讓人工智能專家們睡不著覺呢?
[3]勞工階層主要關注自動化問題。當我們發明了工作自動化的方法時,我們可以為人們創造機會來擔任更復雜的角色,從主導前工業時代的體力勞動,轉到全球化社會中戰略和行政工作特有的認知勞動。
[4]以卡車運輸為例:目前僅在美國就有數百萬人從事該職業。如果特斯拉的埃隆·馬斯克所承諾的無人駕駛卡車在未來十年能夠廣泛應用,他們怎么辦?但在另一方面,如果我們降低事故風險,無人駕駛卡車似乎是一種合乎道德的選擇。同樣的情形也可能適用于辦公人員和發達國家的大多數勞動力。
[5]這取決于我們如何利用我們的時間。大多數人仍然依靠用時間來換取收入,以維持自己和家庭的生活。我們只能希望這個機會能幫人們從非勞力的活動中找到意義,比如照顧家庭,融入社區,或者學習新的方式為人類社會做出貢獻。
[6]如果我們成功過渡,某天我們可能會回頭發覺,僅僅為了謀生而出賣大部分醒著的時間是多么愚昧。
[3] The hierarchy of labour is concerned primarily with automation. As we’ve invented ways to automate jobs, we could create room for people to assume more complex roles, moving from the physical work that dominated the preindustrial globe to the cognitive labour that characterizes strategic and administrative work in our globalized society.
[4] Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade?But on the other hand, if we consider the lower risk of accidents, self-driving trucks seem like an ethical choice. The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries.
[5] This is where we come to the question of how we are going to spend our time. Most people still rely on selling their time to have enough income to sustain themselves and their families.We can only hope that this opportunity will enable people to find meaning in non-labour activities, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.
[6] If we succeed with the transition,one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.
[7] Though artificial intelligence is capable of a speed and capacity of processing that’s far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are one of the leaders when it comes to artif i cial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when a software used to predict future criminals showed bias against black people.
[8] We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.

[7]雖然人工智能的處理速度和能力遠遠超越人類,但不能信任它是永遠公正和中立的。谷歌及其控股集團Alphabet是人工智能的領先者之一,其提供的谷歌照片服務是人工智能的一種,主要用于識別人物、物體和場景。但這會出錯,比如一臺相機沒能標記種族敏感信息,或者預測未來犯罪的軟件表現出對黑人的偏見。
[8]我們不要忘記,人工智能系統是由有偏見、武斷的人類所創造的。再說,如果正確使用,或者用于努力實現社會進步,人工智能會成為積極變革的催化劑。
[9]一項科技變得越強大,它越會被用于善良抑或邪惡目的。這不僅指用于取代人類士兵的機器人或自主武器,而且也指那些如被惡意使用會帶來破壞的人工智能系統。由于這些戰斗并不只在戰場上發生,網絡安全將變得尤為重要。畢竟,我們應對的是一個速度和殺傷力比我們大幾個數量級的系統。
[9] The more powerful a technology becomes, the more can it be used for nefarious reasons as well as good. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously.Because these fights won’t be fought on the battleground only, cybersecurity will become even more important. After all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.
[10] It’s not just adversaries we have to worry about. What if artificial intelligence itself turned against us?This doesn’t mean by turning “evil”in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a “genie in a bottle3在英語里let the genie out of the bottle本身就比喻to allow something evil to happen that cannot then be stopped。” that can fulfill wishes, but with terrible unforeseen consequences.
[11] In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing,it spits out a formula that does, in fact,bring about the end of cancer—by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended i
[10]我們不僅要提防對手。如果人工智能本身亦背叛我們呢?這不是指像人類一樣變“邪惡”,也不會像好萊塢電影里描繪的人工智能災難那樣。相反,我們可以預想到一個像“瓶子里的精靈”一樣的發達的人工智能系統,能實現愿望,但會有可怕的不可預見的后果。
[11]對機器來說,實現愿望的過程中不太可能產生惡意,只是缺乏對愿望范疇的全面理解。試想一個人工智能系統被要求根除全世界的癌癥。經過大量的計算,它搞出一個方案,事實上,的確可以根除癌癥——殺死地球上的所有人。計算機可以非常有效地實現“再無癌癥”的目標,但卻
[12]人類能夠處于食物鏈頂端,并不是因為有尖利的牙齒或強肌肉。人類的主導地位幾乎完全取決于我們的聰明才智。我們可以勝過更大、更快、更強壯的動物,是因為我們能創造并使用工具來控制它們:既有籠子和武器之類的物理工具,也有訓練和調理等認知工具。
[13]這就產生了一個關于人工智能的嚴肅問題:會不會有一天,人工智能對我們也有相同的優勢?我們也沒法指望“拔插頭”,因為一臺足夠先進的機器會預見到這一舉動并保護自己。 這就是所謂的“奇點”:人類不再是地球上最聰明生物的時間點。
[14]神經科學家仍在努力破解意識的秘密,我們也越來越多地了解獎勵和厭惡的基本原理。我們甚至與智力低下的動物共用這種機制。某種程度上,我們正在人工智能系統中建立類似的獎勵和厭惡機制。例如,強化學習類似于訓練狗:通過虛擬獎勵來提升表現。
[12] The reason humans are on top of the food chain is not down to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.
[13] This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us?We can’t rely on just “pulling the plug”either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the“singularity”: the point in time when human beings are no longer the most intelligent beings on earth.
[14] While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals.In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.
[15] Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful“survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?
[16] Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence?Will we consider the suffering of“feeling” machines?
[17] Some ethical questions are about mitigating suffering, some about risking negative outcomes. While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us. ■
[15]現今,這些系統是相當膚淺的,但是它們正變得越來越復雜和逼真。我們可以認為一個系統正為自我負面評價痛苦么?更甚者,在所謂的遺傳算法中,一次性創建一種體系的多個實例,僅使其中最成功的那些“存活”、結合并形成下一代的實例,讓其經過許多世代,是改進一種體系的方式。不成功的實例被刪除。什么時候我們可以認為,遺傳算法其實是一種形式的大規模謀殺?
[16]一旦我們將機器視為可以感知、感覺和行為的實體,那么思考其法律地位就迫在眉睫了。他們應該像擁有類同智慧的動物一樣被對待嗎?我們會考慮“有感覺的”機器的痛苦嗎?
[17]一些道德問題是關于減輕痛苦的,一些是關于承擔不良后果風險的。在考慮這些風險的同時,我們也應該記住,這項技術的進步,總體上意味著帶給每個人更好的生活。人工智能具有巨大的潛力,而我們要對其實施負責。 □