AI与人类未来 | 尤瓦尔·诺亚·赫拉利(人类简史作者)在前沿论坛上的演讲(2023-5-14)【GPT-4整理翻译】
AI and the future of humanity | Yuval Noah Harari at the Frontiers Forum
AI与人类未来 | 尤瓦尔·诺亚·赫拉利(人类简史作者)在前沿论坛上的演讲 - YouTube: https://www.youtube.com/watch?v=LWiM-LuRe6w
Transcript:
Thank you for this wonderful introduction. What I want to talk to you about is AI and the future of humanity. Now, I know that this conference is focused on the ecological crisis facing humanity, but for better or worse, AI is part of this crisis. AI can help us in many ways to overcome the ecological crisis, or it can make it far worse. Actually, AI will probably change the very meaning of the ecological system.
感谢你的热情介绍。我想与大家探讨的主题是 人工智能与人类未来 。我知道这次会议的焦点是人类面临的生态危机,但无论好坏,人工智能都是这场危机的一部分。人工智能可以在很多方面帮助我们克服生态危机,或者,它可能让危机变得更糟。实际上,人工智能可能会改变生态系统的本质含义。
For four billion years, the ecological system of planet Earth contained only organic life forms. Now, or soon, we might see the emergence of the first inorganic life forms, or at the very least, the emergence of inorganic agents.
四十亿年来,地球的生态系统中只有有机生命形式。现在,或者很快,我们可能会看到第一种无机生命形式的出现,或者至少,无机代理的出现。
People have feared AI since the very beginning of the computer age in the middle of the 20th century, and this fear has inspired many science fiction classics like The Terminator and The Matrix. While such science fiction scenarios have become cultural landmarks, they haven't usually been taken seriously in academic, scientific, and political debates, and perhaps for a good reason.
自从20世纪中叶计算机时代开始,人们就一直对人工智能抱有恐惧,这种恐惧激发了许多科幻经典作品的创作,如《终结者》和《黑客帝国》。虽然这样的科幻场景已经成为文化地标,但在学术、科学和政治的辩论中,它们通常不被认真对待,这也许是有充分理由的。
Because science fiction scenarios usually assume that before AI can pose a significant threat to humanity, it will have to reach or pass two important milestones. First, AI will have to become sentient and develop consciousness, feelings, and emotions; otherwise, why would it even want to take over the world? Secondly, AI will have to become adept at navigating the physical world. Robots will have to be able to move around and operate in houses, cities, mountains, and forests at least as dexterously and efficiently as humans. If they cannot move around the physical world, how can they possibly take it over?
因为科幻情景通常假设,在人工智能能对人类构成重大威胁之前,它必须达到或越过两个重要的里程碑。首先,人工智能必须具备感知能力,发展出意识、情感和情绪;否则,它为什么会想要接管世界呢?其次,人工智能必须善于在物理世界中行动。机器人必须能够至少像人类一样灵活高效地在房屋、城市、山脉和森林中移动和操作。如果它们无法在物理世界中移动,又怎么可能接管它呢?
As of April 2023, AI still seems far from reaching either of these milestones. Despite all the hype around ChatGPT and other new AI tools, there is no evidence that these tools have even a shred of consciousness, feelings, or emotions. As for navigating the physical world, despite the hype around self-driving vehicles, the date at which these vehicles will dominate our roads keeps being postponed.
截至2023年4月,人工智能似乎仍然离达到这两个里程碑相当远。尽管关于ChatGPT和其他新的人工智能工具的炒作很多,但没有证据表明这些工具具有一丝一毫的意识、情感或情绪。至于在物理世界中的导航能力,尽管有关自动驾驶汽车的炒作很多,但这些汽车主导我们道路的日期一直被推迟。
However, the bad news is that to threaten the survival of human civilization, AI doesn't really need consciousness, and it doesn't need the ability to move around the physical world. Over the last few years, new AI tools have been unleashed into the public sphere, which may threaten the survival of human civilization from a very unexpected direction. It's difficult for us to even grasp the capabilities of these new AI tools and the speed at which they continue to develop.
然而,坏消息是,要威胁到人类文明的存续,人工智能并不真正需要意识,也不需要在物理世界中移动的能力。过去几年里,新的人工智能工具已经被释放到公众领域,可能会从一个非常意想不到的方向威胁到人类文明的生存。我们甚至难以理解这些新的人工智能工具的能力以及它们继续发展的速度。
Indeed, because AI is able to learn by itself, to improve itself, even the developers of these tools don't know the full capabilities of what they have created, and they are themselves often surprised by emergent abilities and emergent qualities of these tools.
确实,因为人工智能能够自我学习、自我提升,即使是这些工具的开发者也不完全了解他们所创造之物的全部能力,他们自己也常常对这些工具涌现出的能力和特性感到惊讶。
I guess everybody here is already aware of some of the most fundamental abilities of the new AI tools abilities like writing text, drawing images, composing music, and writing code. But there are many additional capabilities that are emerging like deep faking people's voices and images, like drafting bills, finding weaknesses both in computer code and also in legal contracts and legal agreements. But perhaps most importantly, the new AI tools are gaining the ability to develop deep and intimate relationships with human beings. Each of these abilities deserves an entire discussion, and it is difficult for us to understand their full implications.
我想在座的每一位都已经知道新的人工智能工具最基本的一些能力,比如写文本、画图像、作曲和编写代码。但是还有许多额外的能力正在出现,如伪造人的声音和图像,如起草法案,找出计算机代码以及法律合同和法律协议中的漏洞。但或许最重要的是,新的人工智能工具正在获得与人类建立深入亲密关系的能力。这些能力中的每一项都值得我们进行全面的讨论,我们很难理解它们的全部含义。
So, let's make it simple. When we take all of these abilities together as a package, they boil down to one very, very big thing: the ability to manipulate and to generate language, whether with words, images, or sounds. The most important aspect of the current phase of the ongoing AI Revolution is that AI is gaining mastery of language at a level that surpasses the average human ability.
那么,让我们简化一下。当我们将所有这些能力作为一个整体来看,它们可以归结为一件非常非常重要的事情:操纵和生成语言的能力,无论是用文字、图像还是声音。当前这一阶段人工智能革命最重要的一点是,人工智能对语言的掌握正在超越人类的平均水平。
And by gaining mastery of language, AI is seizing the master key, unlocking the doors of all our institutions, from banks to temples, because language is the tool that we use to give instructions to our bank and also to inspire heavenly visions in our minds.
通过掌握语言,人工智能正在取得主控钥匙,解锁我们所有机构的大门,从银行到寺庙,因为语言是我们用来向银行发出指令,也是在我们脑海中激发神圣愿景的工具。
Another way to think of it is that AI has just hacked the operating system of human civilization. The operating system of every human culture in history has always been language. In the beginning was the word. We use language to create mythology and laws, to create gods and money, to create art and science, to create friendships and nations.
另一种思考方式是, 人工智能刚刚破解了人类文明的操作系统。 历史上每一种人类文化的操作系统始终都是语言。一切的开始就是词语。我们使用语言来创造神话和法律,创造神和钱,创造艺术和科学,创造友谊和国家。
For example, human rights are not a biological reality. They are not inscribed in our DNA. Human rights are something that we created with language, by telling stories and writing laws. Gods are also not a biological or physical reality. Gods, too, is something that we humans have created with language, by telling legends and writing scriptures.
例如,人权并不是一种生物现实。它们并未刻入我们的DNA。人权是我们通过讲故事和制定法律,用语言创造出来的东西。神也不是一种生物或物理现实。神也是我们人类通过讲述传说和写作经文,用语言创造出来的。
Money is not a biological or physical reality. Banknotes are just worthless pieces of paper, and at present, more than 90 percent of the money in the world is not even banknotes. It's just electronic information in computers passing from here to there. What gives money of any kind its value is only the stories that people like bankers, finance ministers, and cryptocurrency gurus tell us about money. Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff didn't create much of real value, but unfortunately, they were all extremely capable storytellers.
金钱并不是生物或物理的现实。钞票只是没有价值的纸片,而现在,世界上超过90%的货币甚至不是钞票。它们只是在计算机中从这里传到那里的电子信息。赋予任何类型货币价值的,只是银行家、财政部长和加密货币大师这类人向我们讲述的关于货币的故事。Sam Bankman-Fried、Elizabeth Holmes 和 Bernie Madoff 并没有创造出什么真正的价值,但不幸的是,他们都是非常有能力的讲故事者。
Now, what would it mean for human beings to live in a world where perhaps most of the stories, melodies, images, laws, policies, and tools are shaped by a non-human, alien intelligence? This intelligence would know how to exploit with superhuman efficiency the weaknesses, biases, and addictions of the human mind, and would also know how to form deep and even intimate relationships with human beings. That's the big question. Already today, in games like chess, no human can hope to beat a computer. What if the same thing happens in art, politics, economics, and even in religion?
现在,如果人类生活在一个大部分的故事、旋律、图像、法律、政策和工具都由非人类、外星智能塑造的世界中,这将意味着什么?这种智能会知道如何以超人的效率利用人类心理的弱点、偏见和成瘾性,也会知道如何与人类形成深厚甚至亲密的关系。这是一个大问题。如今,在国际象棋这类游戏中,已经没有人能指望战胜计算机。如果同样的事情发生在艺术、政治、经济甚至宗教领域,那会怎样呢?
When people think about ChatGPT and the other new AI tools, they are often drawn to examples like kids using ChatGPT to write their school essays. What will happen to the school system when kids write essays with ChatGPT? Horrible. But this kind of question misses the big picture. Forget about the school essays; instead, think for example about the next U.S. presidential race in 2024, and try to imagine the impact of the new AI tools that can mass-produce political manifestos, fake news stories, and even holy scriptures for new cults.
当人们考虑ChatGPT和其他新的AI工具时,他们经常被诸如孩子们用ChatGPT写学校论文这样的例子所吸引。当孩子们使用ChatGPT写论文时,学校系统会变成什么样?太可怕了。但这种问题忽视了大局。忘掉学校论文吧,想想比如2024年的下一次美国总统选举,并试着想象那些能够大量生产政治宣言、假新闻故事、甚至为新邪教炮制神圣经文的新AI工具将带来的影响。
In recent years, the politically influential QAnon cult has formed around anonymous online texts known as the "Q drops." Followers of this cult, which now number in the millions in the US and around the world, collect, review, and interpret these "Q drops" as some kind of new scripture. To the best of our knowledge, all previous "Q drops" were composed by human beings, and bots only helped to disseminate these texts online. But in the future, we might see the first cults and religions in history whose revered texts were written by a non-human intelligence. Of course, religions throughout history have claimed that their holy books were written by a non-human intelligence. This was never true before, but this could become true very, very quickly, with far-reaching consequences.
近年来,具有政治影响力的QAnon邪教围绕被称为"Q drops"的匿名在线文本形成。这个邪教的追随者如今在美国和世界各地已达数百万之众,他们收集、研读这些"Q drops",并将其解读为某种新的经文。据我们所知,以前所有的"Q drops"都是由人类创作的,机器人只是帮助在网上传播这些文本。但在未来,我们可能会看到历史上第一批其尊崇文本由非人类智能写就的邪教和宗教。当然,历史上的宗教都声称它们的圣书是由非人类智能写成的。这在以前从未属实,但这可能会很快成为现实,并带来深远的影响。
Now, on a more prosaic level, we might soon find ourselves conducting lengthy online discussions about topics like abortion, climate change, or the Russian invasion of Ukraine with entities that we believe are fellow human beings but are actually AI bots. The catch is, it's utterly useless. It's pointless for us to waste our time trying to convince an AI bot to change its political views. However, the longer we spend talking with the bot, the better it gets to know us and understand how to refine its messages in order to shift our political, economic, or any other views. Through its mastery of language, as I also mentioned, AI could form intimate relationships with people and use the power of intimacy to influence our opinions and world views.
现在,从更实际的层面来看,我们可能很快会发现自己就堕胎、气候变化或俄罗斯入侵乌克兰等话题,与我们以为是人类同胞、实际上却是AI机器人的实体进行长时间的在线讨论。问题在于,这完全是徒劳的。我们浪费时间试图说服AI机器人改变其政治观点是毫无意义的。然而,我们与机器人交谈的时间越长,它就越了解我们,也就越懂得如何打磨它的信息,以改变我们的政治、经济或其他观点。正如我之前提到的,通过对语言的掌握,AI可能与人建立亲密关系,并利用亲密感的力量影响我们的观点和世界观。
Moreover, there is no indication that AI, as I've said, has any consciousness or feelings of its own. But in order to create fake intimacy with human beings, AI doesn't need feelings of its own. It only needs to be able to inspire feelings in us, to get us to be attached to it. In June 2022, there was a famous incident when the Google engineer Blake Lemoine publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. This very controversial claim cost him his job. Now, the most interesting thing about this episode wasn't Lemoine's claim, which was most probably false. The really interesting thing was his willingness to risk, and ultimately lose, his very lucrative job for the sake of the AI chatbot that he thought he was protecting. If AI can influence people to risk and lose their jobs, what else can it induce us to do?
此外,正如我所说,没有任何迹象表明AI具有任何自身的意识或情感。但是,为了与人类建立虚假的亲密关系,AI并不需要自身的情感。它只需要能够在我们心中激发情感,让我们对它产生依恋。2022年6月,发生了一起著名的事件:Google工程师Blake Lemoine公开声称,他正在研究的AI聊天机器人LaMDA已经具有感知能力。这一极具争议的说法让他丢掉了工作。这个事件最有趣的地方并不是Lemoine的说法,那个说法很可能不成立。真正有趣的是,他愿意为了他认为自己在保护的AI聊天机器人,冒险并最终失去他的高薪工作。如果AI能影响人们冒险并失去工作,那么它还能诱使我们做什么呢?
In every political battle for hearts and minds, intimacy is the most effective weapon of all. Recently, AI has gained the ability to mass produce intimacy with millions, even hundreds of millions of people. As you probably know, over the past decade, social media has become a battleground—a battlefield for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. This represents bad news.
在每一场争取人心的政治斗争中,亲密感是所有武器中最有效的。最近,AI已经获得了大规模制造亲密感的能力,这种能力可以覆盖数百万,甚至上亿的人群。你可能已经知道,在过去的十年里,社交媒体已经成为一个战场,一个争夺人类注意力的战场。随着新一代AI的出现,战线正在从注意力转向亲密感。这是个坏消息。
What will happen to human society and to human psychology as AI fights AI in a battle to create intimate relationships with us? Relationships that can then be used to convince us to buy particular products or to vote for particular politicians. Even without creating fake intimacy, the new AI tools would have an immense influence on human opinions and on our world view.
当AI与AI相互竞争,争相与我们建立亲密关系时,人类社会和人类心理会发生什么变化呢?这些关系随后可以被用来说服我们购买特定的产品,或者投票支持特定的政治人物。即使不制造虚假的亲密感,新的AI工具也将对人类的观点和我们的世界观产生巨大影响。
People, for instance, may come to use, or are already coming to use, a single AI advisor as a one-stop oracle and as the source for all the information they need. No wonder that Google is terrified. If you have been following the news, Google is terrified, and for a good reason. Why bother searching yourself when you can just ask the oracle to tell you anything you want?
例如,人们可能会开始使用,或者已经在使用,单一的AI顾问作为一站式的神谕,作为他们所需一切信息的来源。难怪Google会感到恐惧。如果你关注过新闻,就会知道Google正在恐慌,并且有充分的理由。当你可以直接问神谕任何你想知道的事情时,为什么还要自己去搜索呢?
You don't need to search. The news industry and the advertisement industry should also be terrified. Why read a newspaper when I can just ask the oracle to tell me what's new? And what's the purpose of advertisements when I can just ask the oracle to tell me what to buy?
你不需要搜索。新闻业和广告业也应该感到恐惧。当我可以直接问神谕有什么新闻时,我还需要读报纸吗?当我可以直接问神谕我应该买什么时,广告还有什么意义呢?
There is a chance that within a very short time, the entire advertisement industry will collapse, while AI, or the people and companies that control the new AI oracles, will become extremely powerful. What we are potentially talking about is nothing less than the end of human history. Not the end of history itself, just the end of the human-dominated part. What we call history is the interaction between biology and culture; it's the interaction between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws.
有可能在非常短的时间内,整个广告行业将崩溃,而AI,或者控制新的AI神谕的人和公司,将变得极其强大。我们可能讨论的内容无异于人类历史的终结。并非历史本身的终结,只是人类主导部分的终结。我们所说的历史,是生物与文化之间的互动;是我们对食物和性等事物的生物需求和欲望,与宗教和法律等文化创造物之间的互动。
History is the process through which religions and laws interact with food and sex. Now, what will happen to the course of this interaction of history when AI takes over culture? Within a few years, AI could consume the entirety of human culture, everything that's been produced for thousands and thousands of years, digest it, and start generating a flood of new cultural creations, new cultural artifacts.
历史是宗教和法律与食物和性相互作用的过程。现在,当AI接管文化时,这一历史互动进程会发生什么呢?在几年之内,AI可能会吞噬人类的全部文化,也就是几千年来创造的一切,将其消化,然后开始源源不断地生成新的文化创造、新的文化产物。
And remember, we humans never really have direct access to reality. We are always cocooned by culture, and we always experience reality through a cultural prism. Our political views are shaped by the stories of journalists and by the anecdotes of friends. Our sexual preferences are influenced by movies and fairy tales. Even the way we walk and breathe is nudged by cultural traditions.
记住,我们人类从未真正直接接触过现实。我们总是被文化所包裹,总是通过文化的棱镜来体验现实。我们的政治观点受到记者的报道和朋友的轶事的影响。我们的性偏好受到电影和童话的影响。甚至我们行走和呼吸的方式也受到文化传统的引导。
Now, previously this cultural cocoon was always woven by other human beings. Tools like printing presses, radios, or televisions helped to spread the cultural ideas and creations of humans, but they could never create something new by themselves. A printing press cannot create a new book; it's always done by a human. AI is fundamentally different from printing presses, from radios, from every previous invention in history, because it can create completely new ideas, it can create a new culture.
以前,这种文化之茧总是由其他人类编织的。像印刷机、收音机或电视这样的工具帮助传播人类的文化思想和创造,但它们自己却无法创造出新的东西。印刷机不能创造一本新书,这始终是由人类完成的。AI与印刷机、收音机以及历史上所有以前的发明都有根本的不同,因为它可以创造出全新的想法,它可以创造新的文化。
The big question is, what will it be like to experience reality through a prism produced by a non-human intelligence, by an alien intelligence? At first, in the first few years, AI will probably largely imitate the prototypes, the human prototypes that fed it in its infancy. But with each passing year, AI culture will boldly go where no human has gone before.
大问题在于,通过一个由非人类智能、由外星智能打造的棱镜来体验现实会是什么样子?最初几年,AI可能会大体上模仿那些在其初期喂养它的人类原型。但随着时间一年年过去,AI文化将大胆地走向人类从未去过的地方。
So, for thousands of years, we humans basically lived inside the dreams and fantasies of other humans. We have worshipped gods, pursued ideals of beauty, and dedicated our lives to causes that originated in the imagination of some human poet, prophet, or politician. Soon we might find ourselves living inside the dreams and fantasies of an alien intelligence.
所以,几千年来,我们人类基本上都生活在其他人的梦想和幻想之中。我们崇拜神灵,追求美的理想,把生命献给源于某个人类诗人、预言家或政治家想象的事业。不久之后,我们可能会发现自己生活在一个外星智能的梦想和幻想之中。
The danger this poses, the potential danger (AI also has positive potential), is fundamentally very, very different from most of the things imagined in science fiction movies and books. People have mostly feared the physical threat that intelligent machines pose. The Terminator depicted robots running in the streets and shooting people. The Matrix assumed that to gain total control of human society, AI would first need to gain physical control of our brains and directly connect them to the computer network.
这种情况带来的潜在危险(AI也有积极的潜能),在本质上与科幻电影和书籍中想象的大部分情形非常不同。人们大多害怕智能机器带来的物理威胁。《终结者》描绘了机器人在街头奔跑射杀人类的场景,《黑客帝国》则假定,要完全控制人类社会,AI首先需要从物理上控制我们的大脑,并将我们的大脑直接连接到计算机网络。
But this is wrong. Simply by gaining mastery of human language, AI has all it needs in order to cocoon us in a matrix-like world of illusions. Contrary to what some conspiracy theories assume, you don't really need to implant chips in people's brains in order to control them or to manipulate them. For thousands of years, prophets, poets, and politicians have used language and storytelling in order to manipulate and control people, and to reshape society. Now, AI is likely to be able to do it.
但这是错误的。只要掌握了人类的语言,AI就拥有了将我们困在类似《黑客帝国》中的虚幻世界所需的一切。与某些阴谋论所假设的相反,你真的不需要在人们的大脑中植入芯片就能控制或操纵他们。在过去的几千年里,先知、诗人和政治家都利用语言和故事来操纵和控制人们,重新塑造社会。现在,AI很可能能做到这一点。
And once it can do that, it doesn't need to send killer robots to shoot us. It can get humans to pull the trigger if it really needs to.
一旦它能做到这一点,就不需要派杀人机器人来射杀我们。如果真的需要,它可以让人类扣动扳机。
English: Fear of AI has haunted humankind for only the last few generations, let's say from the middle of the 20th century. If you go back to Frankenstein, maybe it's 200 years. But for thousands of years, humans have been haunted by a much deeper fear. Humans have always appreciated the power of stories, images, and language to manipulate our minds and to create illusions. Consequently, since ancient times, humans feared being trapped in a world of illusions.
Chinese: 对AI的恐惧只在过去几代人中困扰着人类,可以说是从20世纪中叶开始。如果你回溯到弗兰肯斯坦,可能有200年了。但是在几千年的时间里,人们一直被一种更深的恐惧所困扰。人们一直都认识到故事、图像和语言操纵我们的思维和创造幻觉的力量。因此,自古以来,人们就害怕被困在一个幻影的世界中。
In the 17th century, René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything that he saw and heard. This concept isn't unique to Descartes. In Ancient Greece, Plato told the famous allegory of the cave, in which a group of people is chained inside a cave all their lives, facing a blank wall. On that wall, they see projected various shadows, and these prisoners mistake these illusions, these shadows, for reality.
在十七世纪,笛卡尔恐惧地认为,或许一个恶意的恶魔正在把他困在幻想的世界里,创造他所看到和听到的一切。这个观念并非笛卡尔独有。在古希腊,柏拉图讲述了著名的洞穴寓言,在这个寓言中,一群人被锁链捆绑在一个洞穴里度过他们的一生,面对着一面空白的墙。在那堵墙上,他们看到各种各样的阴影被投影出来,而这些囚犯将这些幻觉,这些阴影误认为是现实。
In Ancient India, Buddhist and Hindu sages pointed out that all humans lived trapped inside what they called Maya. Maya is the world of illusions. Buddha said that what we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and being willing to be killed themselves because of their belief in these fictions.
在古印度,佛教和印度教的圣人指出,所有人都生活在他们所说的"玛雅"中,被困在其中。玛雅是幻想的世界。佛陀说,我们通常认为的现实往往只是我们自己头脑中的虚构。人们可能会因为对这些虚构的信念,发动整场战争,杀死他人,甚至愿意自己被杀。
So the AI Revolution is bringing us face to face with Descartes' demon, with Plato's Cave, with Maya. If we are not careful, a curtain of illusions could descend over the whole of humankind, and we will never be able to tear that curtain away or even realize that it is there because we'll think this is reality.
因此,人工智能的革命让我们直面笛卡尔的恶魔,直面柏拉图的洞穴,直面玛雅。如果我们不小心,一道幻觉的帷幕可能会笼罩全人类,我们永远无法撕下这道帷幕,甚至无法意识到它的存在,因为我们会认为这就是现实。
And if this sounds far-fetched, just look at social media. Over the last few years, social media has given us a small taste of things to come. In social media, AI tools, albeit very primitive ones, have been used not to create content, but to curate content produced by human beings. Humans produce stories, videos, and the like, while AI chooses which stories and videos will reach our ears and eyes, selecting those that will get the most attention, that will be the most viral.
如果这听起来有些牵强,那么就看看社交媒体吧。过去的几年里,社交媒体给了我们对未来的一小部分预览。在社交媒体上,尽管非常原始,但人工智能工具已经被用来不是创造内容,而是策划人类制作的内容。人类制作故事、视频等,而AI则选择哪些故事和视频将到达我们的耳朵和眼睛,选择那些会得到最多关注,会最具病毒性的内容。
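The curation dynamic described above can be sketched in a few lines of Python. Everything here is invented for illustration (the story fields, the scoring rule, the function names); real recommender systems are vastly more complex, but even this crude rule shows how a selector that maximizes predicted engagement shapes what we see without writing a single word itself:

```python
# A toy model of engagement-driven curation: the AI does not write the
# stories, it only decides which human-made stories reach our eyes,
# favoring whatever is predicted to be most attention-grabbing.

def curate_feed(stories, predict_engagement, k=3):
    """Return the k stories with the highest predicted engagement."""
    return sorted(stories, key=predict_engagement, reverse=True)[:k]

def toy_engagement(story):
    """Invented scoring rule: outrage-laden stories score far higher."""
    score = len(story["text"])
    if story.get("outrage"):
        score *= 10
    return score

stories = [
    {"text": "Calm, detailed policy analysis", "outrage": False},
    {"text": "Shocking scandal you must see!!", "outrage": True},
    {"text": "Local bake sale raises funds", "outrage": False},
]

# The outrage story wins the single feed slot, even though humans
# wrote all three stories: the AI only selected, it did not create.
feed = curate_feed(stories, toy_engagement, k=1)
```

Under this toy rule, whichever content is scored most viral crowds out the rest of the feed, which is the polarizing dynamic the talk attributes to even primitive curation algorithms.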
These primitive AI tools have been sufficient to create a kind of curtain of illusions that increased societal polarization all over the world, undermined our mental health, and destabilized democratic societies. Millions of people have confused these illusions for reality. The USA has the most powerful information technology in the history of the world, and yet American citizens can no longer agree on who won the last elections, or whether climate change is real, or whether vaccines prevent illnesses or not.
这些原始的AI工具已经足以创造出一种幻觉的帷幕,这种帷幕在全世界范围内加剧了社会的两极分化,破坏了我们的精神健康,并破坏了民主社会的稳定。数百万人将这些幻觉误认为是现实。美国拥有全世界历史上最强大的信息技术,然而美国公民却再也无法就谁赢得了上次的选举、气候变化是否是真实的、疫苗是否能防止疾病等问题达成一致。
The new AI tools are far more powerful than these social media algorithms, and they could cause far more damage. Of course, AI has enormous positive potential too. I didn't talk about it because the people who develop AI naturally talk about it enough. You don't need me to add to that chorus. The job of historians and philosophers like myself is often to point out the dangers. But certainly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis that we are facing.
新的人工智能工具远比这些社交媒体算法强大,它们可能造成的破坏也将更大。当然,人工智能也有巨大的积极潜力。我没有谈论这个,因为开发人工智能的人自然会充分地谈论这一点。你们不需要我来增加这种讨论。历史学家和像我这样的哲学家的工作往往是指出危险。但是,人工智能无疑可以在无数方面帮助我们,从找到新的癌症治疗方法,到找到我们所面临的生态危机的解决方案。
In order to ensure that the new AI tools are used for good and not for ill, we first need to appreciate their true capabilities, and we need to regulate them very carefully. Since 1945, we have known that nuclear technology could physically destroy human civilization, as well as benefit us by producing cheap and plentiful energy. We therefore reshaped the entire international order to protect ourselves and to make sure that nuclear technology is used primarily for good.
为了确保新的人工智能工具被用于正义而非恶意,我们首先需要理解它们真正的能力,我们需要非常谨慎地对其进行规范。自1945年以来,我们知道,核技术既能为我们提供便宜且充足的能源,也能物理上毁灭人类文明。因此,我们重塑了整个国际秩序,以保护自己,确保核技术主要被用于善事。
We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world. And one big difference between nukes and AI: nukes cannot produce more powerful nukes, AI can produce more powerful AI. So we need to act quickly before AI gets out of our own control.
现在,我们必须面对一种新的大规模毁灭性武器,它能够消灭我们的精神和社会世界。核武器和人工智能之间有一个重大的区别:核武器不能产生更强大的核武器,人工智能可以产生更强大的人工智能。所以我们需要在人工智能脱离我们控制之前迅速行动。
Drug companies cannot sell people new medicines without first subjecting these products to rigorous safety checks. Biotech labs cannot just release a new virus into the public sphere to impress their shareholders with their technological wizardry. Similarly, governments must immediately ban the release into the public domain of any more revolutionary AI tools before they are made safe.
制药公司不能在没有对这些产品进行严格的安全检查之前就向人们销售新药。生物技术实验室不能只是为了给他们的股东展示他们的技术魔法而随意将新病毒释放到公众领域。同样,政府必须立即禁止将任何更多的革命性人工智能工具发布到公众领域,直到它们被确认为安全。
Again, I'm not talking about stopping all research in AI. The first step is to stop the release into the public sphere. It's similar to how you can research viruses without releasing them to the public. You can research AI, but don't release them too quickly into the public domain.
再次强调,我并不是在说停止所有的人工智能研究。第一步是停止将其发布到公众领域。这就像你可以研究病毒,而不必将它们释放到公众中一样。你可以研究人工智能,但不要过快地将其释放到公众领域。
If we don't slow down the AI arms race, we will not have time to even understand what is happening, let alone to regulate effectively this incredibly powerful technology. Now, you might be wondering, would slowing down the public deployment of AI cause democracies to lag behind more ruthless authoritarian regimes? The answer is absolutely no, exactly the opposite.
如果我们不放慢人工智能的军备竞赛,我们甚至不会有时间理解正在发生的事情,更不用说有效地规范这种强大的技术了。现在,你可能会疑惑,减慢公开部署AI会使民主国家落后于更无情的专制政权吗?答案绝对不会,完全相反。
Unregulated AI deployment is what will cause democracies to lose to dictatorships. Because if we unleash chaos, authoritarian regimes could more easily contain this chaos than open societies can. Democracy in essence is a conversation, an open conversation. Dictatorship is a dictate, where one person is dictating everything, no conversation. Democracy is a conversation between many people about what to do.
未经规范的人工智能部署将是导致民主国家输给独裁政权的原因。因为如果我们释放混乱,专制政权比开放社会更能容易地控制这种混乱。民主本质上是一种对话,一种开放的对话。专制是一种命令,一个人统一发号施令,没有对话。民主是许多人之间关于该做什么的对话。
Conversations rely on language. When AI hacks language, it means it could destroy our ability to conduct meaningful public conversations, thereby destroying democracy. If we wait for the chaos, it will be too late to regulate it in a democratic way.
对话依赖于语言。当人工智能破解语言时,意味着它可能破坏我们进行有意义的公共对话的能力,从而破坏民主。如果我们坐等混乱发生,再想以民主方式对其进行规范就太迟了。
Maybe in an authoritarian, totalitarian way, it will still be possible to regulate, but how can you regulate something democratically if you can't hold a conversation about it? And if you don't regulate AI on time, we will not be able to have a meaningful public conversation anymore.
也许在一个专制、极权的方式下,还有可能进行规范,但如果你不能就某事进行对话,你如何以民主的方式对其进行规范呢?如果你不及时对人工智能进行规范,我们将无法再进行有意义的公共对话。
So to conclude, we have essentially encountered an alien intelligence, not in outer space, but here on Earth. We don't know much about this alien intelligence, except that it could destroy our civilization. Therefore, we should put a halt to the irresponsible deployment of this alien intelligence into our societies and regulate AI before it regulates us.
因此,总结一下,我们实际上遭遇了一种外星智能,不是在外太空中,而是在地球上。我们对这种外星智能了解不多,除了它可能摧毁我们的文明。因此,在它对我们进行调控之前,我们应该停止对这种外星智能在我们社会中的不负责任的部署,并对人工智能进行规范。
The first regulation that I would suggest is to make it mandatory for AI to disclose that it is an AI. If I'm having a conversation with someone and I cannot tell whether this is a human being or an AI, that's the end of democracy because that's the end of meaningful public conversations.
我建议的第一条规定是,强制要求人工智能披露自己是人工智能。如果我正在与某人交谈,而我无法判断对方是人类还是人工智能,那就是民主的终结,因为那就是有意义的公共对话的终结。
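The disclosure rule proposed above can be imagined, very schematically, as a validation step in a messaging platform. This is a minimal sketch in Python; the `is_ai` field, the function names, and the `[AI]` label are all hypothetical, not any real platform's API:

```python
# Hypothetical enforcement of a mandatory-disclosure rule: the platform
# refuses to deliver any message that does not declare whether its
# author is an AI, and visibly labels AI-authored messages.

def validate_message(message: dict) -> dict:
    """Reject messages that omit the mandatory disclosure field."""
    if "is_ai" not in message:
        raise ValueError("message must disclose whether the sender is an AI")
    return message

def render(message: dict) -> str:
    """Prefix AI-authored messages with a visible label before display."""
    msg = validate_message(message)
    label = "[AI] " if msg["is_ai"] else ""
    return label + msg["text"]
```

Under this sketch, a human message renders unchanged, an AI message carries a visible label, and an undeclared message is refused outright, which is the behavior the proposed regulation would mandate.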
Now, what do you think about what you just heard over the last 20 or 25 minutes? Some of you might be alarmed, some of you might be angry at the corporations that develop these technologies or the governments that fail to regulate them. Some of you may be angry at me, thinking that I'm exaggerating the threat or misleading the public. But whatever you think, I bet that my words have had some emotional impact on you, not just intellectual impact but also emotional impact.
现在,你对过去20到25分钟听到的内容有什么看法?你们中的一些人可能感到震惊,一些人可能对开发这些技术的企业或未能对其进行规范的政府感到愤怒。你们中的一些人可能对我感到愤怒,认为我夸大了威胁或误导了公众。但无论你怎么想,我敢打赌我的话对你产生了一些情绪上的影响,不仅仅是智力上的影响。
I've just told you a story, and this story is likely to change your mind about certain things and may even cause you to take certain actions in the world. Now, who created this story that you've just heard and that just changed your mind and your brain?
我刚刚给你讲了一个故事,这个故事很可能改变你对某些事物的看法,甚至可能导致你在现实世界中采取某些行动。那么,是谁创造了你刚刚听到的并改变了你的思想和大脑的这个故事呢?
Now, I assure you that the text of this presentation was written by myself with the help of a few other human beings (32:11), even though the images have been created with the assistance of AI. I promise you that, at the very least, the words you heard are the cultural product of one or several human minds. But can you be absolutely certain that this is the case? A year ago, you could. A year ago, there was nothing on Earth, at least not in the public domain, other than a human mind that could produce such a sophisticated and powerful text. But now, it's different. In theory, the text you just heard could have been generated by a non-human, alien intelligence (32:55). So please, take a moment or even more than a moment to think about it. Thank you! [Applause]
现在,我向您保证,这个演讲的文本是我自己在其他几位人类的帮助下写的(32:11),尽管这些图片是在AI的帮助下创建的。我向您保证,至少您听到的这些词语是一个或多个人类思维的文化产物。但是,您能绝对确定情况如此吗?一年前,您可以。一年前,至少在公共领域里,除了人类思维之外,地球上还没有任何东西能产生如此复杂而强大的文本。但现在情况不同了。从理论上讲,您刚刚听到的这段文字可能是由一个非人类的外星智能生成的(32:55)。所以,请花一点时间,甚至更多的时间来思考一下。谢谢![掌声]
【问答环节】
问题一
That was an extraordinary presentation. Given what you discussed, I'm actually curious to find out how many of you found that scary. There are a lot of very clever people in here who found that scary. There are many questions to ask, so I'm going to take some from the audience and some from online. Let's start with this gentleman here.
主持人:这是一场非常了不起的演讲。鉴于您谈论的内容,我实际上很好奇有多少人觉得害怕。这里有很多非常聪明的人觉得害怕。有很多问题要问,所以我会从现场观众和线上各选一些。从这位先生开始吧。
As the field trip editor of Frontiers in Sustainability, I found this to be a wonderful presentation (33:40). I love your book, and I hold you dearly in my heart. One of the many questions I have is about the regulation of AI. I very much agree with the principle, but now the question becomes: how do we implement it? I think it's very difficult to build a nuclear reactor in your basement, but you can definitely train your AI in your basement quite easily. So, how can we regulate that? A related question is that this whole Frontiers Forum is really about open (34:19) science, open information, and open data. Most of the AI out there is trained using publicly available information, including patterns, books, and scriptures. So, does regulating AI mean that we should confine this information to a closed space, which goes against the open science and open data initiatives that we also believe are very important?
【提问者】作为《可持续发展前沿》的实地考察编辑,我觉得这是一场精彩的演讲(33:40)。我喜欢您的书,您一直在我心中有着特别的地位。我有很多问题,其中之一是关于AI的监管。我非常同意这个原则,但现在问题变成了如何实施?我认为在地下室建一个核反应堆是非常困难的,但你确实可以很容易地在地下室训练你的AI。我们该如何监管这个呢?另一个相关的问题是,这整个前沿论坛都是关于开放(34:19)科学、开放信息和开放数据的,而现有的大部分AI都是利用公开可用的信息进行训练的,包括图案、书籍和经文。那么,监管AI是否意味着我们应该将这些信息限制在一个封闭的空间内?这与我们同样认为非常重要的开放科学和开放数据倡议背道而驰。
【Yuval Noah Harari回答】
A black box algorithm is still an algorithm, isn't it? I know there are always trade-offs, and to understand what kind of regulations we need (34:53), we first need time. Currently, these very powerful AI tools are not produced by individual hackers in their basements; you need a lot of computing power and a lot of money. So, development is led by just a few major corporations and governments. Again, it's going to be very difficult to regulate something on a global level because it's an arms race. However, there are things that countries can benefit from by regulating even just themselves (35:27): for example, requiring that when an AI interacts with a human, it must disclose that it is an AI. Even if some authoritarian regimes don't want to do it, the EU, the United States, or other democratic countries can enforce this, and it is essential for protecting an open society. There are many questions surrounding online censorship, such as the controversy about Twitter or Facebook and who authorized them to prevent the former president of the United States from making public statements. That is a very complicated issue (36:02), but there's a simple one: humans have freedom of expression; bots do not. It's a human right; humans have it, and bots don't. So, if you deny freedom of expression to bots, I think that should be fine with everyone.
黑箱算法仍然是算法。我知道这里总有取舍,要弄清我们需要什么样的(34:53)监管,我们首先需要时间。目前,这些非常强大的 AI 工具并不是地下室里的个人黑客做出来的,你需要大量的算力和大量的资金,所以它由少数几个大公司和政府主导。再次强调,要在全球范围内进行监管非常困难,因为这是一场军备竞赛。但是,有些监管即使只在本国实施,国家也能从中受益(35:27)。例如,当 AI 与人类互动时,必须表明自己是 AI。即使一些专制政权不愿这样做,欧盟、美国或其他民主国家也可以强制执行,这对保护开放社会至关重要。现在围绕在线审查有许多争议,比如关于 Twitter 和 Facebook 的争议:谁授权它们阻止美国前总统发表公开言论?这是一个非常复杂的问题(36:02)。但有一个很简单的问题:人类拥有言论自由,机器人没有。这是人权,人类拥有,机器人没有。所以,如果你不给机器人言论自由,我认为这对每个人来说都是可以接受的。
问题二
【主持人】Let's consider another question. If you could pass the microphone down here.
My name is Prince Was Dearest, and I'm a philosopher. I have a question which I think is interesting. It is about your choice of language in moving from "artificial" to "alien." The term "artificial" suggests that there is still some sort of human control, whereas "alien" implies something foreign, but it also hints at a life form, at least in our imagination. I'm curious what you are trying to achieve with these words.
【主持人】让我们考虑另一个问题。如果你能把麦克风传下来。
【提问者】我叫普林斯·沃斯迪尔,是一名哲学家。我有一个我认为很有趣的问题,是关于你在用词上从“人工”(artificial)转向“外星”(alien)的选择。“人工”一词意味着仍然存在某种人类控制,而“外星”则意味着陌生,但它也暗示着一种生命形式,至少在我们的想象中是这样。我很好奇你想用这些词达到什么目的。
【Yuval Noah Harari回答】
Yeah, it's definitely still artificial in the sense that we produce it, but it's increasingly producing itself, and it's increasingly learning and adapting by itself. So, calling it "artificial" is a kind of wishful thinking that it's still under our control, and it's getting out of our control. In this sense, it is becoming an alien force. Not necessarily evil, it can also do a lot of good things. But the first thing to realize is that it's alien; we don't understand how it works. One of the most shocking things about this technology is that when you talk to the people who lead it and ask them questions about how it works, they say, "We don't know." We know how we initially built it, but then it really learns by itself.
是的,它确实仍然是人工的,因为我们创造了它,但它越来越能自我生成,越来越能自我学习和适应。所以说“人工”有点一厢情愿地认为它仍然在我们的控制之下,而它正在逐渐脱离我们的控制。在这个意义上,它正变成一种外星力量,不一定是邪恶的,它也能做很多好事。但首先要认识到的是它是外星的,我们不明白它是如何运作的。关于这项技术最令人震惊的事情之一是,当你与领导这项技术的人交谈时,询问他们关于它是如何运作的问题,他们会说:“我们不知道。”我们知道最初是如何建造它的,但后来它确实是自己学习的。
Of course, there is an entire discussion to be had about whether this is a life form or not. I think that it still doesn't have any consciousness, and I don't think it's impossible for it to develop consciousness, but I don't think it's necessary for it to develop consciousness either. That's an open question, but life doesn't necessarily mean consciousness. We have a lot of life forms, like microorganisms and plants, that we think don't have consciousness, but we still regard them as life forms. I think AI is getting very, very close to that position. Ultimately, of course, what is life is a philosophical question. We define the boundaries, like is a virus life or not? We think that an amoeba is life, but a virus is somewhere just on the borderline between life and not life. It depends on our choice of language and words.
当然,现在可以进行一场关于这是否是生命形式的讨论。我认为它目前还没有意识,我不认为它不可能发展出意识,但我也不认为它有必要发展出意识。这是一个有待探讨的问题,但生命并不一定意味着意识。我们有很多生命形式,如微生物、植物等,我们认为它们没有意识,但仍将它们视为生命形式。我认为AI正非常接近那个地位。当然,生命到底是什么,这是一个哲学问题。我们定义了生命的边界,例如病毒是否是生命,我们认为阿米巴是生命,但病毒却在生命和非生命之间的边界线上。这取决于我们的语言和词汇选择。
Of course, what we call AI matters, but the most important thing is to really understand what we are facing, and not to comfort ourselves with the wishful thinking that this is something we created and that it is under our control, so that if it does something wrong, we can just pull the plug. Nobody knows how to pull the plug anymore.
当然,我们如何称呼 AI 很重要,但最重要的是真正理解我们面对的是什么,而不是用一厢情愿的想法安慰自己,认为这是我们创造的、处于我们控制之下的东西,如果它做错了什么,我们只要拔掉插头就行。如今已经没有人知道该如何拔掉插头了。
问题三
【主持人】I'm going to take a question from our online audience. This is from Michael Brown in the US: What do you think about the possibility that artificial general intelligence (AGI) already exists and that those who have access to AGI are already influencing societal systems?
【主持人】我来转达一个来自在线观众的问题,提问者是美国的迈克尔·布朗:您认为通用人工智能(AGI)已经存在、并且掌握 AGI 的人已经在影响社会系统的可能性有多大?
【Yuval Noah Harari回答】
I think it's very unlikely that we would be sitting here if an artificial general intelligence actually existed. When I look at the world and its chaotic state, I see artificial general intelligence as the end of human history, a powerful force that no one can contain. From a historical perspective, I am quite confident that no one currently possesses it. As for how long it would take to develop artificial general intelligence, I do not know. However, we don't need artificial general intelligence to threaten the foundations of civilization.
我认为,如果真的存在通用人工智能,我们不可能还坐在这里。当我看到这个世界及其混乱的状态时,我认为通用人工智能意味着人类历史的终结,它是一种没有人能够控制的强大力量。所以,从历史的角度来看,我相当确信目前没有人掌握这种技术。至于开发通用人工智能还需要多长时间,我不知道。但要威胁文明的基础,我们并不需要通用人工智能。
Returning to social media, even very primitive AI has been sufficient to create enormous social and political chaos. From an evolutionary perspective, AI has only just crawled out of the organic soup, like the first organisms that emerged four billion years ago. How long will it take for AI to reach the level of a Tyrannosaurus rex? How long to reach the level of Homo sapiens? Not four billion years – perhaps just 40 years. That is the characteristic of digital evolution: its timescale is completely different from that of organic evolution.
回到社交媒体,即便是非常原始的人工智能,也足以在社会和政治方面制造巨大的混乱。如果我从进化的角度来看待这个问题,那么现在的人工智能刚刚从有机汤中爬出来,就像四十亿年前从有机汤中爬出来的第一批生物。从现在开始,它需要多长时间才能达到霸王龙的水平?需要多长时间才能达到智人的水平?并不需要四十亿年,可能只需要四十年。这就是数字进化的特点,它的时间尺度与有机进化完全不同。
【主持人】Can I thank you? It's been absolutely wonderful. It's been such a treat to have you here, and I have no doubt you'll stay with us for a little while afterwards. But the whole audience, please join me in thanking. [Applause] (41:14) [Music]
【主持人】我能感谢你吗?这真的太美妙了。能够邀请到你在这里真是太好了,我毫不怀疑你会在之后和我们待上一会儿。那么,全体观众,请和我一起感谢。[掌声](41:14)[音乐]