HOSTILE TAKEOVER? Believing in the innocence of AI may put us at risk. Clichés about “killer robots” and hostile machines are abundant, but that does not mean we should not plan for the very worst. Many have advocated applying the precautionary principle when weighing the downsides of AI, and in this case those downsides are far from innocent: the technology may pose a threat to the continued existence of Homo sapiens on this planet. The brilliant, world-renowned Israeli historian and futurologist Yuval Noah Harari outlined some of these theories and dangers in his book Homo Deus: A Brief History of Tomorrow, warning of where we may be headed if we do not prepare and meticulously analyze our present situation. Nor does it help to examine the role of some major companies in this dramatic battle for the future: recently, the Dutch organisation PAX published a report concluding that leading tech companies are putting the world at risk of “killer AI”. Do we have some quislings in our midst?
Google’s former motto “Don’t be evil” has, to some, become a string of words swimming in a sea of irony . . . especially since it emerged that Google had been tracking users across all its devices since 2012, not to mention its numerous censorship scandals. Luckily for Google, this is not about them; rather, their motto became the title of a report by the Dutch peace-building organisation PAX, which surveyed 50 companies to see whether they are working on technology that may be relevant to artificial intelligence in weapons, and whether they are involved in related military projects. As we shall see later, Google has actually been a positive force in this area.
. . . but other companies have shown rather shocking characteristics. The report Don’t be evil? was published by PAX, an organization that strives to build peace, as the name suggests (pax is Latin for peace). The Dutch organization has in the past published reports and analyses on conflicts around the world, in line with its stated mission of “protecting civilians against acts of violence, ending armed violence and building a just peace”. From the civilian situation in Sudan to the war in Syria, PAX has been there. Peace, however, depends not only on humans understanding each other better, but also on understanding the still-unfamiliar field of lethal autonomous weapons, colloquially known as “killer robots”.
The report sought to analyze which companies from 12 countries could potentially be involved in the development of these weapons, whether purposefully or unknowingly. The companies surveyed were active in one or more of the following areas: big tech, hardware, AI software and system integration, pattern recognition, autonomous aerial systems, or ground robots. They were then ranked on three main criteria:
1. Whether the technology being developed could be used for killer AI.
2. Whether they are involved in relevant military projects.
3. Whether they have committed not to be involved in military applications of AI in the future.
Of the 50 companies, seven were found to follow “best practice” with respect to the above criteria, 22 were classed as medium concern and 21 as high concern. You may or may not be surprised by the companies placed in these categories. Let’s start with those in the high concern category, which included Amazon, Microsoft and Intel. You may be less surprised when you learn that Amazon and Microsoft are both competing for a $10 billion Pentagon contract to build cloud infrastructure for the US military.
Google, on the other hand, is among the seven companies, out of the 50 surveyed, that the report found are not engaged in the development of AI weapons. According to PAX, Google published its AI Principles in 2018, which state that Google will not design or deploy AI in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. As mentioned earlier, while the motto may be “Don’t be evil”, in this case Google has been let off the hook and credited with best practice. Other companies in the high concern category include Palantir, a CIA-backed company that has been awarded an $800 million contract to develop an AI system that can help soldiers analyze a combat zone in real time.
The use of artificial intelligence (AI) in weapons, allowing them to select and attack targets on their own, has triggered a fierce ethical debate in recent years. Critics warn that such weapons could jeopardize international security and constitute a third weapons revolution in human history, following the inventions of gunpowder and nuclear weapons. Yuval Harari’s book Homo Deus: A Brief History of Tomorrow is definitely worth a read, as it brings an interesting perspective on where we might be headed given current circumstances, and those circumstances have drawn sighs of frustration from many sides. The report’s lead author, Frank Slijper, poses a frightening question: “Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?”
. . . Is it worth denying the inevitable? It is concerning that only a minority of companies landed in the best-practice category. Will this report serve as a catalyst for taking this area more seriously, or will it be brushed aside as these companies give lethal autonomous weapons more room to breathe? In future history books, will these companies be regarded as quislings, traitors to Homo sapiens, for knowingly or unknowingly serving the interests of the “killer robots”? Time will tell . . .