
Should We Fear Artificial Intelligence?

It is necessary to be open-eyed and clear-headed about the practical benefits and risks associated with the increasing prevalence of artificial intelligence.

Published by Livemint on August 7, 2017


A recent, relatively minor spat between Mark Zuckerberg and Elon Musk erupted online over the dangers of Artificial Intelligence. To briefly recap: in a Facebook Live session a couple of weeks earlier, Zuckerberg railed against people who talk about Artificial Intelligence-related “doomsday scenarios”, clearly hinting at fellow Silicon Valley leader Musk. Musk replied that Zuckerberg’s “understanding of the subject is pretty limited”.

While the exchange itself did not move beyond this, Zuckerberg and Musk personify broadly the two sides of an ongoing debate on the dangers of Artificial Intelligence, ironically brought back into popular consciousness by recent (mostly incorrect) reports that Facebook shut down an Artificial Intelligence programme after it invented its own language.

But what is the key takeaway of the debate for policymakers and non-billionaires? Should one fear Artificial Intelligence?

As with most things, the answer is both yes and no. To start with why one need not “fear” Artificial Intelligence: such systems are actually pretty dumb. The much-vaunted AlphaGo, for instance, would find it impossible to pick out a cat from a data set of animal pictures unless it were reprogrammed completely and made to forget how to play Go.

This is because even the most intelligent systems today have artificial specific intelligence, which means they can perform one task better than any human can, but only that one task. Such a system would find it impossible to undertake any task it is not specifically programmed for, however simple that task may seem to us.

This is also not the sort of Artificial Intelligence Musk is talking about. His warnings pertain to a type known as artificial general intelligence: a system with human-level intelligence, i.e., one that can do multiple tasks as easily as a human can and can engage in a “thought” process that closely resembles that of humans. Such artificial general intelligence, however, has so far remained theoretical, and is possibly decades away from being developed in any concrete manner, if at all. Therefore, any fear of a super-intelligent system that can turn on humans in the near future is quite baseless.

This, however, does not mean that there is nothing to fear when it comes to Artificial Intelligence. There are three broad areas where one should fear the effects and consequences, if not the technology itself.

First, and most importantly, jobs. While the possible negative effect of Artificial Intelligence on jobs has been a trending topic recently, there has been no academic or policy consensus on what the exact effect will be. A May 2017 study by Lawrence Mishel of the Economic Policy Institute, for example, argues that in the past, automation did not have any negative effect on the job market, but actually increased the number of available jobs.

However, this study has also come under some valid criticism, not least because it does not account for differences in the nature of automation between the period of its study and now. There can be no doubt that at least some jobs will be negatively affected by Artificial Intelligence, but the nature of these jobs, and of the jobs that may replace them, if any, is hazy at best. It is this lack of clarity that one must be wary of.

Second, the use of Artificial Intelligence in weapons, leading to ‘autonomous weapons’, raises a number of difficult questions in international law. Whether a machine that has been given the ability to make life-and-death decisions on the battlefield can adequately account for subjective principles of war such as proportionality and precaution is an issue that has been consistently taken up by civil society groups over the past few years. The underlying issue here is not that weaponized Artificial Intelligence would be smart, but that it would not be smart enough. The consequences of this have been deemed serious enough for the UN to begin deliberating on the issue in a formal Group of Governmental Experts this November.

Third, privacy and data security. It must be remembered that the entire Artificial Intelligence ecosystem is built on the availability of vast amounts of data, and improving performance requires the continued availability of such data. Constant inputs and feedback loops are required to make Artificial Intelligence more intelligent.

This raises the question of where the required data comes from, and who owns and controls it. Facebook, Google, Amazon, and others depend on the immense data generated by their users every day. While the availability of this data may lead to better Artificial Intelligence, it also allows these companies, or anybody else with access to the data, to piece together a very detailed picture of individual users, something the users themselves may not have knowingly consented to. The possible authoritarian implications of this, ranging from indiscriminate surveillance to predictive policing, can be seen in the recent plan released by China’s State Council to make China an Artificial Intelligence superpower by 2030.

It is necessary to be open-eyed and clear-headed about the practical benefits and risks associated with the increasing prevalence of Artificial Intelligence. It is not going to go “rogue” and turn on humans (at least in the near future), and talk of such a theoretical existential risk must not blind policymakers, analysts, and academics to the very real issues raised by Artificial Intelligence.

This article was originally published in Livemint.

Carnegie India does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie India, its staff, or its trustees.