published on 28 February 2018 in energy
Artificial intelligence: how can it be governed?
From ‘Blade Runner’ to ‘The Matrix’, from ‘WALL-E’ to ‘I, Robot’, artificial intelligence, whether in the form of androids, algorithms, software or cyborgs, has been part of our collective imagination for several decades.
Already in the 1940s, Isaac Asimov, famous above all for his science-fiction books, set down the Three Laws of Robotics, which remain strikingly relevant today:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Since then, techno-scientific innovation has made huge progress, leading to two facts that are fundamental for the development of AI:
- computing power has grown exponentially;
- we have a large quantity of available data.
Data and deep learning
These days everyone is talking about it: we have entered the era of big data, in which information about our health, our habits and our lives is produced, shared and processed ever more rapidly. Each month, around 30 billion posts are shared on Facebook. Every minute, 48 hours of new video are uploaded to YouTube. And while decoding the human genome once took 10 years, it now takes just one week.

But what do big data and computing power have to do with artificial intelligence? Let’s start from a definition that is essential for understanding the world of AI: deep learning. A deep learning system is a neural network made up of many layers, that is, a set of many interconnected units that communicate with one another, loosely simulating the functioning of the human brain. Just as occurs in nature, these “artificial neurons” receive an incoming signal and transmit an outgoing signal, propagating the message through the successive layers of the network. The system is not explicitly programmed: it learns by trial and error. To learn, therefore, it needs many, very many, examples, and the immense volume of data we now produce makes this possible.

Artificial intelligence, of course, is not all alike: it divides into two basic approaches, model-based and model-free. To understand the difference (this article in Il Tascabile goes deeper), consider how the human mind works when predicting phenomena and making choices: it normally relies on models to interpret reality. This is different from memorising all the information we collect and estimating the probability that something will happen, which is the mechanism on which deep learning systems are based.
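The idea sketched above, layers of simple units passing signals forward and weights adjusted by trial and error, can be shown in a few lines of code. The following is a minimal illustrative sketch, not anything from the article itself: a tiny two-layer network that learns the XOR function, a classic task that a single neuron cannot solve but a layered network can. All names and parameters here (layer sizes, learning rate, number of iterations) are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # The "activation": how strongly a unit fires given its incoming signal.
    return 1.0 / (1.0 + np.exp(-x))

# Training examples: inputs and the XOR of each pair.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connection weights: input -> hidden (8 units) -> output.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

for _ in range(20_000):
    # Forward pass: each layer receives a signal and transmits one onward.
    h = sigmoid(X @ W1)      # hidden-layer activations
    out = sigmoid(h @ W2)    # the network's current guess

    # "Trial and error": measure the error and nudge the weights to
    # reduce it (gradient descent via backpropagation).
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * (h.T @ grad_out)
    W1 -= 1.0 * (X.T @ grad_h)

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2).ravel())
```

Nothing here is "programmed" with the rule for XOR: the network starts from random weights and, shown the same four examples many times, gradually shapes its weights until its outputs match the targets. Real deep learning systems work on the same principle, only with millions of units and millions of examples.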
Technological progress and research in the sector are producing ever more sophisticated systems, as everyday life confirms: one example is the level of precision that the automatic translators available on the web are achieving. Artificial intelligence, however, also attracts attention because of a number of stories reported in the newspapers. One of the first news items to leave a deep mark on the collective imagination dates to the mid-1990s, when a computer built by IBM, Deep Blue, became the first machine in history to win a chess game against the reigning human world champion. Deep Blue lost that first match, in 1996; after an upgrade, it defeated Garry Kasparov over a full match in 1997. The story of a computer challenging a human at a game made the news again in 2016, the year in which AlphaGo beat the Korean champion Lee Sedol at Go, a very complex board game of Chinese origin.
Governing artificial intelligence
Many more examples could be given. One case that certainly attracted a great deal of interest is that of the Google Brain team, which promoted the development of systems able to devise, without supervision, strategies for keeping our communications encrypted.
In the legal sphere, much discussion was triggered by the case of COMPAS, specially developed software used by US judges to predict a defendant’s risk of reoffending, because it proved to be biased against Black defendants. There is also much debate about the systems that will operate driverless cars: what decisions should they make in situations of risk? MIT has created a platform, the Moral Machine, dedicated precisely to this question.
The examples could multiply, but the important point, to be faced without excessive pessimism yet with commitment and determination, is that artificial intelligence must be governed. Otherwise, we run the risk of being governed by it.
So what should we do? What are the values, the venues and the people that should have a say? According to experts in innovation policy, and judging by the numerous experiments launched all over the world, a necessary step is to open a debate involving the largest possible number of participants, bringing in the various players in society and guaranteeing the inclusion of as many points of view as possible. In the United Kingdom, the United States, Canada and France, for example, several experiments in public consultation on the governance of artificial intelligence, and in interdisciplinary study of the issue, have been launched. And it is not only experts in ethics and governance who advocate so-called public engagement in the management of innovation, in this case the management of artificial intelligence. Some researchers and entrepreneurs are also speaking up: one of the founders of DeepMind, Mustafa Suleyman, took this stance explicitly in a recent article in “Wired UK”: “The study of the ethics, safety and societal impact of AI is going to become one of the most pressing areas of enquiry over the coming year”, adding that it is also necessary to create “new mechanisms for decision-making and voicing that include the public directly.” In France, Cédric Villani, an internationally renowned mathematician, is in the front line in favour of public consultation on artificial intelligence.
It seems that what is needed to govern artificial intelligence is first and foremost our collective intelligence.
by Anna Pellizzone