Why global collaboration is key for the future development of AI
“The three greatest challenges for humanity are nuclear war, climate change and technological disruption, which can only be tackled with cooperation.”
This quote comes from a recent interview Yuval Noah Harari gave to a Spanish newspaper, and it brought me back to a question that has occupied my mind over the last few months:
Are the current individual efforts around ethical AI development worth it? Before answering, let me start by summarizing (non-exhaustively) some of the most relevant efforts put in place recently:
Europe has been very active over the last months with AI-related initiatives. Three of the most relevant ones, together with their goals, are listed below:
In the USA, we can highlight efforts coming from the administration and academia on the one hand, and from the digital giants on the other. A summary of these initiatives follows (again, as a non-exhaustive exercise):
In parallel, China has launched its own plan to use Artificial Intelligence to achieve global economic dominance by 2030. In China's case, it is worth mentioning that:
· More than 50% of global AI startups funding already comes from China.
· Out of the 27 teams competing in the 2017 ImageNet challenge, more than half were China-based research teams from universities or companies, and all top performers were from China.
For a complete overview of national AI strategies, please see this great article. Even a quick read makes clear how rapidly the number of different AI strategies is growing.
Although the initiatives described above are significant steps towards establishing ethical and legal frameworks for AI, the inherent nature of Artificial Intelligence (which I already covered in this article) may render them useless for making progress on its ethical issues. To develop this idea, let's build on the World Economic Forum's list of the Top 9 ethical issues in artificial intelligence. Among the nine issues raised by the WEF, let's focus on the following:
1. Unemployment and Inequality. What happens after the end of jobs?
2. Security, evil genies and Singularity. How do we keep AI safe?
The above issues require a common global strategy. Let’s think about it for a moment:
· Even if one country or continent successfully manages the transition to job automation, it will still have to deal with the effects of its neighbors failing to do so (e.g. new migration pressures).
· AI security threats cannot be managed locally. Cybersecurity attacks, malicious uses of AI and the Singularity all depend on a global consensus on how to develop AI and control its limits.
At this point, it seems like a good time to go back to Yuval Noah Harari. As he mentions in one of his books:
“Sapiens rule the world, because we are the only animal that can cooperate flexibly in large numbers. We can create mass cooperation networks, in which thousands and millions of complete strangers work together towards common goals.”
Now is the time to put that capacity for cooperation to work on the challenges of Artificial Intelligence.