Nowadays, a vast number of books discuss both the present and the future of artificial intelligence. Most of them repeat truisms and lack truly original contributions, but some offer genuinely novel points of view or approaches. Among the latter is Life 3.0: Being Human in the Age of Artificial Intelligence by physicist, cosmologist and MIT professor Max Tegmark.
What I liked most about Tegmark’s text relates to the disciplines he is an expert in. Starting at the atomic level and the most basic strata of matter, he explains how information and memory can acquire an existence of their own, regardless of whether their physical substrate is carbon or silicon, and — since computing is a transformation from one state of memory to another — the same applies to intelligence and learning. The only limit to their development is the one established by the laws of physics.
At the opposite end, in its final chapters, the book addresses the impact of the advent of AI on a cosmological scale, both temporally and spatially, connecting computational capacity and black holes with remarkable conclusions and reminding us of the blurred boundaries between matter and energy.
However, what I want to address in this article is none of the above, but rather a series of reflections on the chapter that focuses on the near future, on the coming years and decades. The chapter opens with a quote by Irwin Corey:
“If we don’t change direction soon, we’ll end up where we’re going.”
Tegmark reviews the most recent advances in AI and the reasons for its application to common areas of the public sphere, such as justice, health, employment, safety and legislation. In addition, he raises quite a few questions that should be part of the political agendas of all governments and legislatures worldwide.
At the beginning of his reflection, he asks some very general questions:
How can we change our legal systems to be more fair and efficient and to keep pace with the rapidly changing digital landscape?
How can we increase our prosperity through automation without depriving people of their income or the purpose of their lives?
which gradually become clearer as he elaborates on different topics. For example, he comments on the dream of having an efficient, fast and impartial justice system that is optimized by means of technology:
Some experts dream of automating [justice] completely through robojudges: AI systems that constantly apply the same high legal standards to every judgment without succumbing to human errors such as bias, fatigue or lack of the latest knowledge.
Now, what if these robojudges were biased, made mistakes or were hacked?
Would everyone feel that they understand the logical reasoning of AI enough to respect its decision? Machine learning is more effective than traditional algorithms, but at the cost of being inexplicable. If defendants wish to know why they were convicted, shouldn’t they have the right to a better answer than “we trained the system on lots of data and this is what it decided”?
However, a good legal system is not only based on the correct application of laws; the content of those laws is essential as well. The development of our regulatory framework clearly needs to become increasingly agile given the pace set by technology, and experts in that field should actively participate in the legislative function.
Should AI-based decision-support systems for voters and legislators come next, followed by outright robolegislators?
Justice and laws are only one of the areas Tegmark discusses regarding the immediate future. He also writes about the tense relationship between privacy and freedom of information that we already face today. Where should we draw the line between justice and privacy, between the protection of society and personal freedom? Is the combination of AI-enhanced biometric identification and cameras in public spaces a positive development, or will it result in an Orwellian state? Should we feel harassed by this permanent surveillance, or relieved that it may defend us if deepfakes become widespread?
The author also discusses the possible granting of rights to machines. When an autonomous vehicle causes an accident, who should be responsible? Its passengers, the owner, the manufacturer, the ‘mobility as a service’ provider? Or the car itself? And if machines become subject to civil liability, shouldn’t we grant them some other legal rights or obligations as well?
The chapter on weapons, their regulation and their use by armies and security forces is especially disturbing. Should we allow countries to develop increasingly lethal AI-based weapons (complemented by other technologies, such as drones)? Should we regulate a not-so-distant future of autonomous weapons as we are currently regulating autonomous vehicles? How can we make sure that they will always be used in the service of law and justice? Or that they will not malfunction? Some perspectives are rather unsettling: https://youtu.be/9CO6M2HsoIA
Of course, the chapter also focuses on the impact of technological disruption on employment and, as an immediate consequence, on inequality, a topic we have already discussed on other occasions. It also briefly mentions the impact on other equally relevant areas, resulting from the combination of AI with other technologies such as IoT, blockchain, 3D printing or biotechnology: on health systems and their sustainability and accessibility; on the role of developing countries in global production chains if they are not digital enablers; on the financial sphere, questioning the current paradigms of currency, financial markets and central banks; and on urban mobility, regarding the transportation of both people and goods by self-driving vehicles.
In general, the answers to questions on this topic are flawed in two ways: first, because they propose to interrupt, delay or prohibit a technological development that is unstoppable (and, under the appropriate conditions, desirable); second, because they usually offer only black-or-white solutions. Either we ban biometric video surveillance in the streets or we abstain from controlling it entirely.
As Tegmark states, it is not an all-or-nothing decision; it is rather about the degree to which we want to deploy AI in our lives, and at what rate. Returning to the question of justice, the book asks:
Do we want human judges to have AI-based decision support systems, just like tomorrow’s medical doctors? Do we want to go further and have robojudge decisions that can be appealed to human judges, or do we want to go all the way and leave even the final sentence to machines?
Of course, the answer to these questions and to all they imply is far from simple. It requires deep, calm, multidisciplinary and global reflection. In other words, it takes time. The bad news is that there is not much time left, and we are granting ourselves the luxury of squandering it: technological development is not slowing its frantic pace; on the contrary, it is accelerating.
One might expect the contributions of a physicist and cosmologist to the dilemmas brought about by AI to be limited to the independence of intelligence from its material substrate, or to the possibilities of its expansion through the Universe at speeds close to that of light.
Fortunately, Max Tegmark became aware of the importance of immediately addressing the issues that affect our most immediate future — those requiring anticipation — and co-founded the Future of Life Institute, backed by Erik Brynjolfsson, Nick Bostrom, Elon Musk and Martin Rees, among others. The institute’s mission is to catalyze and support initiatives to safeguard life and develop optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and their challenges. Its motto is somewhat more drastic:
Technology is giving life the potential to flourish like never before… or to self-destruct. Let’s make a difference!
Indeed, time is running out if we want to write the history of the future that technology holds. Actually, I don’t believe we have a choice: the impact will be such — it already has been — that we cannot afford to sail into the immediate future without a helm, at the mercy of the tide. Quoting Irwin Corey once more: “If we don’t change direction soon, we’ll end up where we’re going.” Surely we can agree that this is a risk not worth running…