Artificial intelligence: more questions than certainties
May 17, 2022

Can intelligence be clearly defined? Could intelligence be produced in some way that is not a copy of our own? Will machines ever be conscious? Does artificial intelligence really exist? These and many other questions were raised at the round table “Natural intelligence versus artificial intelligence”, held at the Faculty of Philosophy as part of the second edition of the Master’s program in Science and Philosophy: Building the Future. The co-director of the master’s degree, Ana Rioja Nieto, explains that the initiative was born out of a need of our time: “to address the extraordinary development of science and technology, as well as the philosophical, legal, economic, ethical and social consequences of all these achievements”.

Rafael Caballero, professor at the UCM Faculty of Computer Science, acknowledges that this is a field far from certainties, above all because its basic concepts resist definition. “There are many definitions of what intelligence is, but until we are clear about this, we cannot say whether artificial intelligence exists or not,” he says.

Caballero notes that intelligence comes in degrees, “and it is also clear that there are different types of intelligence, but it is not known whether we are willing to extend these concepts to other, non-human systems, for example to animals”. Antonio Diéguez, Professor of Logic and Philosophy of Science at the University of Malaga, agrees that “there is no unanimous definition of intelligence; psychologists and computer scientists have offered as many as seventy. Still, if one is not too strict, there is a basic idea of intelligence that is usually assumed: solving problems; achieving objectives in an innovative, creative way and in different contexts; acquiring knowledge and applying it in new contexts, which is something machines still do not do, because they learn something but cannot apply it elsewhere. If we understand intelligence as achieving objectives in an innovative way, some machines could be there”.

But many animals would also be there. When people speak of animal intelligence, flippant remarks are made, such as that no great ape has ever composed a symphony, but “most human beings have not done so either”. That cannot be a criterion of intelligence, because if it were, “even our species would have lived almost its entire existence, until its first artistic representations, without intelligence”. For Diéguez, what we must avoid is closing off the concept of intelligence too tightly: doing so excludes many animals, and we learn nothing from it, whereas “the interesting thing is to see what we share with animals, because whatever we share with a great ape we can assume was already present in our common ancestor six million years ago”.

And the same can be done with artificial intelligence: “we can learn a lot about our own intelligence by seeing how artificial intelligence is created, even if it is narrow”. Rafael Caballero fully agrees, and believes that the development of artificial intelligence is allowing us to re-examine the very concept of intelligence we have held until now: “it is a way of understanding ourselves better, based on the reflection that arises on the other side”.

A concept that dates back to the 17th century

The dean of the Faculty of Philosophy of the UCM, Juan Antonio Valor, gives up the search for a single definition of intelligence and argues that the problem is that we now understand intelligence only from the standpoint of artificial intelligence. We are thereby reducing natural intelligence to artificial intelligence, because today we cannot think of it in any other way: “scientific-technological development has been so powerful that it is now impossible for us to think of intelligence without resorting to artificial intelligence”.

The dean links this idea to the theory of the double. According to this theory, we all have doubles, avatars, which we now generate continuously on social networks and which act according to logical structures and a certain software, and we want to become that double. In fact, “our society wants to be like the self that appears on Tinder, that double which in Norse mythology is, in the end, the one that warns us of our death. And that is the issue of our day: we want to act as our computational double, and that kills off the concept of natural intelligence, which is no longer of interest”.

According to the dean, regardless of the definition we give to intelligence, we are still living out the project on which Europe embarked in the 17th century, for since that time “a desperate and passionate reduction of the physical world, of nature, to geometry has been attempted”. One of the first steps was taken by Descartes, who went even further, wanting to reduce the world not to geometric structures but to arithmetic, because “he understands arithmetic as a relation between proportions, and from there his project consists of constructing the world”.

Later, in 1714, Leibniz began to argue with Newton and launched “an exciting project that involves reducing the world to logical structures”. Valor asserts that behind it lies a very powerful ontology: understanding things as accumulations of relations, with relations coming first and things afterwards. This sparked a heated polemic with Newton, who questioned how solidity could be reduced to logical relations and spoke instead of atoms, units of solid and impenetrable matter that cannot be divided further, against which Leibniz posited the theory of monads, units that could still be reduced. And “monadology goes further, because it reduces not only the world but also the soul and intelligence to the logical structures that compose everything”.

Leibniz’s project was not pursued during the 18th and part of the 19th century, because modernity ran through Newton, leaving Aristotle behind; it began to be recovered at the end of the 19th and beginning of the 20th century with the emergence of logical positivism. From there, the dean explains, comes the origin of computation and the whole project of reducing intelligence to artificial intelligence, understanding it in terms of logical relations. That logic would allow us to generate software capable of simulating the activity or functioning of natural intelligence.

The dean, who openly declares himself in favor of artificial intelligence in a strong sense, feels himself heir to that tradition, “incardinated, or imprisoned, or abducted by it”. It would be a matter of taking this intellectual project to its ultimate consequences, so that this software would allow us to generate not merely an image of a friend but a replica of his smell, his way of speaking, his color and his solidity. If such total replication were achieved, there would be two identical entities that would merge into a single one.

The evolution of the brain

Liset Menéndez de la Prida, director of the Laboratory of Neuronal Circuits at the Cajal Institute (CSIC), put the brain at the center of the debate: the organ that gives rise to intelligence and that also allows us to ask whether other forms of intelligence are possible.

The researcher explained that forms of cognition have been discovered in unicellular organisms, such as a slime mold capable of solving relatively complex spatial problems depending on the context, and all this without any capacity to reason. According to her, there is also a collective intelligence, not determined by any single individual, such as that of an anthill.

Menéndez de la Prida reports that the properties that emerge from this interaction have given rise to artificial intelligence algorithms, so “we are beginning to transfer this conceptualization and this takes us back to the brain, to natural intelligence, because what we are, what we are capable of doing, how we change the world or how we understand it, how we have come from the caves to here is the result of the evolution of our brain, which has allowed us to create and modify the environment”.

For Antonio Diéguez, natural intelligence is the product of an evolutionary history in which increasing intelligence has been accompanied by increasing self-awareness, in humans and in other animals alike. He wonders whether this means self-awareness is an emergent property, and whether sufficiently intelligent artificial systems will therefore also be self-aware. There are no answers, only opinions to suit every taste, although “it seems that there may be intelligent machines without any capacity for self-awareness”.

So far, the evolution of intelligence in the natural world has been astonishing, in birds and mammals as well as in octopuses. “In artificial intelligence, can we say we are progressing toward something like this? There are opinions of all kinds, including those who say we have not yet taken even a first step toward such artificial intelligence. Will we be able to have general artificial intelligence? That also depends: some say never, some say in a hundred years, and others say by 2049”.

Menéndez de la Prida adds that brain-inspired artificial networks have evolved considerably. A deep network has a first layer that receives inputs from the outside world, connected to a stack of intermediate layers, of which more or fewer can be added depending on the depth desired, and finally an output layer. These models can solve surprising problems, such as recognizing patterns from what they have learned. Today they can learn, infer and generalize many things, although problems remain, such as biases in the data we feed them.
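The layered architecture she describes can be sketched in a few lines of code. This is a purely illustrative toy (the layer sizes, ReLU activation and random weights are our own choices, not taken from the talk), showing only how inputs flow through a configurable stack of layers to an output:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_network(layer_sizes):
    # One (weights, biases) pair per consecutive pair of layers.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(network, x):
    # Propagate the input through every layer in turn.
    for i, (w, b) in enumerate(network):
        x = x @ w + b
        if i < len(network) - 1:       # hidden layers only
            x = np.maximum(x, 0.0)     # ReLU activation
    return x

# Depth is a free parameter: add more hidden layers for a deeper net.
net = make_network([4, 8, 8, 2])       # input=4, two hidden layers, output=2
y = forward(net, np.ones(4))
print(y.shape)                          # (2,)
```

Training such a network means adjusting the weights so that outputs match known examples, which is also where the data biases she mentions enter: the network can only reflect what its inputs contain.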

She also notes that, beyond mere data input, Google DeepMind’s AlphaGo program has already managed to beat humans at the game of Go, in part through self-play. The network initially learned from the games of expert players, followed by phases in which it trained on its own games, and this is precisely the mechanism the brain uses to create memory in the neurons of the hippocampus, which, we must not forget, “is the part of the brain dedicated to creating memories, with which we generate a narrative and a representation, resulting in complexity”.
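The self-play phase she mentions can be illustrated with a toy far simpler than Go. This is not AlphaGo’s actual algorithm (which combines deep networks with tree search); it is a minimal sketch of the idea: one policy plays both sides of a tiny game of Nim (take 1–3 stones; whoever takes the last stone wins) and improves a value table purely from the outcomes of its own games:

```python
import random

random.seed(0)
values = {}   # state (stones left) -> estimated win chance for the mover

def choose(stones, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)     # occasional random exploration
    # Prefer the move that leaves the opponent in the worst position.
    return min(moves, key=lambda m: values.get(stones - m, 0.5))

for _ in range(5000):                   # self-play episodes
    stones, history = 10, []
    while stones > 0:
        m = choose(stones)
        history.append((stones, m))
        stones -= m
    # The player who took the last stone won; propagate results back.
    for i, (s, m) in enumerate(reversed(history)):
        won = (i % 2 == 0)              # the last mover won
        v = values.get(s, 0.5)
        values[s] = v + 0.1 * ((1.0 if won else 0.0) - v)

# With 10 stones, taking 2 leaves 8, a losing position for the opponent.
print(choose(10, explore=0.0))
```

No expert data is needed after the start: the table improves only because the policy keeps playing against itself, which is the core of the mechanism described above.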

Deaf artificial intelligence

Dean Juan Antonio Valor points out that there are other ways of understanding intelligence, as in Aristotle, Heidegger and a contemporary author such as Byung-Chul Han. For them, intelligence is an affection by the world, not a creation of the world; it therefore runs not through activity but through receptivity, it is an affective medium and, “as Heidegger says, it is first of all a disposition of the soul, a stimulus. Intelligence is, in short, having the sensitivity to notice differences. Chul Han says that this is why intelligence hears the voice of the world, while artificial intelligence is deaf, because it does not let itself be affected by that world”.

Chul Han also says that artificial intelligence handles data additively, setting items side by side, whereas intelligence is characterized by generating totalizing unities, so that the whole is much more than the sum of the data. These philosophers also emphasize that artificial intelligence lacks negativity, lacks rupture, and is incapable of generating the absolutely new from the same data.

Rafael Caballero underlines this idea. He believes artificial intelligence could exist, but “right now it does not, and the systems we are using, which are extremely powerful, are based on machine learning techniques that take input data and relate it to output data”. Current systems therefore cannot tackle the next problem without such input data, and “that takes us a long way from a system capable of solving new problems, which could be one definition of intelligence”.

Antonio Diéguez does not know either whether we will ever have this full artificial intelligence, but “if we are serious about achieving it, everything must be very tightly controlled, because, as some specialists have already pointed out, once we have such an intelligence it will be uncontrollable, since it will be able to build artificial intelligences better than itself, and those better still, in an exponential growth of artificial superintelligence”. He acknowledges, however, that “it is an extremely improbable scenario, above all because it is not profitable for any company”.

Liset Menéndez de la Prida concludes that our species bears a responsibility for deciding where we want to advance, not only because of the threat that a general artificial intelligence might pose, but also because of everything new it brings and how it affects and threatens us in labor matters, judicial matters, responsibilities… Every advance opens new questions, although she does not see this as a threat very different from the one posed when the Arctic thaws, microorganisms begin to emerge and climate change runs its course. In the end, she thinks, everything depends on our responsibility, if we want a future in which we do not annihilate ourselves for failing to reach agreement.