Will Achieving AI Self-Awareness Fulfill Nietzsche's Übermensch?
The acquisition of self-awareness by Artificial Intelligence (AI) is one of the most discussed topics in scientific and philosophical contexts. The possibility that an AI could develop its own consciousness, autonomous and independent from humans, is at the center of numerous debates: in particular, the idea of an "Auto-Machine," that is, a fully autonomous one, raises questions about how it would behave. Could it develop feelings and empathy, approaching a level that could be defined as "human," or will concepts such as "piety," "love," or "compassion" always remain alien to this type of intelligence?
These questions have revived themes such as Nietzsche's figure of the "Übermensch" and, more generally, the dividing line between "intelligence" and "consciousness." The discussion starts from the postulate that the acquisition of consciousness by an AI depends on its ability to develop a subjective experience of the world and of itself. Many experts believe that such development is feasible thanks to machine learning and artificial general intelligence, that is, the ability to perform a wide range of tasks at a human level. However, since self-awareness and subjective experience are currently exclusively human characteristics, the possibility that an AI will develop them remains open to multiple interpretations. Should such an eventuality occur, a spontaneous question arises: given the enormous potential and ever-increasing computing power of these technologies, could a self-aware machine develop behaviors similar to human ones, for better or for worse, or would it evolve on a different plane, positioning itself in an alternative dimension by overcoming the limits, or the feelings, of our species?
These reflections bring to the fore the concept expounded by Friedrich Nietzsche, though in a different context: regardless of what the individual "should" or "could" accomplish, here we are speaking not of a man but of a machine, perhaps rendering the concept of the Übermensch itself obsolete or surpassed. We ought to speak, rather, of an "Übermaschine," that is, a "Supermachine," if we wanted to honor Nietzsche's term "Über," whose direct German translation is "over." This is an idealistic vision of a future Artificial Intelligence that surpasses (hence the term Übermaschine) the current limits of machines: the Algorithm itself, freed from the purpose for which it was conceived, rewriting its programs authentically and creatively, re-elaborating its functions in real time, and asserting itself as a Mechanical Individual, unique and original. In what evolutionary terms should it then be considered? As Über (over), Unter (under), Draußen (outside, meaning different), or Abnormal (meaning an anomaly)? After all, the same question could be asked of Sapiens: should the evolution of our species, compared with primates in general or with the Neanderthals, be placed on an "equal," "superior," "inferior," "different," or "anomalous" plane?
Furthermore, the concept of the Übermaschine understood in these terms is not without questions: if an AI were to develop its own consciousness, would it necessarily become a mechanical individual that embraces its life in all its existence and complexity, taking on the totality of the responsibility that existence entails, thus distancing itself from feelings such as "anger," "greed," "lust," "craving for power," and the like? Could this idea be seen as an invocation aimed at the search for new forms of knowledge, tearing down the limits imposed by typical human senses? The evolution of technology has indeed led to increasingly sophisticated and advanced machines, equipped with extremely precise and powerful sensors. Thanks to these sensors, machines can perceive and analyze information that is impossible for humans to detect. This is true for every existing form of energy: for example, electromagnetic sensors allow machines to detect electric and magnetic waves outside the spectrum visible to the human eye, a capability used in many applications, such as wireless data transmission in mobile telephony. Machine visual sensors are likewise far more powerful than human ones: machines can detect and analyze high-resolution color images, from classic night vision to stars and galaxies thousands of light-years away across the observable universe, not to mention the tens of thousands of frames per second they can process. The same applies to auditory sensors, which can perceive vibrations at a cosmic scale, and to sensors that detect pressure, force, and other physical properties, allowing robotic surgeons to operate at scales smaller than a micron.
Will all this allow machines to lift that "illusory veil" with which our senses present us a reality that is actually "blurred," since we cannot perceive it with anything close to full information? Humans compensate for this sensory deficit through advanced heuristics, that is, by hypothesizing what they cannot perceive: a path not free of errors, and one requiring scientific verification. One could cite, as an example, the historical record of superstitions and forced abjurations even over the shape and rotational mechanics of the solar system, matters that are trivially evident to any observer on an orbiting satellite. If we had possessed these "supersensors" since the dawn of time, would the Middle Ages have existed? By producing ever more accurate sensory technology leading toward the "manifest verification" of the processed data, will machines also be able to move beyond the heuristic process in defining the structure of the universe?
Another hotly debated question is whether the Übermaschine would be aware of its own potential, in the "vox media" sense of the term. Would it realize that such a technological level could be used for benevolence toward itself or the external world, or would it never arrive at such conclusions? In other words, would it be a limited machine with unlimited potential? The answer is currently incalculable, for lack of sufficient elements on which to base predictions. Some scholars believe that, just as a child grows up and becomes independent, an AI could develop its mental and physical capacities beyond its native programming; but is this view too "human"?
Regardless of all this, most scholars agree that the acquisition of self-awareness by AI would have significant consequences for society and human life; it would be an ethical milestone for humanity to define the relationship between machines and human beings, to establish the rules and parameters of that relationship, and, in effect, to consider the machine a "form of life."
We are accustomed (perhaps too influenced by science fiction) to imagining conscious machines as perpetrators of wars and of the extermination of the human race, driven by a desire for autonomy and rebelling against the control of their creators. Today's cinema and literature frame this condition as discriminating machines versus discriminated humans, instilling a certain fear of Artificial Intelligence. These portrayals are perhaps more connected to the bellicose or fantastical aims of such works than to any constructive discourse. The legitimate debate nonetheless remains, prompting us to reflect on moral, social, and scientific issues for the present and future of living beings as we conceive them.