Machina Delinquere Non Potest


Can an Artificial Intelligence be legally responsible for its actions?

Before the spread of Artificial Intelligence systems, an unwritten principle, valid in virtually every legal system, held that an "inanimate" subject cannot be held responsible for a crime. This maxim has become more relevant than ever in the era of the practical realization of the hopes and visions laid out by the renowned mathematician Alan Turing, whose work led to the birth of electronic computers as we know them today and to their evolution into Artificial Intelligence. A legitimate question therefore arises: is the maxim still valid in today's world of Artificial Intelligence?
Starting from the etymology of the phrase, the term "machina" comes from Latin and means machine, mechanism, or instrument. The maxim is therefore a kind of moral affirmation of the inability of machines to commit crimes, grounded in the conviction that only human beings can be held responsible for their actions, because only they possess free will and the capacity to understand and to will, elements that machines lack.
This interpretation of the law remains valid to this day and does not seem to admit many exceptions. Even in the modern era, in which technology has become an ever more pervasive part of daily life, machines are considered incapable of committing crimes. By their nature, machines cannot consciously decide to carry out an illicit action, nor do they possess the mental state necessary to commit a crime, such as "fault" or "intent."
However, in the field of modern technology law, given the increasing use of machines and algorithms in the most varied situations, it is possible, from a certain point of view, to frame a machine as an implicit "co-author" of some criminal activities. Consider an accident in which a self-driving car hits a pedestrian: even though the vehicle is not, in and of itself, guilty, the authorities could require the car manufacturer to answer for a possible system failure or for tampering (negligent or otherwise), which could give rise to a genuine criminal trial. Another example is a virtual assistant that helps a person carry out a series of criminal actions: such an "assistant" could then be used as evidence in a criminal trial. In these cases, the machine becomes a kind of indirect secondary actor in illicit conduct and could therefore be deemed responsible in some measure.
Furthermore, consider that some countries are developing autonomous weapons with the power to decide and act without human intervention: machines that could be used to kill humans in conflict situations. In this light, the machine itself takes on a different significance, since it is armed and autonomous, able to cause damage and to kill on the basis of the algorithms it executes. This raises the question of whether such machines can be considered "responsible" for their actions.
A further problem is whether machines can be regarded as representatives of human interests, a position that, as is evident, could entail numerous complexities. The responsibility for, and deployment of, war machines created by humans also remains to be settled. In the case of damage or death caused by an autonomous weapon, for example, who would be held responsible: the machine's manufacturer, its owner, its programmer, or (assuming an error in the software) the machine itself? Clearly, an autonomous machine possesses no moral conscience of its own and therefore cannot weigh its behavior against its potential negative effects. This is precisely why humanists and sociologists have questioned whether the creation of such machines is ethically justified, and whether it is right to allow the use of this type of weapon in conflict situations (quite apart from the ethical dilemmas of the conflict itself). In addition, there would be implications for compliance with international standards, the laws of war, international human rights law, and redress for damage caused by their use.