Are there people or functions at the heart of algorithms?
The use of algorithms is becoming increasingly widespread across many sectors, from financial services to politics, and from immigration to the detection of tax evasion. These tools promise to make decision-making more efficient: they can process immense amounts of data in minutes or hours, work that would otherwise require months or years of human study.
However, the issue is not as simple as it seems. Although algorithms can be advantageous in many cases, they can also overgeneralize, excluding exceptions that do not fit the categories the algorithm manages: such cases are still analyzed and processed, but never classified correctly. This error often occurs in financial services, where customers are evaluated on a set of predetermined factors such as income, age, and education. But what if the customer is an entrepreneur who has just been through a bankruptcy? His income may be low because of a momentary economic setback, yet his experience in the field gives him the ability to recover quickly. In this case, the algorithm could exclude him a priori as a possible loan applicant.
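The failure mode described above can be made concrete with a minimal sketch. The threshold, field names, and applicant data below are all invented for illustration; the point is only that a fixed rule has no way to see the context behind a low income figure.

```python
# Hypothetical rigid rule-based loan screen: approve only if current
# income clears a fixed cutoff, regardless of any other context.

MIN_INCOME = 30_000  # invented threshold for illustration

def rigid_screen(applicant: dict) -> bool:
    """Return True (approve) only if income meets the fixed cutoff."""
    return applicant["income"] >= MIN_INCOME

# An experienced entrepreneur whose income is temporarily low after a
# bankruptcy: the rule rejects him outright, ignoring his track record.
entrepreneur = {"income": 12_000, "years_in_business": 15}
print(rigid_screen(entrepreneur))  # False: excluded a priori
```

A human reviewer would weigh the fields the rule never consults, such as years of experience, which is exactly the gap the essay points to.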
The problem becomes even more serious when the use of algorithms involves politics or immigration:
In the first case, algorithms can be used to run targeted advertising campaigns based on citizens' political positions, ignoring the fact that many citizens are not entirely satisfied with the political ideology they have chosen to support.
In the second case, algorithms often tend to "reason" in generic terms, treating all immigrants as a homogeneous category. This can lead to oversimplification and misunderstanding, creating stereotypes and prejudices against people who should be considered individually and concretely.
For these reasons, human intervention remains necessary in many cases. Supervision helps refine the algorithm, from correcting mistakes to managing exceptions that the AI may not be able to detect on its own.
Furthermore, AI learning can be improved through continuous reprocessing that takes past experience into account, along with the modifications made by the human operator, including the exceptions and anomalies found in previously processed statistics: that is, both global and particular information. Algorithms of this kind can therefore be very useful tools, but collaboration with the human factor remains essential to ensure accurate analysis and to manage the exceptions that will inevitably arise.
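The human-in-the-loop reprocessing described above can be sketched as follows. Everything here is hypothetical (the rule, the field names, the override mechanism); it only illustrates the idea that operator decisions on rejected cases are recorded as labeled exceptions that a later training round can take into account.

```python
# Minimal human-in-the-loop sketch: the automated rule screens an
# applicant, a human operator reviews rejections, and any override is
# stored as a labeled exception for future reprocessing.

def model_screen(applicant: dict) -> bool:
    """Hypothetical automated rule: approve if income clears a cutoff."""
    return applicant["income"] >= 30_000

exceptions: list = []  # operator-labeled cases kept for the next training round

def review(applicant: dict, operator_approves: bool) -> bool:
    """Apply the rule, but let a human override a rejection."""
    decision = model_screen(applicant)
    if not decision and operator_approves:
        exceptions.append((applicant, True))  # record the anomaly, not just the outcome
        return True
    return decision

entrepreneur = {"income": 12_000, "years_in_business": 15}
print(review(entrepreneur, operator_approves=True))  # True: human override
print(len(exceptions))  # 1: the exception is preserved for reprocessing
```

The design choice worth noting is that the override is not discarded once the decision is made: keeping the exception list is what lets the system combine global statistics with the particular cases the essay insists on.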