Artificial Intelligence and Privacy

Should algorithms enter the private sphere for public safety?

Developments in Artificial Intelligence have produced a range of techniques and tools for preventing and combating phenomena such as homicides, femicides, and crime in general. One of the most common resources is the information analysis algorithm: specialized software that sifts through enormous amounts of data from sources such as surveillance camera footage, police reports, and social media to identify patterns and trends. Let's examine how such procedures can be put into practice and how they can help law enforcement identify potential crime hotspots and adopt preventative measures.
On the statistical side, software used by the Los Angeles Police Department in the United States (PredPol, short for Predictive Policing) draws on crime data from recent weeks to identify high-risk areas where law enforcement can concentrate resources and take preventative measures. The system has been credited with reducing violent crime by 33% in some areas of the city.
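As a loose illustration of the idea (not PredPol's actual algorithm, which is proprietary), hotspot identification from recent incident coordinates can be sketched as simple grid-based counting: the grid cells with the most incidents become candidate patrol areas. The coordinates below are invented.

```python
from collections import Counter

def hotspot_cells(incidents, cell_size=0.01, top_n=3):
    """Bucket (lat, lon) incident coordinates into grid cells and
    return the cells with the most incidents. A hypothetical sketch,
    not the method any real predictive-policing product uses."""
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in incidents
    )
    return counts.most_common(top_n)

# Synthetic example: three incidents clustered near one location,
# one outlier elsewhere
incidents = [(34.050, -118.240), (34.051, -118.241),
             (34.052, -118.239), (34.100, -118.300)]
print(hotspot_cells(incidents, top_n=2))
```

A real system would also weight incidents by recency and crime type; this sketch only counts raw frequency inside fixed cells.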
Another application for public safety is image analysis, which examines footage from surveillance cameras or drone-mounted cameras to spot suspicious individuals or risky situations. The same analyses can help locate missing persons or wanted criminals: facial recognition algorithms can flag a wanted person who appears in a photograph or video. It is important to note, however, that facial recognition technology is still maturing, and some problems remain unresolved, such as an algorithm mistakenly identifying an innocent person as a criminal based on physical characteristics or other factors.
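A minimal sketch of the matching step, assuming faces have already been converted into numeric embedding vectors by some upstream model (the vectors and threshold below are invented for illustration): a match is declared only when the similarity between two embeddings exceeds a threshold, and setting that threshold too low is exactly what produces false positives against innocent people.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_match(probe, reference, threshold=0.9):
    """Declare a match only above the similarity threshold; lowering
    the threshold raises the risk of misidentifying a stranger."""
    return cosine_similarity(probe, reference) >= threshold

# Toy 4-dimensional "embeddings" (real systems use hundreds of dimensions)
suspect = [0.9, 0.1, 0.3, 0.2]
same_person = [0.88, 0.12, 0.31, 0.19]
stranger = [0.1, 0.9, 0.2, 0.8]
print(is_match(suspect, same_person))  # very similar vectors
print(is_match(suspect, stranger))     # dissimilar vectors
```

The choice of threshold is a policy decision as much as a technical one: it trades missed matches against wrongful flags.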
Other AI systems that can be used to prevent crime include monitoring social media to identify suspicious activity, analyzing conversations to identify words and phrases related to criminal activity, and analyzing demographic data to identify individuals who are most at risk of being a victim of a crime.
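The conversation analysis described above can be caricatured as keyword matching against a watch-list (the terms below are invented; production systems use statistical models that account for context, precisely because naive matching also flags entirely innocent uses of a word):

```python
# Hypothetical watch-list terms, for illustration only
WATCHLIST = {"threat", "weapon", "attack"}

def flag_messages(messages, watchlist=WATCHLIST):
    """Return the messages containing any watch-listed word.
    Naive whole-word matching; no context is considered."""
    flagged = []
    for msg in messages:
        words = set(msg.lower().split())
        if words & watchlist:
            flagged.append(msg)
    return flagged

msgs = ["planning the attack tonight", "lovely weather today"]
print(flag_messages(msgs))
```

Even this toy version shows the privacy tension: to flag one message, the system must read all of them.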
There are also technologies such as electronic bracelets and subcutaneous chips, both of which can provide services similar to facial recognition in terms of security and individual monitoring. Electronic bracelets have been used in many countries to control and monitor detainees, track parolees, follow patients in some hospitals, or, in some cases, monitor people with mental illness. Subcutaneous chips are tiny implants placed under the skin that serve various purposes, such as identifying pets or monitoring employees' activities within companies. From a citizen protection point of view, electronic bracelets are often used under judicial order to prevent domestic violence or stalking, particularly against women: in some jurisdictions, courts can issue restraining orders that oblige the offender to wear a bracelet that alerts the authorities in case of tampering or violation of the measures established by the judge. This gives victims a greater sense of security and authorities more preventative tools.
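The alerting logic of such a bracelet can be sketched as a geofence check (the coordinates and the 500 m radius below are hypothetical): compute the distance between the monitored person's GPS position and the protected person's, and raise an alert when it falls inside the exclusion radius.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def check_exclusion_zone(offender_pos, victim_pos, radius_m=500):
    """True if the monitored person is inside the exclusion radius
    around the protected person (hypothetical geofence logic)."""
    return haversine_m(*offender_pos, *victim_pos) < radius_m

# Roughly 111 m apart in latitude: well inside a 500 m zone
print(check_exclusion_zone((45.0000, 9.0000), (45.0010, 9.0000)))
```

A deployed system would also have to handle GPS drift, signal loss, and tamper detection, each of which can trigger false or missed alerts.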
However, it is not all sunshine and roses. Even these technological preventative tools raise privacy issues, forcing a balance between public safety and respect for personal rights, and there is ongoing debate about whether tools that go beyond the statistics of "what has happened" (such as the acquisition of facial recognition images or social media data) could intrude into people's private lives and be misused against them.
Let's therefore look at the legal frameworks, both from individual countries and from supranational bodies, that regulate the use of such technologies in more detail.
In Europe, facial recognition is governed by the European Union's General Data Protection Regulation (GDPR). In particular:
- It introduced the need to obtain explicit consent from the individual for the processing of personal data, including facial recognition data.
- In 2019, the European Parliament adopted a resolution on facial recognition technology in the European Union, calling on the European Commission to develop a common approach to ensure the protection of fundamental rights and privacy.
- In 2020, the European Data Protection Board (EDPB) published guidelines on facial recognition, stressing the importance of respecting people's fundamental rights and providing guidance on how to deploy the technology in compliance with privacy regulations.
Furthermore, some EU member states have introduced restrictions on the use of facial recognition, such as a ban on using the technology for mass surveillance purposes in public spaces.
In the United States, several laws regulate the use of technology in crime prevention and law enforcement. The Fourth Amendment to the United States Constitution protects citizens from unreasonable searches and seizures by law enforcement, so the use of technology for surveillance and data analysis must be balanced against people's constitutional rights.
Beyond Europe and the United States, many other countries are grappling with this issue, and international organizations are trying to coordinate efforts to establish common rules for using AI in crime prevention and enforcement, in the hope of averting the deaths that people have inflicted on one another throughout human history.