EU Moves Toward Banning Algorithmic Policing and Sentencing 

    In the United States, many top criminal-legal system operatives and thought leaders worship at the shrine of the algorithm. This devotion is yet another way in which the humanity of the people caught in the system is ignored. It is especially pronounced in more conservative states, where leaders jumped on the algorithm bandwagon early, then rushed to declare that “data-oriented” means the same thing as just or fair.

    Algorithms that include factors such as prior criminal records are used both in sentencing and in bail determinations—amplifying racial and other biases that already exist in the entered data because of police and prosecutorial discretion. Algorithms are even used to determine which communities face heavier policing.

    Europe, which already has far lower incarceration rates than the US, differs in this regard, too.

    On May 11, two committees of the European Parliament, the lower house of the European Union’s legislature, voted 84-7 (12 abstentions) to adopt a “draft negotiating mandate” on the EU’s pending AI Act. This amendment would ban “intrusive and discriminatory uses of AI systems such as … predictive policing systems (based on profiling, location or past criminal behavior).”

    In this context, “predictive policing” covers both policing and other criminal justice functions, and refers to a wide range of information, from racial demography to “personality traits and characteristics.” The latter is important, because some criminal-legal systems, as in the United States, use evidence of certain mental health disorders, especially personality disorders, as justification for harsher punishments, including the death penalty.

    The amendment, if it becomes EU law, effectively puts algorithmic policing, prosecution and sentencing off-limits—in the same pile as biometric systems that use things like race, gender, and religion to make decisions about people’s rights.

    The EU has not, for example, outright banned algorithmic sentencing. But under the new Article 5 of the AI Act, the amendment makes clear that the “placing on the market, putting into service or use of an AI system for making risk assessments of natural persons or groups thereof in order to assess the risk … for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal or administrative offense based on profiling … or on assessing personality traits and characteristics, including the person’s location, or past criminal behavior” would be prohibited.

    The two committees in question, the Internal Market Committee (IMCO) and the Civil Liberties Committee (LIBE), contain only 103 Members of the European Parliament (MEPs) out of 705 total. But they appear to be representative of the parliament’s overall political balance, which bodes well for the amendment’s chances of eventual ratification. It also means the vote cannot be characterized as coming only from the hard left, as US algorithm supporters would doubtless like to claim.

    Out of 45 full members of IMCO, 12 are aligned with the center-right European People’s Party (PPE, the largest group in the parliament as a whole); four with the right-wing European Conservatives and Reformists (ECR); and five with the far-right Identity & Democracy (ID) party. Seven represent the centrist party Renew. As for LIBE, out of 69 full members, 18 are aligned with PPE, 10 with Renew, six with ECR and seven with ID.

    Why, then, are US lawmakers and officials so much more comfortable with the robot-ification of criminal justice and all its consequences than their European counterparts?

    Perhaps because when dehumanization is the norm, as in the most incarcerated nation on Earth, newer dehumanizing tactics are pre-normalized. And perhaps because in this context, not enough Americans, despite the efforts of an activist minority, seem to care about the “justice” part of criminal justice.

    Not all of the organizations that should care about it seem to care enough, either. Witness the recent choice by major philanthropy fund Arnold Ventures to appoint Jennifer Doleac as head of its criminal justice division. Doleac is a major believer in the idea that decision-makers in the justice system should be fed more algorithmic tools, even though algorithmic justice can effectively target vulnerable groups of defendants. Yet perhaps this should not surprise us. Despite Arnold Ventures’ association with reforms, it has been perhaps the organization most responsible for tying bail reform initiatives to risk assessment algorithms throughout the US.

    Matters on this side of the Atlantic are set to get worse. Cities including Rialto, California, are now voting to centralize both private and public surveillance camera footage and hand it all to the police. That will criminalize yet more people, especially those who have no option but to live in public spaces, neatly sidestepping the agency of those members of the public who might choose not to report people to the police.

    For now, a very different approach in the EU is something US reformers will be able to point to more with hope than expectation.

    Photograph by Pietro Jeng via Unsplash

    • Rory is the founding attorney of Fleming Law LLC, an immigration law boutique in Philadelphia. He has worked for a variety of criminal justice and harm reduction nonprofits, including Law Enforcement Action Partnership and Harvard Law School’s Fair Punishment Project, and provided campaign services for over a dozen district attorney campaigns. His articles have appeared in the Atlantic, Slate and many other outlets.
