
In 2001, the UK introduced an early AI program called OASys as part of what amounts to a pre-crime strategy aimed at former inmates. Since then the program has expanded with little accountability, leading some to question the legality of a system that appears to pre-emptively violate citizens' rights on the presumption that they are about to commit a crime. The approach has drawn considerable interest from American authorities, and similar programs are sold to US law enforcement today.
From jstor.org:
With the dawn of artificial intelligence (AI), a slew of new machine learning tools promise to help protect us—quickly and precisely tracking those who may commit a crime before it happens—through data. Past information about crime can be used as material for machine learning algorithms to make predictions about future crimes, and police departments are allocating resources towards prevention based on these predictions. The tools themselves, however, present a problem: The data being used to “teach” the software systems is embedded with bias, and only serves to reinforce inequality.
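To make the feedback loop described in that passage concrete, here is a minimal, hypothetical sketch of a predictor trained on historical arrest records. Everything in it is invented for illustration; it is not any vendor's actual system. The point is that if past enforcement was concentrated on one neighborhood, the model learns that concentration as if it were risk and recommends reinforcing it.

```python
# Hypothetical illustration of predictive policing trained on biased historical
# data. All field names and numbers are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical" records: [prior_arrests, neighborhood_id].
# Neighborhood 1 was historically over-policed, so it has more recorded arrests
# even if underlying offending rates were similar.
n = 1000
neighborhood = rng.integers(0, 2, size=n)               # 0 or 1
prior_arrests = rng.poisson(lam=1 + 2 * neighborhood)   # over-policing inflates counts
X = np.column_stack([prior_arrests, neighborhood])

# Label = "re-arrested later", which again reflects where police patrolled,
# not necessarily who actually offended.
rearrest_prob = 0.2 + 0.3 * neighborhood
y = rng.random(n) < rearrest_prob

model = LogisticRegression().fit(X, y)

# The model now rates residents of the over-policed neighborhood as higher risk,
# directing more patrols there and generating still more skewed data.
print(model.predict_proba([[0, 0], [0, 1]])[:, 1])
```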
From phys.org:
The available information suggests that OASys is calibrated to predict risk. The algorithms consume the data probation officers obtain during interviews and information in self-assessment questionnaires completed by the person in question. That data is then used to score a set of risk factors (criminogenic needs). According to the designers, scientific studies indicate that these needs are linked to risks of reoffending.
The risk factors include static (unchangeable) things such as criminal history and age. But they also comprise dynamic (changeable) factors. In OASys, dynamic factors include: accommodation, employability, relationships, lifestyle, drug misuse, alcohol misuse, thinking and behavior, and attitudes. Different weights are assigned to different risk factors, as some factors are said to have greater or lesser predictive ability.
So what type of data is obtained from the person being risk-assessed? OASys has 12 sections. Two sections concern criminal history and the current offense. The other ten address areas related to needs and risk. Probation officers use discretion in scoring many of the dynamic risk factors.
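Piecing together the description above, a risk engine of this kind reduces to a weighted sum over scored factors. The sketch below uses the factor names from the article, but the weights and the 0-to-2 scoring scale are invented placeholders, since the published material does not disclose the real calibration.

```python
# Hypothetical sketch of a weighted risk score built from static and dynamic
# factors. Factor names follow the article; the weights and the 0-to-2 scoring
# scale are invented placeholders, not the real OASys calibration.

STATIC_WEIGHTS = {"criminal_history": 3.0, "age": 1.5}
DYNAMIC_WEIGHTS = {
    "accommodation": 1.0,
    "employability": 1.0,
    "relationships": 0.8,
    "lifestyle": 0.8,
    "drug_misuse": 1.2,
    "alcohol_misuse": 1.0,
    "thinking_and_behavior": 1.5,
    "attitudes": 1.5,
}


def risk_score(static_scores, dynamic_scores):
    """Combine factor scores (0 = no problem, 2 = significant problem) into one
    weighted total. Dynamic scores come from the probation officer's interview
    and the self-assessment questionnaire, and many are discretionary."""
    total = sum(STATIC_WEIGHTS[k] * v for k, v in static_scores.items())
    total += sum(DYNAMIC_WEIGHTS[k] * v for k, v in dynamic_scores.items())
    return total


# Example assessment: every factor scored 0, 1, or 2 by the assessing officer.
print(risk_score(
    {"criminal_history": 2, "age": 1},
    {"accommodation": 2, "employability": 1, "relationships": 0, "lifestyle": 1,
     "drug_misuse": 0, "alcohol_misuse": 0, "thinking_and_behavior": 1, "attitudes": 1},
))
```

Because many of the dynamic scores are discretionary, two officers assessing the same person can produce different totals, which feeds directly into the accountability concerns raised above.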
Back in 2020, the ACLU sued Clearview AI in Illinois over its faceprint database; the following year, immigrants' rights groups filed a separate suit in California over the software's use by law enforcement there. From cnn.com:
Clearview AI, the controversial firm behind facial-recognition software used by law enforcement, is being sued in California by two immigrants’ rights groups to stop the company’s surveillance technology from proliferating in the state.
The complaint, which was filed Tuesday in California Superior Court in Alameda County, alleges Clearview AI’s software is still used by state and federal law enforcement to identify individuals even though several California cities have banned government use of facial recognition technology.
The lawsuit was filed by Mijente, NorCal Resist, and four individuals who identify as political activists. The suit alleges Clearview AI’s database of images violates the privacy rights of people in California broadly and that the company’s “mass surveillance technology disproportionately harms immigrants and communities of color.”
The ACLU's Illinois case ended in a settlement in its favor. From aclu.org:
Under a legal settlement filed in court today, Clearview AI — a secretive face surveillance company claiming to have captured more than 10 billion faceprints from peoples’ online photos across the globe — has agreed to a new set of restrictions that ensure the company is in alignment with the Illinois Biometric Information Privacy Act (BIPA), a groundbreaking Illinois privacy law.
The central provision of the settlement restricts Clearview from selling its faceprint database not just in Illinois, but across the United States. Among the provisions in the binding settlement, which will become final when approved by the court, Clearview is permanently banned, nationwide, from making its faceprint database available to most businesses and other private entities. The company will also cease selling access to its database to any entity in Illinois, including state and local police, for five years.