Facial Recognition and Police: The Triumph of Data over Expertise - Metaphors Are Lies


Police, collectively, are bad at their jobs.

This is not a statement of opinion, but of fact. Police budgets have ballooned since the George Floyd protests of 2020 (there is no record of any city anywhere in the country lowering its police budget). Police have continued to kill Americans at a record rate. Their clearance rates for major crimes (a measure that generally counts only arrests, not convictions, and so is already skewed to make the police look better) are at or near record lows. When they are present, as Uvalde and New York City show, they often fail to stop the crime in progress. There are many reasons for this: mistrust in the community, training that emphasizes their role as warriors rather than peace officers, and time spent on work better handled by non-police, such as responding to mental health crises. But the fact remains that we are not getting our money's worth from the police. Which is why their reliance on poor facial recognition systems is so interesting.

The New Yorker has a deep dive into police use of facial recognition technology, and it demonstrates, as other stories have, that police rely on this technology while ignoring basic police work. The hook is the story of a man whose life was ruined by an arrest based entirely on facial recognition software, despite the fact that the briefest of investigations showed the arrested man simply was not the assailant the police were looking for. The article shows that several other people have been harmed in similar ways, that police do not reveal how often these searches are used, and that the accuracy claims of several of the largest providers of facial recognition software are effectively lies. And yet, despite its obvious problems and failings, municipalities and police departments remain enthusiastic users of the technology. I believe this is because the tech lets them replace hard-to-master expertise with easy-to-use data.

Facial recognition software is easier than police work. Feed in a picture and, voilà, out pops a suspect who MUST be the criminal because the software is "100% accurate," to quote one of the providers. Easy peasy.

Learning and applying investigative techniques, on the other hand, is difficult. The results can be uncertain, you have to work harder to prove your case, and such techniques can be hard to teach. As we have seen, the police are already bad at teaching effective crime control and investigation. In fact, in the story used for the article's hook, the police did some desultory investigating and largely found that the suspect's alibi checked out and that there was no other incriminating evidence. But they relied on the facial recognition results anyway. Why? Because, I think, it is easier to rely on the machine than on human expertise.

Most of the imitative AI systems and algorithms we have today are bad at their jobs. They write terribly, tell lies, plagiarize, and produce art that has weird deficiencies, looks generally the same, and contains the watermarks of the companies whose material was added to the training sets. They perpetuate discrimination in health care, wrongfully deny insurance claims, and deny welfare benefits in discriminatory ways. But we keep using them. Why? They are easy and cheap.

It is easier to use ChatGPT than to learn to write for yourself. It is easier to use an imitative AI art program than to pick up a pen and sketch. It is easier to claim that the machine rejected the health insurance claim than to take responsibility for your own decisions. It is easier to fall back on a faulty facial recognition match than to revamp how you approach investigations and policing. That is the true value of imitative AI and other algorithms to those with the power to impose them: they let us replace hard-to-learn, hard-to-deploy human expertise with relatively cheap data processing.

Ironically, most of these systems are not, in the long term, likely to be money savers. Lawsuits and the reputational damage done to places that use or misuse these systems have real costs that are not reflected in the initial purchase prices or operational budgets. Expertise is both more useful than these systems and generally, in the long term, less costly.

But expertise has upfront costs. You have to acquire it, which usually takes time, and the results with any individual human being are uncertain. Data processing and imitative AI systems can help build or guide expertise, but effectively creating such human-machine combinations is not as fast or as initially cheap as plugging in an algorithm or two.

And that is the danger of imitative AI and other algorithmic systems. It is not that they cannot be helpful if used correctly, or that they will eventually grow smarter than us and keep us as pets. It is that every incentive in a capitalist world points toward using these systems as replacements for expertise rather than as helpers for experts. And that inevitably results in worse outcomes for the people subjected to these systems, longer-term costs, economic ruin for actual experts, and a degradation of available expertise (after all, if there is no money in being an expert, who will go through the effort? And if there are no experts, who will the algorithms learn from?).

We have to do better. We must insist that these systems not be allowed as replacements for experts, and that they be tools, not decision makers. No one should be allowed, for example, to use a facial recognition system without training, as seven national law enforcement agencies do today. Given how poorly these systems perform in general, you could make a strong argument that no police should use facial recognition software at all. We must value skill and expertise, through laws and regulations if necessary, before we are all subject to nothing more than rule by crappy algorithm.

Only two percent of criminal cases go to trial. The rest end in plea bargains, where suspects accept a lesser sentence in exchange for giving up the ability to contest the charges. In the article, the man falsely identified said he would have taken a plea if his wife had not been able to get the identification overturned. How many others have pleaded guilty to crimes they did not commit because the police have replaced investigation with data processing? A world where that question can be asked is not one in which any of us should want to live.

