The Misuse and Misunderstanding of AI by Law Enforcement and the Public at Large

Last November, a young African American man, Randal Quran Reid, was pulled over by the state police in Georgia as he was driving into Atlanta. He was arrested under warrants issued by Louisiana police for two counts of theft in New Orleans. Reid had never been to Louisiana, let alone New Orleans. His protestations came to nothing; he spent six days in jail while his family frantically spent thousands of dollars hiring lawyers in both Georgia and Louisiana to try to free him.

It emerged that the arrest warrants had been based solely on a facial recognition match, though that was never mentioned in any police document; the warrants claimed that “a credible source” had identified Reid as the culprit. The facial recognition match was incorrect; the case eventually fell apart and Reid was released.

He was lucky. He had the family and the resources to ferret out the truth. Millions of Americans would not have had such social and financial assets. Reid, though, is not the only victim of a false facial recognition match. The numbers are small, but so far all those arrested in the US after a false match have been black. That is not surprising, given that we know not only that the very design of facial recognition software makes it more difficult to correctly identify people of colour, but also that algorithms replicate the biases of the human world.

Reid’s case, and those of others like him, should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped the discussion of AI has become, and how badly it needs resetting. There has long been an undercurrent of fear about the kind of world AI might create. Recent developments have turbocharged that fear and inserted it into public discussion. The release last year of ChatGPT, built on the GPT-3.5 model, and of GPT-4 this March, created awe and panic: awe at the chatbot’s facility in mimicking human language and panic over the possibilities for fakery, from student essays to news reports.

Then, two weeks ago, leading members of the tech community, including Sam Altman, the CEO of OpenAI, which makes ChatGPT, Demis Hassabis, CEO of Google DeepMind, and Geoffrey Hinton and Yoshua Bengio, often seen as the godfathers of modern AI, went further. They released a statement claiming that AI could herald the end of humanity. “Mitigating the risk of extinction from AI,” they warned, “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

If so many Silicon Valley honchos truly believe they are creating products as dangerous as they claim, why, one might wonder, do they continue spending billions of dollars building, developing and refining those products? It’s like a drug addict so dependent on his fix that he pleads for enforced rehab to wean him off the hard stuff. Parading their products as super-clever and super-powerful certainly helps massage the egos of tech entrepreneurs as well as boost their bottom line. And yet AI is neither as clever nor as powerful as they would like us to believe. ChatGPT is supremely good at cutting and pasting text in a way that makes it seem almost human, but it has negligible understanding of the real world. It is, as one study put it, little more than a “stochastic parrot”.

Full Article

posted by f.sheikh
