By Ellen Broad, Senior Fellow at the 3A Institute.
Earlier this month another new dystopian-sounding facial recognition application hit the headlines. This time, it was a little-known startup, Clearview AI, which is providing identity-matching software to law enforcement agencies in the United States.
Stories about how facial recognition is being used by law enforcement aren't that surprising these days. But the Clearview AI revelations, published by The New York Times, made the tech industry sit up. Here was a company that, even in a world of increasingly invasive facial recognition applications, had crossed a line. They scraped the open web, collected billions of photos of people, and built an app enabling users to match their own pictures of a person with the photos in that vast database, and with links to pages where those photos appeared.
This kind of application - breathtaking in scale, deeply invasive in implementation - has long been technically possible; it just wasn't something technology companies were keen to do (or at least, to be seen to be doing).
Until recently, conversations about facial recognition technology haven't usually gone much further than whether we should or shouldn't ban it. There has been no middle ground. Supporters are on the side of law and order, whatever that takes; opponents are radical leftists with a disregard for public safety, or Luddites opposed to technological progress. The many different choices made in designing and deploying the various tools and methods that fall under the umbrella of "facial recognition" - some of them sensible, others careless, some downright ugly - tend to get lost along the way.
Many things are technically possible. That doesn't make them safe, ethical or useful. It is technically possible to build a three-wheeled car - it just might keel over if you go round a bend at more than 40km/h. It's technically possible to manipulate software measuring carbon emissions in a car so that readings are artificially lowered, but that doesn't mean it's legally or socially permissible.
"It might lead to a dystopian future or something, but you can't ban it."
Clearview AI investor David Scalzo
Technologies are not monolithic. The design of every product rests on a range of choices and trade-offs. Some products are well-designed and conscious of their social and ecological footprints. Other products pose threats to physical safety, discriminate against people, or are designed to cheat. We need to think carefully about how we want technology to be applied - how we want it to be manifested in the world. Facial recognition is no different.
Clearview AI's facial recognition application wasn't just bad because it scraped billions of images of people without their knowledge or consent. If the details of The New York Times's investigation are accurate, the company went a lot further than that. It built software capable of monitoring who its users - mostly law enforcement agencies - were searching for. It manipulated image search results, removing some matches. Images uploaded by police were stored on Clearview's own servers, with few assurances about data security.
Are these things we want? Are these practices okay?
Clearview AI is just the latest in a long line of stories about buggy, inaccurate, invasive and outright offensive implementations of facial recognition. Face-detection settings on cameras that only work on certain faces. Image-tagging software making racist comparisons. Identity-matching databases used to investigate crime consistently misidentifying members of already marginalised groups. Software engineers matching women's faces with adult videos online, to help men check if their girlfriends had ever acted in porn.
This month European Union regulators indicated they were considering a potential ban on facial recognition technology for up to five years - with some exceptions - while they figure out the technology's impact, and the regulatory issues that need to be tackled. Google and Facebook have already expressed cautious support for such a ban.
Some cities have already started curtailing facial recognition; in San Francisco, the city's Board of Supervisors voted in 2019 to ban local law enforcement from using the technology. In New York state, the education department demanded a school district cease using the technology in public schools.
Speaking to The New York Times, one investor in Clearview AI, David Scalzo, was doubtful about the power of any prohibition. Technology can't be banned, he said: "It might lead to a dystopian future or something, but you can't ban it."
It's true that a technology, once discovered, can't be undiscovered (though some have been forgotten). But throughout history, societies have temporarily banned the development or certain applications of technologies when it's uncertain whether they will do more harm than good: think nuclear power, or gene editing. Sometimes temporary bans become permanent ones. Sometimes they're lifted once we've used the breathing space to figure out the rules of engagement.
And yes, it's true that bans can be broken. But technologies don't break bans - people do. People who do not respect or recognise the concerns of the societies they live in.
Technologies do not lead us into a dystopian future: we decide the future we want.
This article was first published in Inside Story.