Policy debates over facial recognition are still wide-eyed and emotional, but a recent conference keynote shows that some government officials are just confident enough to decide which areas not to focus on.
That is a significant step in the evolution of AI regulation. While in developed economies a pro-regulation consensus is building, one that includes top executives of Microsoft, IBM and Amazon, efforts uniformly remain unfocused and reactive.
The missing component required for public acceptance of facial recognition is trust, and, to date, government missteps and vacillation have engendered little of it. Any sure-footedness is encouraging.
Three privacy officials, from the European Union, the UK and Canada, discussed their agencies’ facial recognition priorities on a panel during the International Association of Privacy Professionals’ annual summit this month.
After highlighting two black eyes for the facial recognition industry — the ongoing Clearview AI saga and the equally brazen case of commercial biometric spying in Canada — participants turned to policy.
One speaker, Wojciech Wiewiórowski, European data protection supervisor for the EU, offered a sentiment that might result in a public stoning in some communities.
Wiewiórowski said data violations as such are not the trigger for many of the interventions his office undertakes. In one case, a bank used facial recognition for emotion recognition, monitoring its unsuspecting tellers. That is a new context for the technology, and a needlessly intrusive one.
Indeed, context is becoming a useful differentiator for regulators as they become more experienced with the technology and people’s reaction to it.
Someone walking in a town square might tolerate a number of cameras, said James Dipple-Johnstone, deputy commissioner of operations in the UK Information Commissioner’s office.
Another person walking in a residential neighborhood, he said, might bristle at knowing half as many cameras were present.
Taking another controversial stand, Wiewiórowski said that accuracy and bias in algorithms are problems, but resolving them is not his office's top priority.
Everyone involved with AI wants it to be accurate. Even as a weapon, AI must be reliable. But a government effort to improve the code itself would be redundant and a waste of resources.
“We are not going to achieve this,” he said of EU officials.
Similarly, a general ban on the use of facial recognition systems for identification purposes makes more sense than a ban on all uses of the technology, according to Wiewiórowski.
One-to-many comparison of templates is more problematic than one-to-one verification, such as the checking of biometric passports, which has legal standing in the EU and in the governments of member states.
Daniel Therrien, Canada’s privacy commissioner, likewise said that focus and nuance in investigations are generally the better approach at this point in the history of facial recognition.
Therrien said his office has an open investigation into the Royal Canadian Mounted Police’s use of Clearview AI, which is illegal in the country. As investigators look into the matter, his office is offering the Mounties guidance on using facial recognition in privacy-sensitive ways.