Microsoft President Brad Smith declared during an online Washington Post event today that his company “will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.” This follows Amazon’s announcement yesterday that it is implementing a one-year moratorium on police use of the company’s controversial Rekognition facial recognition platform. Amazon says it hopes Congress will step up during this pause to implement appropriate rules for facial recognition technology.
These moves follow IBM CEO Arvind Krishna’s letter to Congress announcing that his company is getting out of the facial recognition business. “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values,” Krishna writes. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
The digital rights group Fight for the Future has issued a statement calling Amazon and Microsoft’s moves “essentially a public relations stunt.” The companies’ researchers will “spend the next year ‘improving’ the accuracy of their facial recognition algorithms, making it even more effective as an Orwellian surveillance tool,” the group warned. “The reality is that facial recognition technology is too dangerous to be used at all…. Congress should act immediately to ban facial recognition for all surveillance purposes.”
While Microsoft and Amazon express their pious hopes that Congress might soon pass comprehensive legislation to “implement appropriate rules” for deploying and using facial recognition technology, a quick scan of the relevant introduced bills finds little pending action on the issue.
Although Congress continues to dawdle over facial recognition legislation, the California legislature is considering a bill that a coalition of civil liberties organizations led by the American Civil Liberties Union (ACLU) warns would “legitimize the widespread use of harmful and unnecessary facial recognition on the public.”
In a letter to the legislature, the civil liberties groups argue that the California bill “allows governments to identify, locate, and track people using facial recognition, a technology that gives governments the unprecedented power to spy on us wherever we are—identifying us at protests, doctor’s appointments, political rallies, places of worship, and more.” Moreover, the legislation would undercut the outright bans on police use of facial recognition that have been adopted by several California cities, including San Francisco, Oakland, and Berkeley.
Meanwhile the secretive surveillance company Clearview AI is taking an entirely different tack—it is jettisoning all of its private business clients and will sell its facial recognition services only to law enforcement agencies. Clearview AI has created an app that enables police to match photographs to its database of over 3 billion photos scraped from millions of public websites including Facebook, YouTube, Twitter, Instagram, and Venmo.
In a 2019 FAQ no longer available to the public through the Clearview AI site, the company claimed that it
has the most accurate facial identification software in the world, with a 98.6% accuracy rate. This does not mean that you will get matches for 98.6% of your searches, but you will almost never get a false positive. You will either get a correct match or no results. We have a 30-60% hit rate, but we are adding hundreds of millions of new faces every month and expect to get to 80% by the end of 2019.
Earlier this month the ACLU filed a lawsuit in Illinois alleging that Clearview AI has violated the state’s Biometric Information Privacy Act that forbids companies from using a resident’s face scans without their consent. In its court filings, the ACLU says that Clearview AI’s business model “appears to embody the nightmare scenario” of “a private company capturing untold quantities of biometric data for purposes of surveillance and tracking without notice to the individuals affected, much less their consent.”
As demonstrations over the police slaying of George Floyd erupted across the country, Sen. Edward Markey (D–Mass.) sent a letter to Clearview AI asking if the company’s facial recognition app is currently being used by police to identify protestors.
“As demonstrators across the country exercise their First Amendment rights by protesting racial injustice, it is important that law enforcement does not use technological tools to stifle free speech or endanger the public,” wrote Markey. “The prospect of such omnipresent surveillance also runs the risk of deterring Americans from speaking out against injustice for fear of being permanently included in law enforcement databases.”
“Facial recognition is the perfect tool for oppression,” warn Woodrow Hartzog, a professor of law and computer science at Northeastern University, and Evan Selinger, a philosopher at the Rochester Institute of Technology. The technology, they add, is “the most uniquely dangerous surveillance mechanism ever invented.”
These temporary moratoria by Microsoft and Amazon are wholly inadequate responses to the dangers this technology poses to civil liberties. Both companies should join IBM in entirely abandoning a technology that the national security state is sure to abuse.