The recent announcement from IBM to withdraw from all research, development, and offerings of facial recognition will not stop facial recognition from being used by law enforcement or government entities. There. I said it. Facial recognition will continue on its gray-area trajectory with or without IBM. But what IBM, and specifically Arvind Krishna, has done is bring attention to a growing concern that needs far more national and global attention. The use of facial recognition needs to be scrutinized for bias and privacy concerns. It needs oversight. It needs guardrails. Usage, especially by law enforcement and governing entities, needs to be transparent. And frankly, the technology needs to be better for it to work the way people envision.
“IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” – Arvind Krishna
While facial recognition has come a long way over the last few years, bias is heavily present. Whether it be age, sex, race, or ethnicity, what benefits the detection of one “type” of person hurts another. There have been numerous examples of facial recognition struggling to properly identify darker complexions. One example highlights the ability of a photo algorithm to detect white males with a near-perfect degree of accuracy, while the same algorithm struggles to properly identify women of color, specifically failing to identify them as women. In a recent study, the National Institute of Standards and Technology (NIST) found that virtually all facial recognition algorithms showed signs of bias, especially in accurately identifying African American, Asian, and Native American faces. This is a problem. A big problem. And it’s not only racial bias. This example highlights blatant inaccuracy even within a single demographic, where members of Congress were falsely matched to mugshots (insert joke here).
So where do major technology vendors stand today on facial recognition? It’s a mixed bag. Some have scaled back or downright stopped offering the technology, some fall in the middle, while others pushed the limit and are paying the consequences. Google has supported a temporary ban on facial recognition services. Microsoft, while not supporting a ban, has prevented entities from using its technology over concerns about ethical usage. AWS, whose Rekognition service has provided facial recognition technology to law enforcement in the past, has implemented a one-year pause on allowing police to use the technology in hopes that Congress will make progress in that time on implementing appropriate rules. Axon, a body camera supplier to law enforcement, has repeatedly said it will not add facial recognition. Facebook is settling a class-action lawsuit over its unlawful use of facial recognition. And Clearview AI is currently in a world of hurt: its facial recognition tool, built on 3 billion images scraped from social media sites and used across the private sector and government entities, faces numerous lawsuits over alleged violations of privacy laws.
And then there is IBM. I truly commend Krishna’s actions, as I hope they spark a conversation that is needed now more than ever, but it should be noted that IBM’s initial approach was part of the problem. In the past few years, IBM has released open data sets to help train facial recognition models. The latest iteration came just last year, when it scraped Flickr, an online photo management and sharing application, for close to a million photos that were made available to researchers. As for law enforcement, I have not seen evidence that IBM facial recognition technology has ever been used by law enforcement to any degree.
Krishna’s letter to Congress highlights topics like accountability and responsibility in association with facial recognition when used by law enforcement. Aside from some of the entities that continue to offer or use this type of technology today, most people I’ve spoken with agree on the need for a national dialogue on the topic. And it should be noted that limiting the dialogue to facial recognition alone barely scratches the surface. While privacy, transparency, and responsibility are growing concerns with facial recognition usage, they need to be part of a larger dialogue about AI. And the top technology players need to be the leaders in this dialogue. They need to band together. They need to push a collective agenda. And they need to do it now.