The future of facial recognition after the Clearview AI data breach

The company that controversially scrapes data from social media sites for law enforcement clients announced a data breach. What does it mean for the future of facial recognition?

Isn't it ironic? Clearview AI, which has been surreptitiously collecting photos, names and other personal information from billions of social media users, recently had its client data stolen.

In all likelihood, it won't be long before people in cities across the globe find that their local law enforcement agencies have been testing and, in some cases, using Clearview AI to identify wanted suspects via images and videos from public cameras and the internet. If the backlash is anything like that in New Jersey or London, it will bring public awareness -- and mistrust -- of facial recognition to a whole new level. But what effect will this breach have? There are a number of things to consider when weighing the overall ramifications for the future of facial recognition.

Is this the beginning of the end for facial recognition?

The short answer is no. While the Clearview AI breach might delay new facial recognition deployments -- and those of related technologies, such as body recognition or gait recognition -- by law enforcement, it won't slow adoption for defense and intelligence use cases, and it certainly won't delay adoption in more authoritarian countries. Further, the technology is too useful and convenient for adoption to be deterred across the spectrum of facial recognition use cases. For evidence, look at the rapid rise in the number of companies using facial recognition for badging into buildings, the popularity of facial recognition capabilities in mobile devices and the prevalence of "pay with your face" systems in China. And, if there is one thing Facebook has shown, it's that customers are willing to forgo privacy for convenience.

While we can expect knee-jerk reactions that try to bar law enforcement from using facial recognition, most of this legislation will prove ineffective because it won't be able to distinguish new facial recognition technologies from earlier systems police departments have been using for decades. Plus, it won't have any effect on private sector use of facial recognition over the long term.

Will companies stop investing in facial recognition?

Many well-established companies will avoid investing in facial recognition, but that is the wrong call. It is far better for companies such as Google, Microsoft, AWS and IBM -- which have the capabilities to do facial recognition well and the reputational risk to ensure it is executed as ethically as possible -- to offer the technology than for it to be left to companies, like Clearview AI, that operate in the dark until a scandal brings them to public attention.

AI ethics advisory panels -- like that of Axon, formerly Taser, which recommended the company not develop facial recognition models for its body cameras -- are effectively encouraging unethical behavior. Since anyone using body cameras will likely also seek facial recognition capabilities, such panels push customers toward vendors with less reputation at stake and fewer scruples overall.

Can facial recognition be done ethically?

Facial recognition can be implemented in ways that are both ethical and privacy-protecting. Unfortunately, in this instance, Clearview AI did neither. It scraped social media sites to create a database of names and faces -- and, presumably, other personal information -- and offered this as a mechanism to help identify any individual from a photo. Not only is this unethical and fundamentally vulnerable from a data protection point of view, but it is also inaccurate: Matching a probe photo against billions of identities drives up the rate of false positives, because even a tiny per-comparison error rate is multiplied across an enormous gallery.
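
To see why scale matters, consider a rough, illustrative calculation. Assuming independent comparisons and a fixed per-comparison false match rate (the one-in-a-million figure below is an assumption for illustration, not a measured rate for any particular system), the probability that a single search returns at least one false match climbs toward certainty as the gallery grows:

```python
# Illustrative only: how the chance of at least one false match grows
# with gallery size, assuming independent comparisons and a fixed
# per-comparison false match rate (fmr). Real systems are more complex.

def p_false_match(fmr: float, gallery_size: int) -> float:
    """Probability of at least one false match when searching one
    probe face against a gallery of `gallery_size` identities."""
    return 1.0 - (1.0 - fmr) ** gallery_size

fmr = 1e-6  # hypothetical: one false match per million comparisons

for n in (10, 1_000, 1_000_000, 3_000_000_000):
    print(f"gallery of {n:>13,}: P(false match) = {p_false_match(fmr, n):.6f}")
```

At that hypothetical error rate, a gallery of a few thousand faces rarely produces a false match, while a billions-scale gallery makes one all but guaranteed on every search -- which is why the small-watchlist approach described next behaves so differently.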

A better, safer approach -- and one that can be more effectively regulated -- is to use facial recognition models that recognize similar faces and to have each individual law enforcement agency maintain a small watchlist of only the relevant names and faces. This limits the data that can be lost in a potential breach and makes it possible to oversee what data is collected and how it is used.
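
As a sketch of what that watchlist approach might look like in code -- where the similarity threshold is a hypothetical placeholder and the embeddings are assumed to come from some vetted face-embedding model, neither of which the article specifies:

```python
import numpy as np

# A minimal sketch of watchlist-based matching, not a production system.
# Each agency keeps a small, auditable list of (name, embedding) pairs
# instead of a scraped database of billions of identities.

SIMILARITY_THRESHOLD = 0.75  # hypothetical cutoff; must be calibrated per model

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class Watchlist:
    """A small, per-agency list of persons of interest."""

    def __init__(self) -> None:
        self.entries: list[tuple[str, np.ndarray]] = []

    def add(self, name: str, embedding: np.ndarray) -> None:
        # Adding an entry is an explicit, reviewable act, so oversight
        # bodies can audit exactly whose data is held and why.
        self.entries.append((name, embedding))

    def match(self, probe: np.ndarray) -> str | None:
        """Return the best-matching name above the threshold, else None."""
        best_name, best_score = None, SIMILARITY_THRESHOLD
        for name, embedding in self.entries:
            score = cosine_similarity(probe, embedding)
            if score > best_score:
                best_name, best_score = name, score
        return best_name
```

Because the gallery is tiny, the false-match math from the earlier sketch works in the agency's favor, and auditing or deleting entries is straightforward.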

It's how you use facial recognition that matters

The Clearview AI data breach, at most, delays the inevitable: We will increasingly be identified and tracked using facial recognition, whether at home, at work or in public locations ranging from streets to stores. The future of facial recognition comes with several benefits -- crime prevention, health monitoring and a new generation of smart devices, smart homes and smart workplaces -- but it also comes with the potential for abuse on an extraordinary scale. Authoritarian regimes are already using these technologies to repress segments of their populations, and we should look to prevent such abuses through regulation.

However, it is important to note that facial recognition itself is neither ethical nor unethical and is, in some respects, open to less abuse than technologies like smartphones and social media. The best approach is to regulate what it is used for and to require that everything from data collection to commercialization strategy is aligned with reasonably ethical outcomes. Regulating specific data and methods -- like facial recognition -- is a fool's errand: It does little to deter unethical actors, who can easily work around such constraints, while scaring away the more morally upstanding players who could help define what ethical use looks like.

Unfortunately, if regulatory attempts by cities like San Francisco and Somerville, Mass., are anything to go by, we are headed for the worst of both worlds.

Kjell Carlsson

Kjell Carlsson is a senior analyst at Forrester covering data science, AI and advanced analytics. Carlsson has more than 15 years of experience in driving strategic insights from data, across roles in strategy research, management consulting, big data, healthcare and cloud-based services. Most recently, he led the development of speech analytics, predictive analytics, network analysis and AI applications to drive sales performance at Athenahealth. Prior to this, he was a management consultant using data analysis for strategic recommendations at EMC, focusing on its data storage, disaster recovery and backup, and security businesses, and at Wilson Perumal & Co., focusing on the financial industry. Prior to his doctoral work, Carlsson conducted research on strategy, national competitiveness and healthcare for Michael Porter at the Institute for Strategy and Competitiveness.

Carlsson holds a Ph.D. in business economics from Harvard Business School, where he specialized in strategy, organizational economics and international business. As part of his dissertation, he developed a game theory model with Oliver Hart, the 2016 Nobel laureate in economics, and conducted the first large-sample analysis of global supply chain data to demonstrate the interplay of formal and relational contracting methods on global supply chains during the 2006-2008 financial crisis. His research is published in Advances in Strategic Management and Advances in International Management. He also holds an M.A. in economics from Harvard University and a B.A. in economics and computer science from Columbia University.
