
Biometric technology like facial recognition is here to stay

Despite resistance, government agencies and private companies continue to use the technology. A key issue is whether people are informed about how their information will be used.

Recent opposition to the IRS' use of facial recognition technology, and the agency's subsequent decision to drop it, raise questions about whether facial recognition systems will ever be fully accepted in the United States.

Activists and privacy experts often raise the question of bias in the algorithms behind biometric technologies such as facial recognition. But like it or not, the technology is prevalent in both the private and government sectors.

Carey O'Connor Kolaja is the CEO of AU10TIX, an identity verification vendor based in Israel; she works out of New York City. The vendor uses different forms of identification, including document verification and biometric authentication, to help customers such as Google, Uber and PayPal prevent fraud.

In this Q&A, Kolaja discusses what went wrong with the IRS' implementation of facial recognition technology, trends in biometric authentication and ways to prevent bias in the AI and machine learning algorithms that underlie these technologies.

Is the U.S. the only country resistant to facial recognition technology?


Carey O'Connor Kolaja: I don't think the U.S. is the only country where citizens have resisted sharing their personal information with the government or even with the private sector.

I hesitate to say biometrics because, if you look at the stats right now, particularly in the U.S., something like 80% of the data breaches that have happened are reportedly due to passwords being compromised. That is what's given rise to biometrics as a way to verify oneself. [Up to] 85% of global consumers have used biometrics to verify they are who they are. And around the globe, it's even more profound than that. And so, when we look in the context of citizens not being comfortable, the stats would show otherwise.

The bigger question that's kind of underlying all this is … who are people comfortable or not comfortable using it with? That's when you start to ask yourself what the private sector is doing differently than the public sector.

Whether it's a facial signature, or it's a fingerprint signature, it's really about our data. It's about, how do you create access to things fairly while keeping security and privacy in mind, but also giving choice and control?


That is one of the areas where I think things have gone wrong with the U.S. government in its desire to adopt new technologies -- which I agree is the right thing to do -- and to ensure safety. The implementation, I think, is where it fell short, because we shouldn't have to force somebody to make a choice between giving a biometric signature versus [not] getting unemployment or filing their taxes. There should be other ways for people to verify themselves if they choose not to share a piece of biometric information.

Will education about facial recognition and biometric data make the public more comfortable?

O'Connor Kolaja: There's an entire area of publication and content missing around identity literacy, how you educate people, and who is responsible for educating people on what it means to keep your private information safe.

There's no ifs, ands or buts about it; we all share information about ourselves every time we get online, and in the physical world when you're swiping a card or now, when you're sharing your vaccination card to get into a restaurant in New York.

The bigger discussion we should be having is about what the responsibility is of people in the private and the public sector for disclosing and sharing with an end data subject, and how that information is being used.

There was a letter from some Democratic members of Congress about this IRS issue. I was really impressed with some of the questions they raised in it: What type of oversight does the government agency have once this information is shared? What happens to the data? Where is it stored? How can someone delete it if they want to?

If we educate consumers and citizens about [their data], then they can make that choice about what and how they share.

What other trends are you seeing surrounding facial recognition and biometrics?

O'Connor Kolaja: The notion of needing a lot of data to verify that somebody is who they say they are with a high level of assurance is being called into question. And the challenge to those of us in the industry, and more broadly, is: How can you obtain the least amount of information to get the highest level of assurance and really minimize the amount of PII [personally identifiable information] that is shared?

The second thing that we're starting to see is that one type of verification tends not to be enough.

When one of those is compromised, what's next? And so, layering on different verification techniques that are contextual to what it is you're trying to do, I also believe, is a big movement.
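
Purely as an illustration of that layering idea, here is a minimal Python sketch of risk-based, contextual step-up verification. The check names, risk signals and thresholds are hypothetical assumptions for the example, not a description of AU10TIX's or any vendor's actual product:

```python
# Hypothetical sketch of layered, context-aware verification.
# The check names, actions and thresholds are illustrative only.

def required_checks(action: str, risk_score: float) -> list[str]:
    """Pick verification layers based on what the user is doing
    and how risky the context looks (new device, odd location, etc.)."""
    checks = ["password"]                # baseline factor
    if risk_score > 0.3:
        checks.append("one_time_code")   # add a second factor on elevated risk
    if action in {"file_taxes", "claim_benefits"} or risk_score > 0.7:
        checks.append("document_check")  # strongest layer for sensitive actions
    return checks

print(required_checks("login", risk_score=0.1))
# ['password']
print(required_checks("file_taxes", risk_score=0.5))
# ['password', 'one_time_code', 'document_check']
```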

The third major trend that we're seeing is that tokenization is going to be the way of the future. Since the pandemic, when you've had [an] increase in fraud, and … people connect online 2,800 times a day or something like that … there is a need for us to move more to what they call verifiable credentials. These allow an individual to access a tax filing, PayPal account or Airbnb account without sharing personal data, while ensuring a high degree of assurance that that person is who they say they are.

What a verifiable credential is … a token that proves that you know something or you have something, and it can be issued by anyone and verified by anyone, but your PII is not shared. In a world where we want to live more safely and securely, that is critical.
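
As a rough illustration of that issue-anywhere, verify-anywhere property, here is a minimal Python sketch using the third-party `cryptography` package: an issuer signs a claim that carries no PII, and any verifier holding the issuer's public key can check it. This is a simplified stand-in for real standards such as the W3C Verifiable Credentials data model, not a production design:

```python
# Minimal verifiable-credential-style token: a signed claim with no PII.
# Simplified illustration only; real systems follow standards such as
# W3C Verifiable Credentials. Requires: pip install cryptography
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer (e.g., a government agency) signs a claim about the holder.
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"over_18": True, "identity_verified": True}).encode()
signature = issuer_key.sign(claim)          # the token = claim + signature

# A verifier checks the token with only the issuer's public key --
# no name, photo or other PII changes hands.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, claim)     # raises if tampered with
    print("credential accepted:", json.loads(claim))
except InvalidSignature:
    print("credential rejected")
```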

And then I guess there's a fourth one, which is really about control and choice. GDPR [General Data Protection Regulation] was a catalyst for this; California's CCPA [California Consumer Privacy Act] was as well. As an individual, if I want to revoke access to information that I've shared with a brand or merchant, I can do that. I do believe we're going to see more and more of that. While the rights are there, the knowledge and education are not, and the process is not as easy as it probably needs to be.

Is there a way to ensure that the algorithms and AI behind facial recognition can be fair?

O'Connor Kolaja: There are ways in which you can ensure that the algorithms and artificial intelligence models are unbiased.

A model is always initially trained by people and by the data sets that are tagged and fed into it. In that scenario, those who tag the data sets and train the model should themselves be diverse.

In addition to that, there are mechanisms in place where you can test to ensure that the training data is diverse and there aren't coded biases.

The other way is to put governance and controls in place. For example, when we build our models, and when we modify our models, we always run a pre-test and a post-test before releasing, against a set of data that we know is unbiased, to ensure that the efficacy of the result stays intact.
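
A minimal sketch of what such a pre-/post-release check might look like, assuming a labeled benchmark set that is known to be balanced across groups; the group labels, threshold and `predict` interface are hypothetical, not AU10TIX's actual pipeline:

```python
# Hypothetical pre-/post-release fairness gate: compare a model's
# accuracy across demographic groups on a known-balanced benchmark.
from collections import defaultdict

def accuracy_by_group(predict, benchmark):
    """benchmark: iterable of (features, true_label, group) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for features, label, group in benchmark:
        totals[group] += 1
        hits[group] += int(predict(features) == label)
    return {g: hits[g] / totals[g] for g in totals}

def passes_parity(predict, benchmark, max_gap=0.02):
    """Fail the release if any two groups' accuracy differs by more than max_gap."""
    scores = accuracy_by_group(predict, benchmark)
    return max(scores.values()) - min(scores.values()) <= max_gap

# Run before and after each model change; block the release on regression:
# assert passes_parity(new_model.predict, benchmark), "parity regression"
```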

Technology's not perfect. Human beings aren't perfect, but there are steps that can be taken to ensure that these models are not making the wrong decisions.

Editor's note: This interview has been edited for clarity and conciseness.
