
Algorithmic discrimination: A coming storm for security?

“If you don’t understand algorithmic discrimination, then you don’t understand discrimination in the 21st century.”

Bruce Schneier’s words, which came at the end of his wide-ranging session at RSA Conference last week, continued to echo in my ears long after I returned from San Francisco. Schneier, the well-known security expert, author and CTO of IBM Resilient, was discussing how technologists can become more involved in government policy, and he advocated for joint computer science-law programs in higher education.

“I think that’s very important. Right now, if you have a computer science-law degree, then you become a patent attorney,” he said. “Yes, it makes you a lot of money, but it would be great if you could work for the ACLU, the Southern Poverty Law Center and the NAACP.”

Those organizations, he argued, need technologists who understand algorithmic discrimination. And given some recent events, it’s hard to argue with Schneier’s point. But with all of the talk at RSA Conference this year, as in years past, about the value of machine learning and artificial intelligence, I wondered whether the security industry truly understands the dangers of bias and discrimination, and what kinds of problems will come to the surface if it doesn’t.

Inside the confines of the Moscone Center, algorithms were viewed with almost unalloyed optimism. Algorithms, we’re told, will save time and money for enterprises that can’t find enough skilled infosec professionals to fill their ranks.

But when you step outside the infosec sphere, it’s a different story. We’re told how algorithms, in fact, won’t save us from vicious conspiracy theories and misinformation, or hate speech and online harassment, or any number of other negative factors afflicting our digital lives.

If there are any reservations about machine learning and AI, they are generally limited to a few areas such as improper training of AI models or how those models are being used by threat actors to aid cyberattacks. But there’s another issue to consider: how algorithmic discrimination and bias could negatively impact these models.

This isn’t to say that algorithmic discrimination will necessarily afflict cybersecurity technology in a way that reveals racial or gender bias. But for an industry that so often misses the mark on the most dangerous vulnerabilities and persistent yet preventable threats, it’s hard to believe infosec’s own inherent biases won’t somehow be reflected in the machine learning and AI-based products that are now dominating the space.

Will these products prioritize certain risks over other, more pressing ones? Will algorithms be designed to favor certain types of data and threat intelligence at the expense of others, leading to data discrimination? It’s also not hard to imagine racial and ethnic bias creeping into security products whose algorithms show a predisposition toward certain languages and regions (Russian and Eastern Europe, for example). How long will it take for threat actors to pick up on those biases and exploit them?
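To make that concern concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is invented (the region names, the indicator terms, the threshold), and it is not drawn from any real product. It simply shows how a detector “trained” on threat intelligence that over-represents one region can flag that region’s threats reliably while missing comparable threats from an under-represented one.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical indicator vocabularies for two regions (all names invented).
REGION_A_TERMS = ["a-loader", "a-rat", "a-phish", "a-botnet"]
REGION_B_TERMS = ["b-stealer", "b-wiper", "b-phish", "b-miner"]

def make_samples(terms, n):
    """Generate n fake 'threat reports', each a small bag of indicator terms."""
    return [random.choices(terms, k=3) for _ in range(n)]

# Biased training set: 90% of the labeled threats come from region A.
training = make_samples(REGION_A_TERMS, 900) + make_samples(REGION_B_TERMS, 100)

# "Model": count how often each indicator appears in known-malicious training data.
term_counts = Counter(term for sample in training for term in sample)

def flags_as_threat(sample, threshold=100):
    """Flag a report only if it contains an indicator the model has seen often."""
    return any(term_counts[term] >= threshold for term in sample)

# Evaluate on fresh, equally sized test sets from both regions.
test_a = make_samples(REGION_A_TERMS, 500)
test_b = make_samples(REGION_B_TERMS, 500)

rate_a = sum(flags_as_threat(s) for s in test_a) / len(test_a)
rate_b = sum(flags_as_threat(s) for s in test_b) / len(test_b)

print(f"Detection rate on region A threats: {rate_a:.0%}")  # near 100%
print(f"Detection rate on region B threats: {rate_b:.0%}")  # far lower
```

Real products are vastly more sophisticated than a term counter, but the failure mode is the same: whatever the training data under-represents, the model tends to under-detect.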

It’s important to note that in many cases outside the infosec industry, the algorithmic havoc is wreaked not by masterful black hats and evil geniuses but by your average internet trolls and miscreants. They simply spent enough time studying how a platform like YouTube functions day to day, then flooded the system with content until they figured out how to weaponize search engine optimization.

If Google can’t construct algorithms to root out YouTube trolls and prevent harassers from abusing the site’s search and referral features, then why do we in the infosec industry believe that algorithms will be able to detect and resolve even the low-hanging fruit that afflicts so many organizations?

The question isn’t whether the algorithms will be flawed. These machine learning and AI systems are built by humans, and flaws come with the territory. The question is whether they will be biased, unintentionally or purposefully, and whether those biases will be fixed or reinforced as the systems learn and grow.

The world is full of examples of algorithms gone wrong or nefarious actors gaming systems to their advantage. It would be foolish to think infosec will somehow be exempt.