
Guest Post

How passwordless helps guard against AI-enhanced attacks

With all the potential of generative AI comes a major downfall: malicious actors using it in attacks. Shifting from password-based authentication can help solve the challenge.

Phishing emails used to be somewhat easy to spot. Poor grammar, misspelled words and odd spacing were telltale signs of a phishing message, helping employees either ignore it or report it to their internal IT team. But, in the age of generative AI, phishing scams -- already responsible for 90% of data breaches, per CISA -- are becoming even more convincing and humanlike.

The effectiveness of phishing hinges on believability, with more convincing attacks increasing the likelihood that victims divulge sensitive information. AI-enhanced phishing techniques have the potential to devastate enterprises. Phishing campaigns that are more targeted and include a personal element, such as a phone call, are more than three times as successful, with a click rate of more than 53%, according to AAG, an IT services provider.

A 2023 FIDO Alliance study found that 54% of global consumers have seen an increase in suspicious messages and scams, while 52% believe such messages have become more sophisticated.

Read on to learn why generative AI is a particularly attractive tool for cybercriminals seeking to infiltrate enterprises, how it is making phishing harder to beat and what tools security leaders should harness to combat attacks in the age of generative AI.

Enhanced phishing with generative AI

Motivated by a range of goals, cybercriminals infiltrate corporate systems to steal sensitive data, extort money, disrupt operations or gain competitive advantage. They might also aim to seize control of the systems to exploit them for their own financial gain, such as with ransomware attacks. Alternatively, their focus could be on exfiltrating sensitive internal data, such as customer and employee information, and then demanding a ransom from the company in exchange for its return, as in extortionware attacks.

Generative AI has provided an opportunity for cybercriminals to perfect the art of phishing. Cybercriminals use tools such as ChatGPT for nefarious purposes and have adapted iterations, including FraudGPT and WormGPT, specifically for cybercrime uses that go far beyond the technology's intended purpose. For example, these generative AI tools can help develop business email compromise attacks by creating highly convincing phishing emails and phishing websites. With generative AI, crafting compelling narratives that entice victims to click on malicious links is a breeze, requiring just a few simple prompts fed to the AI generator.

Interestingly, one use for generative AI should be of less concern for organizations in 2024. While AI's ability to create deepfakes -- manipulated video, photo or audio recordings -- and AI-powered malware has created some sensational headlines, these fraud techniques will account for relatively few data breaches in 2024 as they are incredibly difficult to execute at scale. Rather, the AI enhancement of existing phishing techniques poses a greater threat.

Social engineering is already the cause of most attacks. Seventy-four percent of breaches are caused by human error, privilege misuse, use of stolen credentials or social engineering, according to Verizon's "2023 Data Breach Investigations Report." Now, any fraudster, anywhere in the world, can generate word-perfect phishing attacks that are near-impossible to detect -- at a fraction of the effort of creating a deepfake -- thanks to generative AI.

Why generative AI phishing is so hard to beat

One of generative AI's unique capabilities is imitating human language. This means that, as phishing scams become better engineered and more believable, many companies' established protocols for training employees to identify social engineering attacks might no longer work. Employees who are trained to identify misspellings or poor grammar might fail to identify new kinds of phishing attacks.

Traditional forms of MFA, including those that rely on knowledge-based secrets and one-time passwords, are also no longer sufficient backstops to phishing attacks. Generative AI fuels phishing attacks that trick employees into surrendering credentials, enabling attackers to bypass legacy MFA.

How to guard against AI-enhanced attacks

In the face of evolving threats, what steps can CISOs and security professionals take to fortify their defenses?

Organizations should increasingly prioritize security initiatives that mitigate the risks associated with inevitable human error -- and these must come from the C-suite. Since the notion of eliminating human mistakes is unrealistic, the task of securing company data should not be placed exclusively on the shoulders of individual employees. Data is the lifeblood of our digital world. We need a revolution in security to ensure its continued flow and protection.

At the heart of these initiatives should be a commitment to transition away from passwords and other forms of knowledge-based credentials in favor of passkeys, which are a phishing-resistant and user-friendly alternative. Passkeys are digital credentials that are a true password replacement and represent a new paradigm for user authentication. Passkeys seamlessly integrate cryptography with on-device biometrics or PINs, enabling users to log in to systems with greater security and usability -- and without inadvertently exposing their credentials to scammers and hackers.
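To make the phishing-resistance property concrete, here is a minimal conceptual sketch of why an origin-bound credential defeats a look-alike domain. This is not a real passkey implementation: actual passkeys use asymmetric key pairs under the WebAuthn standard, while this sketch substitutes an HMAC secret to stay dependency-free, and the `Authenticator` class and domain names are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

class Authenticator:
    """Toy stand-in for a passkey keystore: one secret per relying-party origin.

    A real passkey generates a public/private key pair per site; HMAC is
    used here only to keep the sketch self-contained.
    """

    def __init__(self):
        self._keys = {}

    def register(self, origin: str) -> None:
        # Create a fresh credential scoped to exactly this origin.
        self._keys[origin] = secrets.token_bytes(32)

    def shared_secret(self, origin: str) -> bytes:
        # Exposed so the sketch's "server" can verify; a real server
        # would hold only the public key, never a shared secret.
        return self._keys[origin]

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The browser supplies the actual origin, so a user cannot be
        # tricked into signing for a look-alike domain: no credential
        # exists there, and there is no password to type or leak.
        if origin not in self._keys:
            raise KeyError(f"no credential registered for {origin}")
        return hmac.new(self._keys[origin], challenge, hashlib.sha256).digest()

def server_verify(auth: Authenticator, origin: str,
                  challenge: bytes, signature: bytes) -> bool:
    # The server checks the response against the challenge it issued.
    expected = hmac.new(auth.shared_secret(origin), challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

authenticator = Authenticator()
authenticator.register("https://example.com")

challenge = secrets.token_bytes(16)  # server-issued random challenge
signature = authenticator.sign("https://example.com", challenge)
assert server_verify(authenticator, "https://example.com", challenge, signature)

# A phishing site at a look-alike origin gets no signature at all,
# so there is nothing for the attacker to capture and replay.
try:
    authenticator.sign("https://examp1e.com", challenge)
except KeyError:
    print("no credential for phishing origin")
```

Because the credential is bound to the legitimate origin, the failure happens before any secret leaves the device; this is the property that makes passkeys resistant to even the most convincing AI-generated lure.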

Industry giants, including Google, Apple, Amazon and Microsoft, are embracing passkeys in their platforms, which are built on open standards from the FIDO Alliance and the World Wide Web Consortium's Web Authentication community. Google recently announced that passkeys are available to all its users as a way to move away from passwords and two-step verification, and Apple has done the same. Windows 10 and 11 have long supported device-bound passkeys in Windows Hello. Users can also sign in to sites in Chrome or Edge on Windows with passkeys from iOS or Android devices.

The sophisticated deception of AI-powered phishing attacks necessitates a multilayered approach to cybersecurity, safeguarding both enterprise and client data. Passkeys present companies with a readily available defense against phishing that can be used equally by employees and customers. Inaction only exacerbates existing vulnerabilities and paves the way for more insidious cyber assaults.

Andrew Shikiar is executive director and chief marketing officer at FIDO Alliance, a global consortium working to create open standards and an interoperable ecosystem for simpler, stronger user authentication. He has deep experience in multistakeholder organizations, having previously led market development efforts for Tizen Association, LiMo Foundation and Liberty Alliance Project. He also helped structure and launch groups such as the Smart TV Alliance and Open Visual Communications Consortia.
