Corvus: Cyber insurance premiums see 'stabilization'
Corvus Insurance's Peter Hedberg provided insight into the cyber insurance landscape after a tumultuous 2023 and what enterprises can expect moving forward.
Following years of rising cyber insurance premiums, enterprises may finally start to see prices level out.
Throughout 2023 and into 2024, cyber insurers such as Corvus Insurance and Coalition reported surges in ransomware attacks, as well as spikes in claims related to certain products such as ConnectWise's ScreenConnect and Cisco's Adaptive Security Appliance. A rise in threat activity prompted CISA to relaunch its Cybersecurity Insurance and Data Analysis Group in November.
TechTarget Editorial spoke with Peter Hedberg, vice president of cyber underwriting at Corvus Insurance, about insurance trends, the most targeted sectors and recent attacks that will likely influence future underwriting processes. Despite an ongoing increase in attacks since 2023, Hedberg said policy requirements such as MFA are becoming more lenient for certain industries.
Editor's note: This interview was edited for clarity and length.
Have you observed any trends around premiums this year?
Peter Hedberg: They're certainly not going up the way they were in 2021 and 2022. I would categorize it as stabilization. There is some competition; I think that's put a little pressure on them. Where I'm seeing the more dramatic changes is probably in underwriting. Carriers are changing the level of hygiene they require and maybe loosening up on that side. We won't return to the way it was before, but I think the new bar for a lot of [customer] classes is going to be MFA, zero trust network access and better backups.
There are a couple of segments where things have gotten more competitive, and part of that is because there's a perception that there won't be as many claims. Unfortunately, what I'm finding out is no one is immune to cyberattacks. The places where the claims are showing up are changing a little, too. They're not just ransomware. It's third-party biometrics. Some classes that we originally thought were pretty safe now have biometric exposure that we weren't expecting, like manufacturers that often use a thumbprint to key in and out.
Can you expand on the perception that there wouldn't be an increase in claims?
Hedberg: Things quieted down [when Russia invaded Ukraine] but now they're back and hacking again. What our intelligence community told us was that there was some level of reprieve, and now they're back and vicious -- 2023 was a big year for attacks. The difference is [threat actors'] success rate is not as high this year, and it improved a little bit last year, as well. A lot of that has to do with posture and hygiene. We haven't paid as much in ransoms, but there's still a lot of fire.
Activity is high, even though there are fewer ransom payments. The challenge with that is activity still costs us money. The good news is the bad guys don't make as much, but we're certainly still paying to restore businesses and that sort of thing.
Can you provide examples of how insurers are loosening up policy requirements?
Hedberg: Certain segments are seeing loosening. For endpoint detection and response (EDR), insurers used to say, 'I want to see it on everything.' Now they're saying, 'I want to see it on these classes.' And then for MFA, they've shifted to 'I want to see it on everything, but maybe I don't need to see it on privileged users for these particular classes.' There's more nuance. But, if somebody comes to me from healthcare and says 'I don't have MFA for remote access,' then that's not a business I want to quote right now.
What led insurers to lower requirements?
Hedberg: I think some of that is the benefit of perspective and being able to look through all of our claims and saying to ourselves, 'In this particular class, MFA wouldn't have changed anything, anyway.' Then you look at other classes, and lawyers are a great example. We used to regard lawyers as, 'Hey, they're not a huge target, and they're law firms, they should know and be aware of the risks.' No, they're target No. 1. They're target No. 1 because they have heavily subordinated cultures, so it's easy to exploit ignorance with lower-level employees while impersonating higher-level employees.
They have a legally established duty of confidentiality, so they have to pay a ransom. They have to prove they paid for a deletion [of stolen data]. We didn't think of them as a ripe target, but they are. Manufacturing didn't used to be a ripe target either, but then hackers figured out manufacturers have a deep time sensitivity in what they do, so if you interrupt that, it's a lot easier to extract a ransom payment. Then, all of a sudden, manufacturing went from a very low hazard class to a medium and a high almost overnight.
We have so much more data that is meaningful and relevant now. That's why I think you're seeing some equilibrium and some sobriety out there.
Is it dangerous that policy requirements are decreasing? Are you surprised by the MFA leniency since recent attacks, such as one against Snowflake database customers last month, leveraged enterprises' lack of MFA protection?
Hedberg: I don't think [policy leniencies] are going to lead to fewer claims. I also think insurance is a reactive product. It's reactively priced. It's hard to be actuarially prescient with this sort of product.
MFA in and of itself is going to be an issue. AI is going to render some multifactor authentication useless. Two years ago, we thought it was a panacea, but not so much now.
Speaking of AI, will that affect underwriting and policies?
Hedberg: It's not a hurricane yet, but we see it, and we know it's coming. I had this assumption as an underwriter a year ago, when ChatGPT hit the mainstream and showed a lot of competence in how it was answering questions. I assumed it was going to be a hellfire. It's not. It has not been yet. That's not to say it hasn't been used or implemented. What makes it so complicated is that you use AI to detect and deal with [adversarial] AI. Depending on what the bad guys do with AI, that's going to influence what we do with AI and how to counter it. The rules of the game haven't changed; the tools have a little bit.
A lot of standard professions are using AI to shortcut manual labor. Attorneys will take several cases and ask ChatGPT to summarize them. There's nothing in a professional liability marketplace that says you can't do that, or that we won't cover you if you use AI -- we aren't there yet. But there have been instances where lawyers have filed legal documents to a courtroom and the judge has been like, 'None of these case references are real.' ChatGPT hallucinated. There's a challenge with that.
Arielle Waldman is a news writer for TechTarget Editorial covering enterprise security.