Could Artificial Intelligence Transform Healthcare Cybersecurity?
Using artificial intelligence tools in healthcare cybersecurity may help to streamline threat detection efforts, ensure compliance, and mitigate human error.
Artificial intelligence (AI) may be able to close gaps and mitigate risks in the healthcare cybersecurity space, a new report by The Economist Intelligence Unit sponsored by law firm Pillsbury Winthrop Shaw Pittman LLP suggested.
The report explored the promising possibilities and significant challenges that come with using artificial intelligence tools to bolster cybersecurity efforts. The market for AI in cybersecurity is projected to grow at a compound annual growth rate (CAGR) of 23.66 percent from 2020 to 2027.
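For context, a CAGR compounds multiplicatively each year. A minimal sketch of that arithmetic is below; the 23.66 percent rate comes from the report, while the base market value is a hypothetical placeholder, not a figure from the report.

```python
# Hypothetical illustration of how a 23.66% CAGR compounds from 2020 to 2027.
# The base market value below is a placeholder, not a figure from the report.
base_value = 10.0    # hypothetical 2020 market size, in billions of dollars
cagr = 0.2366        # 23.66 percent, per the report
years = 2027 - 2020  # seven compounding periods

projected = base_value * (1 + cagr) ** years
print(f"Projected 2027 value: {projected:.1f} billion")  # roughly 4.4x the base
```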
As data breaches increase across a variety of industries, businesses are desperately searching for tools that can protect sensitive data and prevent ransomware attacks.
“In integrating AI technologies with cybersecurity programs and systems, businesses across sectors have an invaluable opportunity to address one of the most complicated and potentially damaging risk factors organizations face today,” the report stated.
AI tools can work 24 hours a day, 7 days a week. That constant monitoring means they can process large quantities of data and detect threats quickly.
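As an illustration of the kind of always-on anomaly detection described here, the sketch below uses scikit-learn's IsolationForest to flag unusual network events. The feature values, anomaly rate, and thresholds are hypothetical, not drawn from the report or any vendor's system.

```python
# Minimal sketch of automated anomaly detection on network event data.
# Assumes scikit-learn; the traffic features and contamination rate are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic: [bytes transferred, requests per minute]
normal_events = rng.normal(loc=[500, 30], scale=[50, 5], size=(1000, 2))

# Train on historical traffic; contamination is the assumed share of anomalies.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)

# Score new events as they stream in; -1 flags a likely anomaly for review.
new_events = np.array([[510, 29],      # looks routine
                       [9000, 400]])   # unusually large transfer, high rate
print(detector.predict(new_events))    # e.g. [ 1 -1 ]
```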
“AI is the only way you can really solve those big complex data problems,” Johan Gerber, executive vice president of security & cyber innovation at MasterCard, explained in the report.
AI-driven programs can collect valuable data that can help cybersecurity experts garner new insights about cybercriminal behavior. However, hackers have access to AI as well.
“AI can therefore serve to strengthen and weaken security simultaneously, depending on whether it is used as an offensive or defensive tool,” the report continued.
Hackers may manipulate AI to orchestrate a cyberattack. In addition, the report suggested that AI may be advancing too fast, without proper rules and regulations to rein it in.
“There’s a shocking lack of industry best practices or regulations to ensure that those AI systems are actually reliable, robust, transparent and free of bias,” Jessica Newman, research fellow at UC Berkeley’s Center for Long-Term Cybersecurity, explained in the report.
“We are increasing the complexity of a good portion of the systems that we rely upon across industries, without adequate insight into how those AI systems are making decisions and whether they should be trusted.”
In many ways, AI is an inevitable and growing force in the digital world, one with the potential to change cybersecurity for the better. In other ways, its weaknesses may enable even more sophisticated cyberattacks and expose new security vulnerabilities.
There are also significant ethical considerations that businesses must take into account. In healthcare specifically, organizations must make sure that the technologies they use and the vendors they work with comply with HIPAA.
It is also crucial to recognize that while AI can do many things, humans are still essential to cybersecurity.
“AI has led to a fear over job redundancies generally, and the same logic may well apply to its role in cybersecurity as detection of anomalies can be automated. But humans still play a large role in the equation as they implement and supplement systems,” the report reasoned.
Some experts argued that AI implementation, education, rulemaking, and training will actually create more jobs.
Using AI-powered tools in cybersecurity is a risk, but one that might pay off in the long run. For critical infrastructure agencies and healthcare organizations, however, those risks may be too great.
“The current situation is pretty terrifying,” Newman continued.
“It is tempting to add machine learning tools into many business processes, and it is indeed becoming ubiquitous, but AI tools suffer from bias, vulnerability to attack, and a lack of explainability. Without proper governance and oversight, we are simply exposing industry, people, and the environment to significantly more risk.”