How to mitigate SSD vulnerabilities
SSDs could pose a security risk for organizations that aren't careful when they decommission drives. Learn what you need to know about keeping data stored on SSDs safe.
A Carnegie Mellon University report revealed some SSD vulnerabilities. Industry experts' reactions to the report have been mixed, but it's not the only canary in the coal mine. SSDs -- especially decommissioned ones -- could hide other vulnerabilities, leaving IT teams to deal with any potential risks.
Until something better comes along, SSDs are here to stay, especially as prices drop and densities grow. The performance gains are simply too great to pass up. Fortunately, there have been relatively few risks up to this point, at least when compared to other types of vulnerabilities. Enterprise SSD vendors also have a great deal at stake in ensuring that their products can be trusted.
Even so, IT teams are the ones ultimately responsible for protecting an organization's sensitive data. They must ensure that the SSDs they've already implemented or plan to implement can meet their security requirements now and in the foreseeable future. They must also properly remove all data from those SSDs when it comes time to decommission them.
As with any technology, perhaps the greatest SSD vulnerabilities are the unknown risks yet to come -- all the more reason to stay diligent about protecting SSDs and their data.
NAND vulnerabilities affect SSD security
The February 2017 Carnegie Mellon University report, "Vulnerabilities in MLC NAND Flash Memory Programming," is the result of a joint research effort with Seagate Technology and the Swiss Federal Institute of Technology in Zurich, Switzerland. The report describes vulnerabilities in multi-level cell (MLC) NAND flash drives that could allow a malicious application to corrupt and change data that belongs to other programs and shorten the SSD's expected lifespan.
One way the application can exploit vulnerabilities is to induce significant amounts of program interference onto a flash page that belongs to another application, thus corrupting that page. To carry this out, the application writes data in a specific pattern -- what the report calls a "worst-case pattern" -- which introduces errors into neighboring flash cells.
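To make the pattern concrete, here is a minimal Python sketch of that worst-case write behavior. It is illustrative only: the paper's actual attack manipulates raw MLC flash pages beneath the flash translation layer, which ordinary file I/O can't reach, and the file path and bit pattern shown here are hypothetical.

```python
import os

# Illustrative sketch only. A real attack operates on raw flash pages
# below the flash translation layer; normal file writes cannot target
# specific cells. This just shows the *shape* of the access pattern:
# rewriting the same region many times with an adversarial bit pattern.
WORST_CASE = bytes([0b10101010, 0b01010101]) * 2048  # 4 KiB of alternating cells

def hammer_writes(path: str, iterations: int = 100_000) -> None:
    """Repeatedly rewrite one region with a stress pattern."""
    with open(path, "wb", buffering=0) as f:
        for _ in range(iterations):
            f.seek(0)
            f.write(WORST_CASE)
            os.fsync(f.fileno())  # push each write past the page cache

hammer_writes("/tmp/stress.bin")  # hypothetical target path
```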
A malicious application can also induce read disturbances onto flash pages belonging to other applications. In this scenario, the application performs a large number of read operations in a short timeframe, inducing disturbances that can corrupt partially written pages and pages not yet written.
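The read-disturb pattern looks similar in sketch form -- a large number of reads against the same region in a short window. As before, this only mirrors the access pattern; in practice, the operating system's page cache would absorb repeated reads, so a real attack would need direct access to the underlying flash.

```python
import os

def hammer_reads(path: str, iterations: int = 1_000_000, length: int = 4096) -> None:
    """Issue many rapid reads of the same offset (illustrative only).

    Note: the OS page cache would normally serve these reads from memory;
    reaching the drive itself would require O_DIRECT or raw device access.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        for _ in range(iterations):
            os.pread(fd, length, 0)  # reread the same region repeatedly
    finally:
        os.close(fd)

hammer_reads("/tmp/stress.bin")  # hypothetical target path
```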
The technology behind these exploits is far more complex than described here, but the idea is clear. There are SSD vulnerabilities in MLC flash drives, and they likely also exist in triple-level cell (TLC) drives, given that they share similar programming processes (although the Carnegie Mellon paper does not address TLC drives specifically). On the other hand, it appears that these particular exploits do not work against 3D NAND drives as they're currently implemented.
The Carnegie Mellon paper also offers several suggestions for mitigating these vulnerabilities, such as buffering data in the controller, accounting for threshold voltage changes and customizing the pass-through voltage for nonprogrammed and partially programmed flash cells. These mitigations are up to the SSD vendors to implement.
Risks are everywhere
Many in the industry question whether these are serious concerns, arguing that the most current SSD technologies already address these issues and that modern flash controllers and encryption techniques help to mitigate them. Others point out that the problem is not with the hardware itself, but with the supporting software, such as faulty storage algorithms or outdated firmware.
Still, IT teams must not become complacent about SSD security. For example, they might store sensitive data on MLC flash drives based on outdated technologies, or they might run SSDs that contain dynamic RAM chips susceptible to Rowhammer, an attack that repeatedly accesses specific memory rows to flip bits in adjacent rows and breach memory isolation.
All this points to the importance of IT teams exercising due diligence to ensure that their drives are safe and the supporting systems are current. A good place to start is with the firmware. Whether they're implementing new drives or maintaining existing ones, administrators should ensure that the firmware is always up to date. Many SSD vendors provide specific information and tools for updating the firmware, which can be a tricky process, although it's well worth the effort.
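As a first step, administrators can check the firmware revision a drive reports and compare it against the vendor's latest release. The following sketch assumes the open source smartmontools package is installed; the device path is hypothetical, the command may require root privileges, and output labels can vary by drive.

```python
import subprocess

def firmware_version(device: str) -> str | None:
    """Return the firmware revision reported by smartctl, if present.

    Assumes smartmontools is installed; may need to run as root."""
    out = subprocess.run(
        ["smartctl", "-i", device],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("Firmware Version:"):
            return line.split(":", 1)[1].strip()
    return None  # label not found; format differs by vendor

print(firmware_version("/dev/sda"))  # compare against the vendor's latest release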
IT teams should also know how their drives have been designed and manufactured, especially if they store highly sensitive or classified data. For example, the team should determine whether a drive was assembled in a country that poses a security risk and whether the vendor can back up its security claims about the drive and its firmware. Subtle modifications introduced during manufacturing can increase SSD vulnerabilities, so organizations should thoroughly understand how a drive has been manufactured and programmed before making a purchase.
But purchasing and implementing SSDs are only part of the equation. In many cases, the greater security risks come from what a company does with its decommissioned drives.
Many organizations send their used drives to IT asset disposition vendors to erase the data or outsource the task to an IT security consultant, yet few of these organizations verify the third party's ability to permanently remove the data.
Some of them might do nothing more than reformat the disks or apply erasure techniques designed for HDDs, approaches that fail to remove all sensitive data from SSDs. Because of wear leveling and overprovisioning, an SSD remaps writes across spare flash cells, so simple overwrites never reach every cell that might still hold data. IT teams that fail to do due diligence when decommissioning their SSDs can put their entire organization at risk.
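For drives that support it, the safer path is the drive's built-in secure-erase command rather than a reformat. The sketch below assumes an NVMe drive and the open source nvme-cli tool; SATA drives would use hdparm's ATA Secure Erase instead, and in either case the result should be verified with the vendor's own tools before disposal.

```python
import subprocess

def nvme_secure_erase(device: str) -> None:
    """Issue an NVMe Format command with Secure Erase Settings = 1
    (user-data erase). DESTRUCTIVE: this wipes the entire namespace.

    Assumes nvme-cli is installed and the target is an NVMe drive;
    always confirm the erase before the drive leaves your control."""
    subprocess.run(
        ["nvme", "format", device, "--ses=1"],
        check=True,
    )

# nvme_secure_erase("/dev/nvme0n1")  # hypothetical device path; run deliberately
```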